The Columbia River Basin is North America’s fourth largest, draining about 258,000 square miles and extending predominantly through the states of Washington, Oregon, Idaho, and Montana and into Canada. There are over 250 reservoirs and about 150 hydroelectric projects in the basin, including 18 mainstem dams on the Columbia and its primary tributary, the Snake River. One of the most prominent features of the Columbia River Basin has been its production of salmon. Specifically, the basin provides habitat for five species of anadromous salmon: chinook, coho, chum, sockeye, and pink. Salmon spawn in freshwater rivers and their tributaries. Juvenile salmon live in the fresh water for a year or two, migrate to and mature in the ocean, and return in 2 to 5 years to their place of birth as adults to spawn. About 150 years ago, the Columbia River Basin returned the largest adult runs of wild salmon on earth, with annual populations estimated at up to 16 million salmon. Since that time, however, total annual salmon returns have declined, to only about 2.5 million in 1996. It is estimated that only about 500,000 of these returning adults are wild or naturally spawned fish; the remainder are hatchery-raised fish intended to supplement the declining wild runs. A number of factors have contributed to the decline of salmon stocks in the Columbia and Snake rivers. These include overharvesting in the late 1800s and early 1900s, as well as the adverse effects on spawning habitat from farming, cattle grazing, mining, logging, road construction, and industrial pollution. A variety of ocean conditions, including currents, pollution, temperature changes, and the nutrient base, also affects the survival of salmon. In addition, dams have contributed significantly to the decline of salmon stocks, particularly those dams that limit access to spawning habitat and those that provide fish passage but at reduced levels in comparison with natural conditions. 
However, most of the decline in wild salmon stocks—from the estimated 16 million in the mid-1800s to about 4 million in 1938—occurred before the first federal dam was completed in the Columbia River Basin in 1938. The Federal Columbia River Power System (the Columbia power system) includes all federally owned hydroelectric dams in the Columbia River Basin that are operated and maintained by the U.S. Army Corps of Engineers and the Department of the Interior’s Bureau of Reclamation. These include 21 Corps dams and 8 Bureau dams. The Bonneville Power Administration (Bonneville Power) is responsible for transmitting and marketing the hydroelectric power generated by this system. Of the 21 dams operated and maintained by the Corps, eight are major, multipurpose dams located on the lower Columbia and Snake rivers that affect the habitat and migration of salmon. These are Bonneville, The Dalles, John Day, and McNary on the lower Columbia and Ice Harbor, Lower Monumental, Little Goose, and Lower Granite on the Snake. These dams are a major source of hydroelectric power in the region and also provide flood control, navigation, recreation, irrigation, municipal and industrial water supply, and fish and wildlife benefits. However, the dams impede the migration of juvenile and adult fish to and from the ocean by their physical presence and by creating reservoirs. Reservoirs formed behind the dams slow water velocities, alter water temperatures, and improve the habitat of predators. The Corps has adult fish ladders at all eight of its dams on the lower Columbia and Snake rivers. Adult fish ladders were integrated into the design of the dams beginning with Bonneville in 1938. These ladders consist of a series of steps and water pools that provide a gradual upward climb over the dams for returning adults. To steer the adults to the ladders, “attraction” flows at the downstream ladder entrances simulate conditions that would be found at the base of natural waterfalls. 
The concept has proved effective for adult fish passage. Generally, juvenile fish can migrate downstream past the dams by several routes, including through the dams’ turbines, through the dams’ juvenile fish bypass systems, or over the dams’ spillways. The Corps has juvenile fish bypass systems in place at seven of its eight dams. At The Dalles Dam, juvenile fish are bypassed through the dam’s ice and trash sluiceway—a waterway used to pass ice and trash around the dam. While each alternative passage has associated risks and contributes to fish mortality, passage through the bypass system or over the spillway has a lower mortality rate than through the turbines. Many juvenile fish are also collected and transported past the dams by barge and truck under the Corps’ juvenile fish transportation program. The conventional juvenile fish bypass systems at the Corps’ dams guide fish away from turbines by means of submerged screens positioned in front of the turbines. The juvenile fish are directed up into a gatewell, where they pass through orifices into collection channels that transport the fish around the dam. The fish are then routed back out to the river below the dam, which is called “bypassing”; at the four dams with fish transport facilities, fish can be routed to a holding area for loading on to specially equipped barges and trucks for transport downriver to below the Bonneville Dam—the last dam on the lower Columbia River before the Pacific Ocean. Three of the Corps’ four Snake River dams and the McNary Dam on the Columbia River have fish transportation facilities. The percentage of fish approaching a turbine intake that are guided by submerged screens into facilities that bypass the turbine is called fish guidance efficiency. This percentage varies from dam to dam and by type of fish. 
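Fish guidance efficiency, as defined above, is a simple ratio of the fish guided into the bypass to the fish approaching the turbine intake. As a minimal sketch (the function name and the counts below are illustrative assumptions, not Corps data):

```python
def fish_guidance_efficiency(guided: int, approaching: int) -> float:
    """Percentage of fish approaching a turbine intake that are guided
    by submerged screens into facilities that bypass the turbine."""
    if approaching == 0:
        raise ValueError("no fish approached the intake")
    return 100.0 * guided / approaching

# Hypothetical counts for illustration only: of 1,000 spring/summer
# chinook approaching an intake, 650 are guided into the bypass channel.
fge = fish_guidance_efficiency(guided=650, approaching=1000)
print(f"{fge:.0f}%")  # 65%
```

As the report notes, the same screens at the same dam would yield a much lower percentage for fall chinook, because the ratio depends on fish size, swimming depth, and water conditions.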
For example, according to the Corps, the current bypass systems for juvenile fish guide 60 to 70 percent of spring/summer chinook salmon away from the turbines and up through the bypass channel. However, the fish guidance efficiency for fall chinook salmon is only about 30 percent because they are smaller, swim deeper in the river, and migrate in different water conditions. Dams equipped with extended-length screens can guide up to 66 percent of fall chinook into bypass systems. Hydropower operations can be modified to improve in-river migration conditions for fish. During the juvenile fish migration season, from April until fall, water can be spilled at the dams and flows in the river can be augmented to aid juvenile fish migration. One operational measure designed to improve salmon passage at the Corps’ dams is to spill water and juvenile fish over the dams’ spillways, rather than putting the water through the powerhouses’ turbines to produce electricity. However, spill has associated risks because when the water plunges into the spillway basins, it traps gases, such as nitrogen. Water that is supersaturated with nitrogen can be lethal to both adult and juvenile fish. Spillway deflectors have been installed at seven of the Corps’ eight dams to limit the plunge depth of spilled water, thereby reducing the amount of supersaturated gases. Another operational method of improving in-river fish migration is flow augmentation. Upstream storage dams hold water for flood control and other uses, interrupting the river’s historical seasonal flow patterns. Seasonal releases of water from upstream storage dams, called flow augmentation, can aid salmon migration. The Corps operates two upstream storage dams in the Columbia River Basin, Dworshak Dam in Idaho and Libby Dam in Montana, from which water is released to aid juvenile fish as they migrate downriver. 
Since 1949, federal and state agencies and regional organizations responsible for efforts to enhance salmon have invested over $3 billion in actions to improve salmon runs throughout the Columbia River Basin. Despite the studies and actions taken to improve fish passage, salmon runs in the Columbia River Basin have continued to decline: returning adult populations totaled about 4 million in 1938, 3 million in 1980, and 2.5 million in 1996. Over the past several decades, various federal and state agencies, the courts, and other entities have shaped the development and management of salmon policy in the Columbia River Basin. During the early period of the construction of the Corps’ eight dams on the lower Columbia and Snake rivers, the state fisheries agencies, universities, and the U.S. Bureau of Fisheries (later called the U.S. Fish and Wildlife Service) conducted most fisheries research in the Columbia River Basin. In the early 1950s, the Corps’ North Pacific Division (currently the Northwestern Division) organized the Fisheries Engineering Research Program, which, in coordination with the directors of federal and state fisheries agencies, reviewed research and discussed additional concerns and research opportunities to improve fish passage. Most early studies focused on adult migrants. By the late 1950s, the program’s attention was drawn to studying the survival of juvenile fish and their diversion away from turbine intakes. In 1968, the Corps funded an experiment by the Department of Commerce’s National Marine Fisheries Service (NMFS) at Ice Harbor Dam, using trucks to transport juvenile salmon around the four completed lower Columbia River dams. Encouraging results led to the installation of juvenile fish bypass systems that enable fish collection and transportation at some of the Corps’ dams. 
The development of screens to divert juvenile fish from the turbine intakes began in 1969, and further research provided the basis for the modification of river flows and dam operations beginning in the 1980s. By the mid-1980s, the Corps developed its Columbia River Fish Mitigation Project to reduce the mortality of juvenile salmon. This project is part of the Corps’ larger Columbia River Salmon Program that includes river operations, fish passage operations and maintenance, fish transportation, research, hatchery operations funded through the Corps’ operations and maintenance appropriation, and fish passage improvements. The Corps’ Columbia River Fish Mitigation Project includes individual actions related to the design and construction of improvements to fish passage facilities as well as studies that support the Columbia power system’s long-term decisions on the system’s configuration and operation. Today, the Corps refers to these fish mitigation actions collectively as the Columbia River Fish Mitigation Project. However, for purposes of this report, we refer to the Corps’ Columbia River Fish Mitigation Project as a program and individual fish mitigation actions as projects or studies. In 1980, the Congress passed the Pacific Northwest Electric Power Planning and Conservation Act, now called the Northwest Power Act, which envisioned salmon as an equal partner with hydropower operations for dams in the Columbia River Basin. The act called for elevating energy and fish planning to a regional level by establishing greater involvement of state and local governments, Native American tribes, and the public in power planning through an interstate Pacific Northwest Electric Power and Conservation Planning Council—now called the Northwest Power Planning Council. The Council, which consists of two members from each state appointed by the governors of Washington, Oregon, Idaho, and Montana, was formed in 1981. 
The act directed the Council to ensure an adequate long-term supply of power for the Pacific Northwest and to develop a basinwide comprehensive Fish and Wildlife Program to rebuild resources that had been harmed by hydroelectric development. While the act gave the Council the authority to plan, the primary implementers of, and funding source for, the Fish and Wildlife Program are federal agencies. Under the act, federal agencies that manage, operate, or regulate hydroelectric facilities in the Columbia River Basin are required to take the program into account “. . . at each relevant stage of decisionmaking processes to the fullest extent practicable.” These obligations are intended to help integrate federal agencies’ fish mitigation actions with a regionally supported fish and wildlife program. In 1982, the Council completed its first Fish and Wildlife Program. From 1982 through 1994, the program was amended several times, calling for an integrated approach to fish restoration efforts, designating “protected areas” for fish and wildlife, adopting a mainstem-dam spill agreement, and concentrating on improving the survival of juvenile salmon migrating downstream. Other key entities in salmon recovery efforts in the Pacific Northwest are the Native American tribes. Tensions between Native Americans and other users of the Columbia River Basin have existed since before the 19th century. In the mid-1800s, the federal government negotiated treaties with the Native Americans in the Columbia River Basin, which granted the Indians the right to take fish at all the usual and accustomed fishing grounds and stations in common with all citizens of the Territory. 
Although relations improved in the 1980s, today the Native American tribes, with support from some other regional interests, generally argue that salmon recovery can be accomplished most efficiently by returning the Columbia and Snake rivers to “natural” flow conditions and that interim expenditures to evaluate other potential remedies are unnecessary and costly. Specifically, the tribes call for removing a portion of (breaching) the Corps’ four dams on the Snake River and support releases of water to increase river flows to aid salmon migration. The tribes also support the use of hatcheries to rebuild salmon runs. The tribes are opposed, however, to the Corps’ programs that transport juvenile fish past the dams; transportation of fish, some tribes argue, is unnatural. In March 1990, a regional Native American tribe, the Shoshone-Bannock, petitioned NMFS to list the Snake River sockeye salmon as endangered under the Endangered Species Act. Later in 1990, a coalition of environmental groups requested protection for the spring/summer and fall runs of the Snake River chinook salmon and the lower Columbia River coho salmon. In 1991, NMFS listed the Snake River sockeye salmon as endangered under the Endangered Species Act. In 1992, NMFS listed the spring/summer and fall runs of the Snake River chinook salmon as threatened. These Endangered Species Act listings required the Corps, Bonneville Power, and the Bureau of Reclamation to consult with NMFS to determine whether river flow improvements and planned fish mitigation measures associated with the operation of the Federal Columbia River Power System would further jeopardize the existence of the listed species. Under the Biological Opinion, the Columbia power system encompasses those dams and reservoirs owned and operated as a coordinated system for the purpose of power production by the three action agencies (the Corps, Bonneville Power, and the Bureau of Reclamation) on behalf of the federal government. 
For purposes of the Biological Opinion, these dams and reservoirs are Dworshak, Lower Granite, Little Goose, Lower Monumental, and Ice Harbor in the Snake River Basin; Hungry Horse, Libby, and Grand Coulee on the upper Columbia River; and McNary, John Day, The Dalles, and Bonneville on the lower Columbia River. The Biological Opinion takes into account the operation of these dams both as a unified hydropower system and as individual projects. For example, flow augmentation, the survival of juvenile and adult salmon, and total dissolved gas issues can involve the hydropower system as a whole or individual dams in any given case. Previous Biological Opinions issued by NMFS in 1992, 1993, and 1994 (the 1994 Opinion addressed the operations of the hydropower system through 1998) stated that the proposed operations of the Columbia power system during those years would not jeopardize the continued existence of Snake River salmon. NMFS’s 1993 Biological Opinion finding of “no jeopardy” was challenged in U.S. District Court by the Idaho Department of Fish and Game, the State of Oregon, and four Native American tribes. On March 28, 1994, the court ruled that NMFS’s 1993 Biological Opinion was inadequate because it relied too much on the status quo for improving listed stocks of salmon that continued to dwindle in numbers. The 1993 Biological Opinion dealt with the operation of the Federal Columbia River Power System in 1993, operations that had already been completed by the time of the court’s decision. Thus, the court permitted NMFS, the Corps, and the Bureau of Reclamation to address the court’s concerns by reconsidering the March 16, 1994, Biological Opinion. In accordance with the court’s decision, on March 2, 1995, NMFS issued a Biological Opinion on the operation of the Columbia power system for 1995 and future years. 
The 1995 Biological Opinion concluded that the proposed operation of the hydropower system, which included planned fish mitigation actions, was likely to jeopardize the continued existence of the listed Snake River salmon protected under the Endangered Species Act. NMFS recommended a “reasonable and prudent” alternative that included immediate, intermediate, and long-term actions concerning the operation and configuration of the Columbia power system to avoid jeopardizing the protected salmon. Subsequently, the Corps issued a Record of Decision that stated its intention to carry out the reasonable and prudent alternative contained in the 1995 Biological Opinion. The Corps’ Columbia River Fish Mitigation program was initiated in the mid-1980s to focus efforts on finding ways to improve fish passage at the Corps’ eight dams on the lower Columbia and Snake rivers. The program has evolved into a regionally coordinated direction for the Corps’ actions in the furtherance of both regional and NMFS fish mitigation efforts. The fish mitigation program is the largest construction program in the history of the Corps’ Northwestern Division. The Corps’ current estimates place the cost to complete the program by the end of fiscal year 2007 at $1.4 billion. The fish passage structural improvements done under the fish mitigation program are considered civil works projects and, as such, would normally follow the Corps’ standard procedures for project management. The life cycle of a civil works project passes through two distinct phases—general investigations and construction. The general investigation phase of a project is intended to review and evaluate alternatives to a project and to prepare the National Environmental Policy Act documentation needed for a project to proceed to construction. The general investigation phase of a major federal project can cost millions of dollars and take years to complete. 
The construction phase of a project incorporates the traditional engineer-construction activities. There are three primary elements: the feature design memorandum, plans and specifications, and construction. The feature design memorandum evaluates the project’s individual elements, describes the detailed design alternatives, and identifies the selected design for incorporation into the total design package. Plans and specifications are the engineering drawings, calculations, standard documents, and engineers’ estimates, which, when assembled, are the documents used by the construction contractor to build the project. Finally, construction of a Corps project usually involves many specialty subcontractors managed by a general contractor who is responsible for the construction of the overall project. Generally, the Corps’ fish mitigation projects on the Columbia River have been multiyear projects. Concerned about how well the U.S. Army Corps of Engineers was implementing its Columbia River Fish Mitigation program at its dams on the lower Columbia and Snake rivers in the Pacific Northwest, Senators Max S. Baucus, Patty Murray, and Harry M. Reid asked that we provide information on (1) the Corps’ decision-making process for identifying, setting priorities for, and funding fish mitigation actions and (2) whether the Corps has completed its fish mitigation actions on schedule and within budget. In addition, we were asked to determine why the Corps had not entered into a direct funding agreement with the Bonneville Power Administration for certain costs of operating and maintaining the Corps’ dams in the Columbia River Basin. During the course of our audit, the Corps did complete such an agreement. Appendix I of this report provides information on how the direct funding agreement will work. 
To provide information on the Corps of Engineers’ decision-making process for identifying, setting priorities for, and funding fish mitigation actions, we interviewed and obtained documents and data from officials at the Corps’ Northwestern Division and District offices in Portland, Oregon, and Walla Walla, Washington; National Marine Fisheries Service officials in Portland, Oregon; and additional Regional Forum members, such as the Columbia River Inter-tribal Fish Commission and staff of the Northwest Power Planning Council. We reviewed the Memorandum of Agreement between the Department of the Army, the Department of Commerce, the Department of Energy, and the Department of the Interior concerning funding of fish mitigation actions and the Regional Forum’s procedures and minutes of meetings. We also reviewed a June 13, 1997, report prepared by Science Applications International Corporation and HDR Engineering, Inc., for the Department of the Army, Seattle District, Corps of Engineers, entitled Independent Review and Evaluation of Processes Utilized to Implement Structural Improvements at Columbia and Snake Rivers Fish Passage Projects. To determine whether the Corps of Engineers completed its fish mitigation actions on schedule and within budget, we initially relied on officials at the Corps’ Northwestern Division in Portland, Oregon, and its Portland and Walla Walla District offices to identify fish mitigation actions that were delayed and/or had incurred cost increases as of October 31, 1997. To determine the actual length of any delay and the amount of any cost increase, we reviewed individual project and study contracts, contract modifications, and reports and interviewed project managers, program managers, and Corps construction personnel to obtain planned completion dates and cost estimates. We then compared the planned completion dates and cost estimates to the scheduled completion dates and cost estimates as of October 31, 1997. 
We also reviewed NMFS’ March 1995 Biological Opinion, attended meetings of the Regional Forum, and reviewed the minutes and documentation of various Regional Forum meetings discussing fish mitigation implementation actions. The Corps officials at the Northwestern Division and District offices identified 58 fish mitigation actions as of October 31, 1997. Of these 58 actions, Corps officials identified 19 projects and studies that experienced delays, cost increases, or both. To determine why these actions had encountered delays and/or cost increases, we reviewed documentation, including feature design memorandums, construction contracts, contract modifications, correspondence between the Corps and its contractors, funding and priority schedules, and other relevant reports. To obtain additional information on the reasons for cost increases and/or delays and to determine the impacts of the delays and/or cost increases on fish mitigation actions, we discussed the status of each activity with Corps personnel, such as project managers, contract and construction personnel, and fisheries biologists. To determine how the Corps’ recent direct funding agreement with the Bonneville Power Administration for the power costs of operating and maintaining the Corps’ dams will work, we interviewed and obtained documents from officials at the Bonneville Power Administration in Portland, Oregon; the Corps of Engineers headquarters in Washington, D.C.; and the Corps’ Northwestern Division and District office in Portland, Oregon. We reviewed the Corps’ current budget process, operations and maintenance budget needs, and prior direct funding agreements with Bonneville Power. We also reviewed Bonneville Power’s funding requirements for reimbursing the Corps for power-related operations and maintenance costs. 
Finally, we interviewed officials of the Northwest Power Planning Council in Portland, Oregon, and Bureau of Reclamation officials in Boise, Idaho, for their views on direct funding for power-related operations and maintenance costs. We performed our audit work from July 1997 through March 1998 in accordance with generally accepted government auditing standards. GAO provided the Department of the Army with a draft of this report for its review and comment. The U.S. Army Corps of Engineers, in commenting for the Department, stated that it agreed with the statements contained in the draft report and had no comments. (See app. II.) Since 1995, the Corps’ efforts to mitigate the decline of salmon stocks on the lower Columbia and Snake rivers have been guided by NMFS’s 1995 Biological Opinion. Many of the monitoring, evaluation, research, design, and construction projects and studies identified in the Biological Opinion are included in the Corps’ Columbia River Fish Mitigation program. The Corps’ decision-making process for selecting, setting priorities for, and funding specific fish mitigation projects and studies is a cooperative effort between the Corps and regional interests and is known as the Regional Forum process. The Regional Forum is a group with broad regional representation, including federal agencies, states, and Native American tribes from the Columbia River Basin. The Forum, which includes the Corps, tries to reach consensus among its members in making decisions on fish mitigation actions. However, if consensus cannot be reached, the Corps, as the action agency responsible for implementing its fish mitigation program, makes the decisions. Annually, the Corps, with input from the Regional Forum, estimates the costs of its fish mitigation actions and requests funding for their implementation as part of its normal budget process. 
If the Congress appropriates less money than the Corps requests, the Corps seeks recommendations from the Regional Forum to help the Corps make its decisions on which projects and studies should be funded, at what levels, and in which years. In March of 1995, NMFS issued its Biological Opinion on the operation of the Federal Columbia River Power System proposed by the Corps, Bonneville Power, and the Bureau of Reclamation for 1995 and future years. The Biological Opinion concluded that the proposed operation, which included planned mitigation activities, was likely to jeopardize the continued existence of the three species of Snake River salmon protected under the Endangered Species Act. Pursuant to the act’s requirements, the Biological Opinion recommended a “reasonable and prudent” alternative to the proposed hydropower system’s operation. NMFS concluded that implementing the reasonable and prudent alternative would not jeopardize the survival of the listed salmon. The reasonable and prudent alternative includes time frames for completing certain fish mitigation projects and studies and identifies the Corps as one of three action agencies responsible for implementing the fish mitigation activities identified in the Biological Opinion. Bonneville Power and the Bureau of Reclamation are the other action agencies. In response to the Biological Opinion, in March 1995, the Corps issued its Record of Decision for Reservoir Regulation and Project Operation, 1995 and Future Years. In the Record of Decision, the Corps stated its intention to carry out the requirements of the Biological Opinion. The Corps carries out many of the measures it is responsible for under the Biological Opinion through its Columbia River Fish Mitigation program. While the Corps has been conducting salmon mitigation efforts under its fish mitigation program since the mid-1980s, currently, the primary focus of the program is the implementation of the actions specified in the Biological Opinion. 
Some operational measures called for in the Biological Opinion, such as river flow augmentation, spill, and juvenile fish transportation, are implemented by the Corps, but not as part of the Columbia River Fish Mitigation program. The fish mitigation program includes projects related to the design and construction of fish passage facilities, as well as studies that support long-term configuration and operational decisions for the hydropower system. The Biological Opinion identifies immediate, intermediate, and long-term actions designed to improve the operation and configuration of the hydropower system for the benefit of salmon. It employs an approach that calls for taking immediate and intermediate actions to increase salmon survival while conducting other activities to determine the benefits of, need for, and feasibility of long-term structural modifications to the hydropower system. In keeping with this strategy, the Biological Opinion required the Corps to take a variety of actions. Some of these consist of designing and constructing facilities to improve salmon passage at the Corps’ dams. Other actions are operational in nature, such as augmenting river flows to aid the migration of juvenile salmon. Finally, some actions consist of conducting studies and collecting the information needed for decisions on the hydropower system’s long-term configuration. It should be noted that the Biological Opinion is a mitigation plan whose required actions are designed to avoid jeopardizing the continued existence of listed species. Although the required actions will generally benefit many anadromous fish in the Columbia River Basin, the Biological Opinion is not a salmon recovery plan. A recovery plan has a goal of returning the listed species to a point where protection under the Endangered Species Act is no longer necessary. 
The immediate and intermediate actions required of the Corps include the following:

- Augmenting Columbia and Snake river flows to help juvenile salmon migrate downstream, which requires releasing water from upstream storage reservoirs during the spring and summer juvenile salmon migration.
- Spilling river flows at the Corps’ dams rather than passing them through hydropower turbines, where juvenile salmon experience higher mortality rates.
- Collecting juvenile salmon at certain of the Corps’ dams and transporting them downstream by barge or truck, past the remaining dams, where they are released back into the Columbia River.
- Evaluating the feasibility, costs, and benefits of drawing down certain reservoirs behind the Corps’ dams to levels significantly below the normal operating range.
- Designing and testing surface collection facilities at certain dams, a relatively new technology that may more efficiently and effectively bypass juvenile salmon at the dams.
- Conducting studies and making facility improvements that will achieve an 80-percent fish passage efficiency (the percentage of fish that pass dams without going through turbines) and an overall 95-percent passage survival rate at each dam.
- Developing a gas abatement program, including appropriate structural modifications, to reduce gas supersaturation.
- Prototype testing and installing extended-length screens to direct juvenile salmon away from turbines.
- Planning and implementing improvements to the juvenile bypass facility at Lower Granite Dam on the Snake River.
- Designing and constructing facilities at John Day and Bonneville dams to improve sampling and monitoring of juvenile salmon as they migrate past these dams.
- Relocating the outfall structure from which juvenile salmon exit the bypass facility at Bonneville Dam to reduce mortality caused by predator fish.
- Designing and installing a juvenile bypass system at The Dalles Dam.
- Determining the appropriate number and size of additional transportation barges to provide direct loading of juvenile salmon, a measure designed to avoid the stress associated with keeping juvenile salmon in holding areas until barges are available.

In addition to these immediate and intermediate actions, the Biological Opinion also called for decisions on the long-term operation and configuration of the hydroelectric power system. For example, the Corps is currently studying three alternatives for the long-term operation of its four dams on the lower Snake River. Two of these alternatives would require major system configuration changes. The alternatives under consideration are (1) maintaining current structures and operations as prescribed in the Biological Opinion, including juvenile fish transportation and improvements to existing bypass facilities; (2) permanently drawing down the reservoirs behind the four dams to natural river levels by removing a section of each dam; and (3) making major system improvements other than drawdown, such as constructing new surface bypass facilities, structural measures to reduce gas supersaturation, and improvements to turbines to reduce salmon mortality. The Biological Opinion provides for the Corps to make a recommendation in 1999 on which of the alternatives is preferred. The Corps is also considering long-term options for fish passage at dams on the lower Columbia River. These options include installing surface bypass collection facilities at the Corps’ dams and drawing down the reservoir behind John Day Dam to the level of the spillway or to the natural river level. These decisions are not part of the 1999 scheduled recommendation for the operation of the lower Snake River dams. The Corps’ decision-making process for selecting, setting priorities for, and funding specific fish mitigation projects and studies is a cooperative effort between the Corps and the Regional Forum. 
In 1995, NMFS, noting the disjointed nature of previous efforts to help the salmon recover, stated that institutional, jurisdictional, state, and federal boundaries make timely fisheries management decisions difficult and that the differing objectives of each organization lead to conflicts in interpretation, lengthy arguments, and decision paralysis. Regional salmon recovery experts recognized that an organization was needed to efficiently manage the salmon recovery program throughout the Columbia power system, and given its role for listed salmon stocks under the Endangered Species Act, NMFS led this regional effort. As a result, the Corps, NMFS, and the U.S. Fish and Wildlife Service adopted a joint policy that provided for participation by appropriate regional agencies and affected interests in the review and implementation of fish mitigation actions. Historically, the Corps has coordinated its research, design, and construction activities related to improving fish passage at its dams with regional interests. The Corps reiterated its commitment to a cooperative regional approach in its Record of Decision issued in response to NMFS’ 1995 Biological Opinion and in a Memorandum of Agreement among the Department of the Army, the Department of Commerce, the Department of Energy, and the Department of the Interior. The agreement set forth Bonneville Power’s responsibilities for funding fish and wildlife actions and reinforced the roles and responsibilities of regional interests in setting priorities and budgeting for these actions. The Corps’ and other federal agencies’ (NMFS, Bonneville Power, Reclamation, and the Fish and Wildlife Service) commitment to a cooperative regional approach in the federally led salmon recovery efforts has evolved into the Regional Forum.
The Regional Forum develops policy guidelines, sets priorities for selecting and funding projects, and reviews project proposals for the salmon mitigation efforts in the Columbia River Basin related to the operation and configuration of the Federal Columbia River Power System. Membership in the Regional Forum is open to five federal agencies, including the Corps, five states, the Northwest Power Planning Council, Columbia River Basin Native American tribes, a private utility, and public utilities. The Regional Forum tries to reach a 100-percent consensus among its members in making decisions concerning fish mitigation actions. However, if consensus cannot be reached, the Corps makes the decisions on actions contained in its fish mitigation program. Details on the Regional Forum’s membership, goals, and organizational structure are provided in appendix III of this report. The Corps coordinates its fish mitigation actions through the Regional Forum. Specifically, the Corps’ Walla Walla and Portland District offices are responsible for implementing the Columbia River Fish Mitigation program. These offices develop the proposals, including the scope, costs, and schedules, for the projects in the fish mitigation program. They do this by initially making proposals to the technical committees that provide support to the Regional Forum. For example, the Fish Facilities Design Review Work Group reviews proposals for fish passage projects. The District offices can propose projects and suggest changes in funding levels at any time during the year. Other members of the Regional Forum are also free to propose projects; however, this is not very common. After the proposals have been discussed and reviewed by the technical committees, they are evaluated by the Regional Forum’s System Configuration Team. 
The configuration team is a technical group responsible for planning and overseeing the fish passage structural improvements and related studies called for in the Biological Opinion. During the spring of each year, the configuration team begins discussing and refining a list of projects to be undertaken in the fiscal year beginning in about 18 months. After the configuration team completes its review and develops its recommendations on which projects and studies to fund, the appropriate Corps district offices make formal cost estimates for the actions and provide them to the Corps’ Northwestern Division as part of the district’s overall operating budget. The division then compiles the budgets from each district and packages them into a division budget request that is submitted to Corps headquarters by the end of June. This is the basis for the fish mitigation program actions and budget request for the fiscal year beginning in about 15 months. The Corps’ Columbia River Fish Mitigation program is funded by annual appropriations from the Congress. Specifically, funding for the fish mitigation program is provided through the Corps’ “construction, general” appropriation. The Corps receives additional funding for the operations and maintenance of fish passage facilities and for the transportation of juvenile salmon through the Corps’ “operations and maintenance, general” appropriation. For fiscal year 1998, the Corps requested $127 million for its fish mitigation program but received an appropriation of $95 million. Also, the Corps received an additional $14 million in fiscal year 1998 to fund operations and maintenance of its fish passage facilities and juvenile fish transportation operations. The Corps has estimated that the funding required to implement the fish mitigation program through the end of fiscal year 2007 will total about $1.4 billion. 
About $908 million of this total will be spent in fiscal year 1999 through the scheduled completion of the program in fiscal year 2007. The $908 million is for future construction of fish passage projects and related studies and does not include operations and maintenance costs for fish passage facilities. Since fish mitigation projects typically span more than one fiscal year, the Corps must seek funding for many projects during multiple appropriation cycles. Consequently, ongoing projects may be affected if the Corps receives a fish mitigation appropriation that is less than its budget request. In these cases, the Corps seeks recommendations from the Regional Forum to help the Corps make its decisions about which projects are funded, and at what level, for the year. Although the Corps initially receives funding for its fish mitigation activities through the congressional appropriation process, the Bonneville Power Administration is responsible for reimbursing the U.S. Treasury for the majority of these expenditures. Specifically, Bonneville Power repays the Treasury for the Corps’ fish mitigation expenditures at its dams in proportion to the hydropower share of each dam’s purposes, which also include navigation, irrigation, and flood control. While the hydropower share varies by dam, it averages about 80 percent. Bonneville Power collects the revenues necessary to repay these costs through its electricity rate structure. 
Concerns about Bonneville Power’s ability to continue funding rising fish and wildlife costs, including those associated with the Corps’ fish mitigation actions, led the agencies responsible for operating the Columbia power system (the Corps under the Department of the Army, Bonneville Power under the Department of Energy, and the Bureau of Reclamation under the Department of the Interior), as well as NMFS and the Fish and Wildlife Service, to negotiate a Memorandum of Agreement that limits Bonneville Power’s fish and wildlife funding responsibilities each year. This limit is independent of the amount the Corps will receive through annual congressional appropriations. According to Corps officials, the agency has yet to receive an appropriation that is as high as the amount established as Bonneville Power’s maximum contribution under the Memorandum of Agreement. Specifically, the agreement states that Bonneville Power will provide an average of $252 million annually for direct, reimbursable, and capital fish- and wildlife-related costs during fiscal years 1996-2001. The agreement allocates the $252 million as follows:

- $100 million for noncapital fish and wildlife program activities that Bonneville funds directly, such as research, predator control, hatcheries, and habitat restoration. These activities are called for in NMFS’ 1995 Biological Opinion and the Northwest Power Planning Council’s Fish and Wildlife Program.
- About $40 million for reimbursement payments to the Treasury for the operations and maintenance of fish passage and hatchery facilities and other noncapital expenditures.
- $112 million for capital investment repayments to the Treasury for such projects as constructing fish passage facilities at federal dams, including the Corps’ dams, and hatcheries.

During these fiscal years, Bonneville Power also estimates forgone annual hydropower revenues of approximately $183 million that are associated with providing water for flow augmentation and spill.
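The annual dollar figures cited for the Memorandum of Agreement fit together by simple addition. The sketch below is illustrative only (the variable names are ours, not the agreement’s) and simply reconciles the allocation totals:

```python
# Illustrative arithmetic only: reconciling the Memorandum of Agreement
# funding figures, in millions of dollars per year, averaged over
# fiscal years 1996-2001.

direct_program = 100     # noncapital activities Bonneville funds directly
om_reimbursement = 40    # "about $40 million" in Treasury O&M reimbursements
capital_repayment = 112  # capital investment repayments to the Treasury

moa_commitment = direct_program + om_reimbursement + capital_repayment
print(moa_commitment)    # 252, Bonneville Power's average annual commitment

forgone_revenue = 183    # estimated forgone hydropower revenues (flow augmentation and spill)
print(moa_commitment + forgone_revenue)  # 435, total average annual fish and wildlife cost
```

The sum of the three allocations equals the $252 million cap, and adding the estimated forgone revenues yields the $435 million total average annual cost discussed below.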
As such, under the agreement, Bonneville Power will provide an average of $435 million annually for fish- and wildlife-related costs during fiscal years 1996-2001. The agreement also recognized the United States’ trust obligation to Columbia River Basin Native American tribes and committed the federal signatory agencies to consult and cooperate with the tribes when planning and conducting fish and wildlife mitigation actions. It also recognized the Northwest Power Planning Council’s Fish and Wildlife Program and required the parties to discuss planned mitigation actions with the Council in an attempt to reach a common viewpoint.

As of October 31, 1997, the Corps’ Columbia River Fish Mitigation program consisted of 58 actions, including those required by NMFS’ 1995 Biological Opinion. While the majority of the Corps’ fish mitigation actions have been or are expected to be completed on schedule and within budget, the Corps has encountered difficulties implementing many of its fish mitigation actions. About 40 percent of the 47 fish mitigation actions the Corps has initiated, including most of its larger projects, have experienced delays, cost increases, or both. A variety of factors, mostly in combination, have contributed to the Corps’ problems. Some of these factors, such as high water flows and floods, had an adverse effect on completing projects. In other cases, delays and cost increases have resulted from decisions by the Regional Forum that changed fish mitigation priorities. These changes were often necessitated by such factors as funding limitations, the need for additional biological data, or the desire to test new technology. While the Corps coordinates its fish mitigation actions with the Regional Forum, the overall effectiveness of the Forum has been questioned because, among other things, members do not agree on how to pursue salmon recovery efforts and do not uniformly support the actions required by the Biological Opinion.
Differing goals are not conducive to implementing actions, especially when consensus is sought to make decisions. In addition, other difficulties, such as problems with engineering designs, were the result of the Corps’ bypassing standard procedures for project management in an effort to implement required actions in the time frames established by the Biological Opinion. In some cases, the problems the Corps has experienced in implementing its fish mitigation actions have had significant impacts. These include delaying the collection of data needed to make future decisions on salmon recovery, continued high fish mortality rates, the loss of power generation and related potential revenue, and increased operations and maintenance costs. The 1995 Biological Opinion identified various actions the Corps must implement to improve fish passage at its eight dams on the lower Columbia and Snake rivers. The Corps expanded its existing fish mitigation program to include these requirements. As of October 31, 1997, the fish mitigation program consisted of 58 fish mitigation actions that included 29 studies and 29 projects. The Corps’ evaluation and monitoring studies are designed to give the region better biological information and insights related to fish passage and survival at hydropower dams. Specific studies address, among other things, the effectiveness of fish guidance devices and surface collection prototypes and the feasibility of abating dissolved gas supersaturation. The 29 projects include such actions as designing and constructing extended-length submerged screens in front of turbine intakes to increase fish guidance efficiency, constructing additional barges for the juvenile fish transport program, constructing spillway flow deflectors to reduce gas supersaturation, and constructing new outfalls to reduce predation of juvenile fish at bypass system discharge points. (See app.
IV of this report for a list of the Corps’ fish mitigation projects and studies and their status as of Oct. 31, 1997.) As of October 31, 1997, the Corps had started 47 of the 58 fish mitigation actions contained in its fish mitigation program. The remaining 11 actions had not yet been scheduled to start. The majority of the 47 actions have been, or are expected to be, completed on time and within budget. However, the Corps identified 19 actions (8 studies and 11 projects), or about 40 percent of the total actions the Corps has initiated, that were delayed, had encountered cost increases, or both. These actions include most of the Corps’ larger fish mitigation projects as measured by estimated costs to complete. As of October 31, 1997, 18 of the 19 fish mitigation actions had been delayed. The delays ranged from 3 weeks in starting a study on the effectiveness of a prototype surface bypass and collection system at the Lower Granite Dam to an indefinite delay for installing a juvenile fish bypass system at The Dalles Dam. In addition to delays, 9 of the Corps’ 19 fish mitigation actions experienced cost increases (8 of the 9 actions incurred both delays and cost increases). As of October 31, 1997, cost increases on the 9 actions averaged over $2 million, ranging from $280,000 for the installation of extended-length submerged bar screens at Little Goose Dam to over $7 million for the design and construction of a new juvenile fish sampling and monitoring facility at John Day Dam. A variety of factors has contributed to delays and cost increases in 19 of the Corps’ fish mitigation actions. Some of these factors, such as changes in fish mitigation priorities, problems encountered in attempting to streamline project management, and the effects of adverse weather on project implementation, were identified as the reasons for delays and cost increases in more than one study or project.
Other factors, such as problems with contractors’ performance, a contract bid protest, and revisions to project scope, were identified as reasons only in individual actions. In most actions, a combination of these factors was the reason for the Corps’ inability to complete fish mitigation actions on time and within budget. For at least four projects and three studies, the revision of fish mitigation priorities by the Regional Forum resulted in delays, cost increases, or both. Most of these actions involved changes in project priorities necessitated by funding limitations, the need for additional biological information, or the desire to test new technology. An example of the Regional Forum’s changing project priorities because of funding limitations occurred at the Corps’ Bonneville Dam, located on the lower Columbia River. The Biological Opinion specified that improvements to the existing juvenile fish bypass system at the dam’s second powerhouse should be completed by the spring of 1999. Survival studies conducted by the Corps in the late 1980s showed high juvenile fish mortality rates in the existing bypass system as well as downstream at the location of the system’s juvenile fish transportation release site. Improvements to be made to the existing bypass system included (1) a variety of measures to reduce juvenile fish delay and mortality in the fish collection channel; (2) relocation of the transportation flume to an area approximately two miles downstream from the second powerhouse, where the habitat is less conducive to predators; and (3) construction of a monitoring facility near the relocated transportation flume outfall so that juvenile fish using the bypass system can be sampled and evaluated to gain information on the Columbia River system’s fish survival rate. According to Corps officials, completion of the juvenile fish monitoring facility will be delayed 1 year because of a shortage of funds.
The Regional Forum reviewed the funding shortage and decided that the Corps should relocate the transportation flume and make improvements to the juvenile fish collection channel by March 1999 because these changes would have the most impact on improving juvenile fish survival at the second powerhouse. The Regional Forum also decided that the monitoring facility should be completed in the year 2000. According to Corps officials, the Corps constructed a temporary facility in 1997 to evaluate tracking tags placed in the migrating juvenile fish. However, the temporary facility will not provide as comprehensive a sample or evaluation of the juvenile fish as will the permanent facility once it is in operation. Corps officials also noted that while funding limitations may adversely affect individual projects and studies, the region is attempting to direct its limited funds to those projects and studies that have the potential to provide the greatest benefit. A delay caused by the Regional Forum’s decision to wait for additional biological information occurred at the Corps’ Lower Granite Dam on the Snake River. This dam has a juvenile fish bypass system and a juvenile fish holding and loading facility that were included when the dam was completed in 1975. The Biological Opinion stated that the Corps should improve this facility by widening the collection channel, replacing the existing 1,000-foot pipe that connects the collection channel with the downstream holding and loading facility and bypass outfall, improving the system’s capability to separate juvenile fish by size, and updating features at the holding and loading facility. In June 1996, the Corps’ Walla Walla District issued a feature design memorandum on the project that included descriptive criteria for modifying the existing facility. The project’s total cost, including design and construction, was estimated at almost $19 million.
Work was to begin in 1997, and the upgraded facilities were scheduled to be fully operational by March 1999. However, after about $450,000 had been spent on this project, principally to prepare and publish the feature design memorandum, the Regional Forum recommended that no fiscal year 1998 funds be committed to this project and that all work be deferred, possibly until fiscal year 2000. According to the Corps, the decision to defer work was based on the pending 1999 decision on whether to draw down or breach the dams on the lower Snake River. Specifically, the expenditure of up to $19 million on the improvements could be negated if the drawdown option is selected for the Snake River dams. According to Corps biologists, delays in implementing the modifications to the Lower Granite juvenile fish bypass system forestall some interim benefits from new state-of-the-art design features; however, the existing bypass system has a direct mortality rate of less than 1 percent, and improvements over that rate are hard to quantify. An example of a project delay caused by the Regional Forum’s desire to test new technology occurred at The Dalles Dam, located on the lower Columbia River. In appropriation legislation (Public Law 100-371) for fiscal year 1989, the Congress directed the Corps to design, test, and construct a juvenile fish bypass system for improving the efficiency of juvenile fish passage at The Dalles Dam. A juvenile fish bypass system was not installed when The Dalles Dam was completed in 1957. The dam’s turbines, spillway, and ice and trash sluiceway (a waterway used to pass ice and trash around the dam) have been used to bypass juvenile fish around the dam. The lack of an efficient bypass system resulted in significant juvenile fish mortality. Specifically, juvenile fish that go through the turbines experience mortality rates estimated to be as great as 15 percent.
In addition, preliminary results of the Corps’ ongoing spillway survival study indicate that the mortality rate for juvenile fish using the spillway—a rate the Corps had earlier assumed to be approximately 2 percent—may actually be as high as 12 percent. Likewise, observed hydraulic conditions in the ice and trash sluiceway and observed predator densities—such as excessive numbers of squawfish—at the sluiceway outfall have led the Corps to conclude that utilizing the existing ice and trash sluiceway to bypass juvenile fish may be unacceptable. In March 1994, the Corps issued a feature design memorandum providing for the design, construction, and operations and maintenance of a juvenile fish bypass system consisting of an extended-length submerged bar screen at The Dalles Dam. Construction was to have begun in October 1995, and the bypass system was to have been fully operational by March 1998 at a cost of more than $123 million. However, in November 1994, with approximately $20 million already invested, the Corps indefinitely deferred the project. The new bypass system was deferred because of intense congressional and Regional Forum interest in the feasibility and benefits of a new technology—a surface collection bypass system for juvenile fish. In addition, according to the Corps, it was assumed that in the interim, spilling juvenile fish over the dam’s spillway would be a suitable and effective means of fish passage when used in conjunction with the ice and trash sluiceway. The Corps, in response to the Regional Forum, was to start testing this new technology at The Dalles Dam either in conjunction with, or in place of, the bypass system consisting of an extended-length submerged bar screen. However, a lack of funding for studies of the effectiveness of the surface collection bypass prototype has delayed the decision on whether or not to construct the extended-length submerged bar screen system. 
The current plan is for the Corps to test surface collection bypass prototypes at The Dalles Dam in 2001 and 2002. However, the prototype tests have already been delayed 2 years because of the low priority assigned by the Regional Forum for funding the project, and no funds have been allocated for surface collection studies at the dam in 1998. As a result of the decision to indefinitely defer construction of an extended-length submerged bar screen system pending results of the Corps’ evaluation of the effectiveness of a prototype surface collection bypass system at The Dalles Dam, juvenile fish now attempting to pass the dam must still either go through the turbines, go over the spillway, or utilize the existing ice and trash sluiceway. Consequently, juvenile fish migrating down the river are still exposed to some of the same hydraulic conditions, predator densities, and mortality rates that the Corps found to be unacceptable in the mid-1980s. According to Corps officials, interim juvenile bypass measures, such as reducing the volume of water released over the spillway by more than 50 percent so that the mortality rate of juvenile fish going over the spillway may be reduced, are being considered for The Dalles Dam until a new bypass system is installed. There have been ongoing concerns about the effectiveness of the Regional Forum’s process. For example, the fiscal year 1996 Congressional Conference Committee for Energy and Water Resource Appropriations called for an independent evaluation of the management practices of the Corps, Bonneville Power, NMFS, and other federal and sovereign entities and their various programs for restoring salmon runs on the Columbia and Snake River systems in the western United States. The Corps’ Seattle District contracted with Science Applications International Corporation with support from HDR Engineering, Inc., to conduct this study. 
In a June 13, 1997, report, the study found a number of deficiencies with the Regional Forum’s process. First, the study found that the members of the Regional Forum do not share a common vision or goal for salmon recovery efforts. As a result, the actions required by the Biological Opinion are not uniformly supported. For example, through the Biological Opinion, NMFS has directed the implementation of structural and operational actions that may benefit listed salmon without removing dams. These actions are not uniformly supported by Regional Forum members as the most effective means of increasing fish survival. Several members of the Forum, primarily the Native American tribes with some concurrence by the states, support drawdown to the natural river level as the most effective technique for listed species survival and recovery. The report states that differing goals are not conducive to implementing actions, especially when consensus is sought to make decisions. The study recommended that the Forum develop a single strategic recovery plan based on a consensus of its members. Second, the study found that the Regional Forum does not have a clearly defined process for making decisions on the implementation of fish passage projects when consensus is not possible. The report states that the net result is that minority views sometimes prevail and technical and policy decisions are not always made at the appropriate level within the Regional Forum. The study states that decisions should still be made by consensus, but consensus should not be defined as a vote of 100 percent of the participants. The report recommends that consensus be defined as agreement that the parties can “all live with the decision and will not actively work to undermine it.” The study further pointed out that although a new definition of consensus and the development of a common vision through a strategic plan will assist in reaching agreements, they will not always ensure the agreement of all parties.
The study further recommended the establishment of a clear process to resolve disputes. Finally, the study found that setting priorities for projects, studies, and other fish passage activities has been repetitive and often contradictory. Fish mitigation activities, particularly those with multiple-year schedules, are brought before the appropriate Regional Forum subcommittee each year when appropriations are sought. Each time, opponents of a project have an opportunity to delay or cancel it, even if several years’ investment has already occurred. The study recommended that project priorities and funding decisions be made at a specifically designated level in the Regional Forum. Furthermore, the report states that the priorities for projects should not be reset unless new science would substantively alter an approach. The study team believes that these actions would reduce costs because projects that have started would be less likely to be halted or to have to be reinitiated. Responding to the criticisms directed at the overall effectiveness of the Regional Forum by many regional interests, the Governors of Oregon, Washington, Idaho, and Montana called in mid-1997 for the replacement of the federally led Regional Forum with one that would be jointly led by federal agencies, states, and Native American tribes. The proposed new panel has been referred to as the Three Sovereigns Forum. As of February 1998, a draft plan for the establishment of the new Forum was being developed by the three sovereign entities in anticipation of circulating it to the public for review. We found that problems the Corps experienced during attempts to streamline its project management process resulted in delays, cost increases, or both in two projects and one study. For example, when the Corps’ John Day Dam on the lower Columbia River was originally completed in 1971, it did not contain facilities for sampling and monitoring migrating juvenile fish.
A sampling and monitoring facility was added to the dam in 1986. However, the Biological Opinion called for the installation of a new facility to improve the Corps’ ability to monitor juvenile salmon migrating downstream. The Biological Opinion directed that the project be completed no later than 1997. In 1992, an NMFS contractor had completed a report addressing the feasibility and basic design of an updated facility. In August 1994, a Corps architect-engineer contractor began detailed design of the project using the concept presented in the NMFS feasibility report. In October 1994, the Corps, its architect-engineer, and NMFS determined that the design developed in the NMFS feasibility report was not workable because resulting hydraulic conditions could be harmful to juvenile fish. The Corps then directed its contractor to develop alternative designs for a new facility. In September 1995, the contractor completed the feature design memorandum for the alternative chosen by the Corps. The feature design memorandum, which presented a significant redesign of the project, estimated that the new facility would be fully operational by April 1997. However, the Corps encountered additional difficulties during the construction phase of the project. For example, after the construction of the project foundations was under way, the contractor encountered subsurface conditions different from those specified in the contract drawings. The different subsurface conditions resulted in the Corps’ making changes in foundation designs, drilling procedures, and construction materials. The problems the Corps encountered during the design and construction of the new facility contributed to significant cost increases and project delays. The cost of the design contract increased from an initial award amount of about $755,000 to over $2.8 million. Work related to the redesign of the project after October 1994 accounted for about $407,000 of this increase. 
The cost of the construction contract increased from an initial award of about $16 million to a completion cost of over $21 million. The additional work the construction contractor performed because of differing site conditions accounts for the largest portion of the increase—about $3.8 million. This work also delayed the contract completion date by almost 4 months. Reasons for the remaining cost increases include design deficiencies, project features that were changed or added after construction started, and additional services the contractors were required to perform, such as planning and performing on-site facility testing.

In an effort to meet the March 1997 operational date, the Corps completed the design phase for the new facility on an expedited basis. However, according to Corps officials, the Corps’ efforts to accelerate the normal design process contributed to cost increases and delays. For example, the Corps did not perform a formal technical review of the original NMFS feasibility report, as it would under normal procedures. Moreover, the Corps relied on geotechnical data collected in 1983 that did not accurately reflect subsurface structures and soil conditions in the project area. Finally, because the facility was not operational during the 1997 fish migration season, the Corps lost the ability to collect improved data on the juvenile fish migrating that year.

According to Corps officials, the two projects and one study that encountered problems during unsuccessful attempts to streamline standard project management procedures were technically complex actions. They noted that problems can occur when accelerating the design of cutting-edge technology and that the main reason that procedures were bypassed or accelerated was to meet the time frames set forth in the Biological Opinion. The Corps also cited two examples of projects in which accelerating the design process was successful.
Specifically, in these two projects—one involving the installation of flow deflectors at Ice Harbor Dam and the other the design of a surface bypass prototype at Lower Granite Dam—the Corps was able to complete the design phase on an expedited basis, thus saving substantial time. However, both of these projects were subsequently delayed for reasons unrelated to accelerating project design.

Weather played a significant role in delaying and/or increasing the cost of at least three projects and one study. The Corps’ project to install flow deflectors at Ice Harbor Dam illustrates the impact that adverse weather can have on a project. In order to improve juvenile salmon passage, the Biological Opinion required the Corps to spill additional water over its eight dams during the fish migration season rather than passing those flows through turbines. The Corps also spills water on an involuntary basis when flows are high and exceed the powerhouse flow capacity at the dams. However, spilling river flows can cause the water below and downstream of the dams to become supersaturated with gases, such as nitrogen, normally found in the air. High levels of total dissolved gases can damage or kill salmon and are harmful to other aquatic organisms. Therefore, the Biological Opinion stated that the Corps should implement a gas abatement program at its dams. The program was to include structural modifications, such as the installation of flow deflectors at Ice Harbor Dam.

The Corps awarded a construction contract for the Ice Harbor flow deflector project in July 1996 at a cost of over $2.7 million. It provided for the installation of deflectors on the dam’s eight center spill bays by March 1997. On December 30, 1996, the control room operator at Ice Harbor Dam advised the contractor that, because of unusually high river flows, the Corps would begin releasing water over the spillway. Accordingly, the contractor was advised to remove construction equipment from the spill basin.
The Corps began spilling river flows the next day at a rate of about 20,000 cubic feet per second. Discharge over the spillway reached 100,000 cubic feet per second early in the morning of January 1, 1997. On February 6, 1997, after having installed four deflectors, the Corps and the contractor agreed that because of high river flows, the need to continue spilling at the dam, and the upcoming juvenile fish migration season, construction activities would be discontinued until September 1997. From September to November 1997, the contractor completed the remaining four deflectors and removed equipment from the construction site.

However, the delay in project completion of about 7-1/2 months led to a significant cost increase. Specifically, the Corps agreed to pay the construction contractor about $895,000 for costs associated with the delay, including the cost of one additional construction mobilization and demobilization to complete the remaining flow deflectors and standby costs associated with keeping equipment available until construction could resume.

According to Corps officials, they recognized and were concerned about the risks associated with performing this work in such a tight time frame in the winter. They therefore asked the Regional Forum for permission to begin the work in early August. However, the Regional Forum denied this request on the basis of the need to continue spilling during the entire month of August, as provided for in the Biological Opinion.

Because the contractor installed only four of the eight planned flow deflectors before demobilizing because of high river flows, the Corps did not achieve the full reduction in total dissolved gas in time for the 1997 juvenile salmon migration. The Corps projected that the installation of the remaining four deflectors would provide a further reduction in total dissolved gas levels of 3 percent to 5 percent.
However, the Corps did not have sufficiently refined data to determine the survival gain that would result from this increment of total dissolved gas reduction. Even so, the additional reduction was expected to be biologically beneficial.

When fish mitigation projects encounter delays and cost increases, the impacts can be significant. Specifically, the collection of data needed to make future decisions on salmon recovery can be delayed, high fish mortality rates can continue, power generation and related potential revenues can be lost, and dam operations and maintenance costs can increase. In addition, with a fixed annual program budget, when one fish mitigation action incurs a cost increase, the opportunity to use those funds on other projects or studies is lost.

Project delays can result in lost opportunities to collect the biological data needed to make more informed regional decisions on such issues as the most effective ways to bypass juvenile fish. For example, in the 1980s, the Corps installed a juvenile fish bypass system consisting of submerged screens, collection channels, and outfall flumes on Bonneville Dam. Subsequently, numerous Corps and NMFS fish passage studies identified significant problems with the bypass system. Among other things, the studies showed that juvenile fish were using the bypass system less than 50 percent of the time. A goal of the Biological Opinion is to have at least 80 percent of the downriver migrating juvenile fish pass around each dam, including Bonneville Dam, either through a bypass system or over a spillway, and to have at least 95 percent of these bypassed juvenile fish survive.
Recognizing that the existing Bonneville Dam bypass system could not meet this standard, Corps and NMFS fish biologists and engineers determined that the installation of a surface collection bypass system at Bonneville Dam could potentially assist in meeting the juvenile fish guidance efficiency goals specified in the Biological Opinion. In August 1995, the Corps’ prototype development program for surface collection bypass systems specified that installation of the prototypes at Bonneville Dam’s two powerhouses and spillway was to start in 1996. However, the start of the prototype installations at the first and second powerhouses has been delayed until 1998 and 2000, respectively, and the installation of the prototype at the spillway has been deferred indefinitely. According to the Corps, these delays and the deferral occurred for a variety of reasons. Specifically:

Installation of the bypass system prototype at the first powerhouse was delayed because (1) model testing had not been performed to assess the hydraulic conditions within the area, (2) a detailed biological study plan for testing the prototype had not been completed, (3) the potential location of the prototype in relation to the turbines had not been fully modeled, and (4) there was a lack of regional support because hydraulic conditions within the prototype had not been completely modeled.

Installation of the bypass system at the second powerhouse was delayed because the Regional Forum recommended limiting funds at Bonneville Dam in order to implement juvenile fish bypass projects at the Corps’ seven other dams on the lower Columbia and Snake rivers.

After coordinating with the Regional Forum, the Corps deferred indefinitely the bypass system prototype at the Bonneville Dam spillway because the results of recent biological tests suggested that juvenile fish approaching the spillway pass the dam with minimal delay or injury.
Furthermore, according to the Corps, the Regional Forum’s low funding priority for surface collection bypass studies in 1998 has already delayed the completion of surface collection prototype studies at the dam’s first powerhouse until 2001. As a result, a major decision on which bypass concept to pursue at the first powerhouse may be based, in part, on the results of limited studies of surface collection prototypes. According to the Corps, the amount of information available on surface bypass efficiency, balanced against the cost of additional prototypes and the likelihood of success, as well as the improved guidance efficiency obtained from the extended-length screen tests, will be considered before implementation decisions are reached. In the interim, juvenile fish attempting to pass Bonneville Dam must rely on existing juvenile bypass systems that are successful less than 50 percent of the time.

The Corps’ fish passage efficiency studies showed that Ice Harbor Dam’s bypass system, which utilized the dam’s ice and trash sluiceway, provided for the passage of only about 35 to 50 percent of the juvenile fish migrating downriver. In an effort to improve fish passage efficiency, in December 1990 the Corps proposed to construct a high-flow juvenile fish bypass system at Ice Harbor Dam that would include submerged screens to guide juvenile fish away from the dam’s turbines, a fish collection channel, and a transportation channel to pass fish around the dam and release them back into the Snake River. The proposed bypass system was approved by federal and state fish agencies (the Regional Forum did not yet exist), including NMFS, as well as by affected Native American tribes. The system was to be completed by February 1994.

In June 1992, the fish agencies and tribes expressed two major concerns about the approved high-flow system. First, there was a significant area of shallow water—prime predator habitat—downstream from the juvenile fish bypass release site.
Second, the speed of the water in the high-flow bypass flume would not allow for the sampling of all juvenile fish bypassing the dam. As a result of these concerns, the Corps redesigned the bypass system from a high-flow to a low-flow system and extended the length of the bypass flume to the downriver side of the shallow water area. According to Corps officials, the need to redesign the bypass system resulted in a 2-year delay in the planned construction completion date. In addition, according to the Corps, the 2-year delay could have had a significant negative impact on the juvenile fish that attempted to bypass Ice Harbor Dam because they may have gone either through the dam’s turbines or over the dam’s spillway, where they could have experienced mortality rates of 15 percent and 2 percent, respectively. However, another Corps official pointed out that impacts associated with the delay were at least partially offset by the installation of submerged traveling screens in 1993 under a separate contract. In addition, this official said the delay resulted in a better outfall flume in terms of design and discharge location, providing juvenile fish with survival benefits that exceeded the impacts associated with the 2-year delay.

Problems with completing fish mitigation projects can also lead to a loss of potential power generation and the associated potential revenues. Early evaluation of the juvenile fish bypass system at the Corps’ dams, including the McNary Dam on the lower Columbia River, revealed the need for refinements to improve fish guidance efficiency. For example, the McNary Dam studies indicated that the existing 20-foot bar screen guidance system in front of the turbines directed only about 40 percent of the fall chinook salmon away from the dam’s turbines and into the bypass collection channel.
As a result, in March 1994, after years of study and testing, the Corps recommended the installation of new extended-length (40-foot) screens to optimize fish guidance. The Corps planned to install the new screens by December 1996. In addition, the Biological Opinion called for the completion of this project in time for the spring 1997 juvenile chinook salmon migration. In response to the Biological Opinion, the Corps accelerated its design and contracting process to meet the implementation date. In March 1995, the Corps entered into a contract for the construction and installation of 42 extended-length submerged bar screens (one for each of the three gatewells over each of the dam’s 14 turbines); all screens were to be in place and fully operational by December 27, 1996.

However, shortly after the installation of the first batch of new screens, dam operations personnel found frequent problems with the brush arm control—the device used to control the extent of movement by the brush arm as it removes debris from the screen. Fixing the problem required the operators to take the turbine off line and raise the screen in the gatewell to reset the control limit switch—a half-day operation. In response to the problems and increased maintenance costs, the installation of the remaining screens was delayed until the design problem was fixed. In May 1996, a new design utilizing different technology was adopted for controlling the sweep arm. Project personnel replaced the original control devices, began installing the remaining 30 screens, and completed the installation of the screens in March 1997—3 months later than originally planned.

According to Corps officials, problems with the sweep control device were experienced during prototype testing, and a new, untested design was proposed for the contract.
However, the pressure to meet the Biological Opinion’s completion date required expedited contracting procedures to finalize design drawings for the contract solicitation package, which left no time for additional testing.

A major impact stemming from the failure of the sweep control device was the loss of power generating capacity during the spring 1996 salmon migration season. Project personnel reported that there were 2,422 hours of forced turbine outage at McNary in 1996 directly attributable to problems with the sweep control devices. At the Bonneville Power Administration’s estimated revenue of $2,000 per generating hour, the outage equates to about $5 million in potential lost power revenue in 1996. A Corps official noted, however, that this amount of potential lost revenue would be realized only if the powerhouse was operating at capacity—which seldom occurs. As such, the official believed the potential lost revenue was likely much less than $5 million.

The inability to complete fish mitigation projects can also result in an increase in dams’ operations and maintenance costs. For example, in 1995, the Corps awarded a contract for the construction and installation of extended-length submerged bar screens at Little Goose Dam, located on the lower Snake River. As was the case at McNary Dam, the Corps encountered numerous problems with the new screens, and completion of the project was delayed about 11 months. One of the major problems with the Little Goose extended-length screens was that steel plates, perforated with holes to ensure uniform water flow through each screen, failed because of broken high-tension bolts. The broken bolts, which allowed perforated plates to fall off some of the screens, forced the Corps to remove each of the 18 screens from the river for repair. Consequently, the Corps’ operations and maintenance costs increased by about $24,000.
In addition, according to Bonneville Power, hydroelectric power production at Little Goose Dam was reduced because the turbines behind the removed screens had to be taken out of operation until the screens were repaired and replaced. This resulted in lost power revenues of about $745,000 to Bonneville Power. The extended-length screen bolt problem is being investigated by the Corps, and the results of the analysis should be available by December 1998. In the interim, the Corps is monitoring the screens and periodically removing them from the river to ensure that the perforated plates remain in place and to replace bolts that break. This monitoring effort, however, continues to reduce hydroelectric power production and power revenues at the dam and increases the Corps’ operations and maintenance costs.

Of the 19 fish mitigation actions we reviewed, 9 had cost increases that totaled over $20 million. Since the Corps’ fish mitigation program receives an annual appropriation, when one fish mitigation action incurs a cost increase, the opportunity to use those funds on other projects may be lost. In addition, the Corps may have to revise the scope or implementation schedules for certain projects or studies. For example, the Biological Opinion requires the Corps to conduct a feasibility study of ways to improve the migration of juvenile salmon through its lower Snake River dams. The study focuses on three alternatives: existing condition, drawdown of the dams, and system improvements that could be accomplished without a drawdown. Because of changes in the scope of this study, primarily expanding the analysis of the social and economic impacts of the alternatives being considered, the Corps incurred a cost increase of about $4 million. As a result, the Corps reduced the scope of other study components, such as water quality analyses.
Moreover, since the overall study will now consume a larger portion of the total funding available to the fish mitigation program, the Corps, in conjunction with the Regional Forum, made adjustments in the funding of other, lower priority fish mitigation actions. For example, funding for the Corps’ study of potential improvements to auxiliary water supply systems for adult fish ladders at Snake River dams was reduced.

While the majority of the Corps’ fish mitigation actions have been or are expected to be completed on schedule and within budget, the Corps has encountered difficulties implementing many of its fish mitigation projects. Projects have encountered delays and cost increases because of adverse weather conditions, such as high river flows and flooding. Furthermore, the Corps’ agreement to work cooperatively with regional interests through the Regional Forum has, on occasion, subjected it to changing fish mitigation priorities, including which projects or studies are to be funded, when they are to be funded, and at what funding level. However, the effectiveness of the Regional Forum has been questioned because, among other things, members do not agree on how to pursue salmon recovery efforts and do not uniformly support the actions required by the Biological Opinion. Differing goals are not conducive to implementing fish mitigation actions, especially when consensus is sought to make decisions.

In addition, some delays and cost increases have been caused by the Corps’ unsuccessful attempts to streamline its project management process in order to meet deadlines imposed by the Biological Opinion. In these cases, there appears to be a trade-off. According to the Corps, by accelerating the design phase of some projects, it completed this phase expeditiously.
However, efforts to streamline the management of other projects cost the Corps both time and money and negatively affected the Corps’ ability to safely bypass juvenile fish around its eight dams on the lower Columbia and Snake rivers.
Pursuant to a congressional request, GAO reviewed the: (1) Army Corps of Engineers' decisionmaking process for identifying, setting priorities for, and funding actions to help the recovery of salmon runs in the Columbia River Basin; and (2) difficulties in implementing these actions. GAO noted that: (1) since 1995, the Corps' efforts to mitigate the decline of salmon stocks on the lower Columbia and Snake rivers have been guided by the National Marine Fisheries Service's 1995 Biological Opinion; (2) many of the monitoring, evaluation, research, design, and construction projects identified in the Biological Opinion are included in the Corps' Columbia River Fish Mitigation program; (3) the Corps' decisionmaking process for selecting, setting priorities for, and funding specific projects and studies in its fish mitigation program is a cooperative effort between the Corps and regional interests and is known as the Regional Forum process; (4) the Regional Forum is a group with a broad regional representation, including federal agencies, states, and Native American Tribes located in the Columbia River Basin; (5) the Forum, which includes the Corps, tries to reach consensus among its members in making decisions about fish mitigation actions; (6) if consensus cannot be reached, the Corps is the decisionmaker on actions that affect its eight dams; (7) annually, the Corps, with input from the Regional Forum, estimates the costs of its fish mitigation actions and requests funding as a part of its normal budget process; (8) if Congress appropriates less funding than the Corps requests, the Corps seeks recommendations from the Regional Forum to help it decide on which actions should be funded; (9) the majority of Corps fish mitigation actions are being completed on time and within budget; however the Corps identified 19 actions that were delayed, experienced cost increases, or both; (10) in at least four projects and three studies, delays and cost increases were the result of 
decisions by the Regional Forum that changed fish mitigation priorities; (11) these changes were often necessitated by such factors as limited funding, the need for additional biological data, or the desire to test new technology; (12) in three projects, difficulties, including problems with engineering designs, were the result of the Corps' bypassing standard procedures for managing the project in an effort to implement required actions in the timeframes established by the Biological Opinion; (13) the problems the Corps has experienced in implementing its fish mitigation actions have had significant impacts; and (14) there are ongoing concerns about the overall effectiveness of the Regional Forum because, among other things, its members do not agree on how to pursue salmon recovery efforts.
Nonagricultural pesticides encompass a wide range of products—including home and garden insecticides and fungicides, sterilants, insect repellents, and household cleaning agents—and the potential for exposure is significant. The effects of exposure on humans depend on the characteristics of the pesticide, the dosage, the duration of the exposure (usually through inhalation, skin contact, or ingestion), and the physiological reaction of the person affected. Some people suffer no effects; others experience symptoms ranging from relatively mild headaches, skin rashes, eye irritation, and general flu-like symptoms to more serious chemical burns, paralysis, and even death. Chronic and delayed-onset illnesses such as cancer may appear only years after repeated exposure to small doses of a pesticide.

Under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), EPA is responsible for ensuring that pesticides, when properly used, do not have any unreasonable adverse effects on the environment (any unreasonable risk to man or the environment, taking into account the economic, social, and environmental costs and benefits of the use of any pesticide). The act authorizes EPA to register pesticide products, specify the terms and conditions of their use before they are marketed, and remove unreasonably hazardous pesticides from the marketplace. Thus, registrations are basically licenses for specified uses of pesticide products. The act also requires that EPA reassess and reregister thousands of older pesticide products on the basis of current scientific standards. The process requires the pesticides’ registrants to complete studies of various health and environmental effects, which are then reviewed by EPA to determine whether the products can be reregistered and thus remain on the market.
Section 6(a)(2) of FIFRA also requires that registrants of pesticides report to EPA any additional factual information that they may obtain about unreasonable adverse effects that their registered pesticides have on the environment. According to EPA, the additional information on adverse effects that registrants must report includes toxicology studies, human epidemiological and exposure studies, and efficacy studies, as well as incidents of pesticide exposure. In addition, the act requires that EPA monitor, among other things, the extent to which humans, animals, and the environment are incidentally exposed to pesticides, trends over time, and the sources of contamination. According to EPA, the data on incidents of pesticide exposure often augment the extensive studies performed by registrants as part of reregistration. This review focused on the data on incidents of exposure reported to EPA.

When EPA identifies risks during its review of data on incidents, the agency may initiate one or more actions. These actions include restricting pesticide uses by placing specific instructions for use on the product’s label (for example, requiring protective equipment), canceling specific uses of the pesticide, and/or canceling the pesticide’s registration, thus removing the pesticide from the marketplace.

From 1978 through 1981, EPA coordinated and collected information on incidents of pesticide exposure through its Pesticide Incident Monitoring System. The system’s reports originated from registrants and from sources such as state and local agencies, poison control centers, health clinics, and hospitals that provided this information voluntarily. After this system was eliminated because of funding cuts, EPA continued to receive reports of incidents involving pesticides from registrants and from the voluntary sources.
However, the agency did not have an automated system for monitoring data on such incidents until 1992, when it developed the Incident Data System to organize and track data originating from both pesticide registrants and the voluntary sources. This system stores data on incidents involving humans, domestic animals, wildlife (fish, birds, and mammals), and groundwater and surface water. Although most—about 87 percent, according to an Office of Pesticide Programs official—of the reports on incidents in EPA’s system come from registrants, EPA also receives supplementary data from voluntary sources.

FIFRA does not require states or sources other than registrants to collect or submit data on exposures. However, some states have established mandatory reporting regulations specifically for pesticide-related illnesses. EPA currently receives data on incidents routinely from five of these states—either directly or indirectly. California and Washington voluntarily send annual summary reports to EPA directly, while the agency receives quarterly reports on incidents in New York, Oregon, and Texas from the National Institute for Occupational Safety and Health, which collects data from these states. According to an EPA health statistician, other states may report some data on incidents to EPA, but not routinely.

Written reports on incidents are forwarded to a single location at EPA headquarters, where they are cataloged and screened to determine whether they warrant detailed attention and/or consideration in registration or reregistration reviews. Aggregate reports are periodically generated from the data entered into the computerized system to determine if patterns are emerging that could cause concern. However, EPA has a backlog of data to be entered into the Incident Data System, thus limiting the effective use of the data it receives.
Although the agency currently has a number of people involved in collecting and analyzing data on pesticide incidents, only a portion of each individual’s work time is spent dealing with incidents, and no one has been assigned full-time to data collection efforts such as entering data into the system. Since the system became operational in June 1992, EPA has received about 12,575 reports. While about 8,125 of the reports had been entered into the system as of April 1995, information on about 3,250 incidents had not yet been entered because of limited staff resources. Another estimated 1,200 reports, which the registrants say contain confidential information, will not be entered into the system until the agency determines the validity of these claims.

According to EPA staff, data on incidents of exposure played a significant part in 19 instances in which the agency took measures to protect the public health between 1989 and 1994. For example, after analyzing data from emergency rooms, hospitals, and poison control centers, the agency determined that most uses of arsenical (arsenic-based) ant baits in homes could no longer be allowed because of the potential high risk to children. In another instance, after reviewing cases involving two individuals who died when they entered structures treated with methyl bromide, EPA required that the product’s label be revised to extend the period before people are allowed to reenter a treated area. In a third case, EPA determined that many reports of adverse reactions to pet care products likely resulted from misuse of the product or accidental exposure. Specifically, it appeared that some animals and humans had reacted adversely as a result of overdoses, repeated applications at too-frequent intervals, or simultaneous applications of multiple pesticide products to pets and their environment. In several incidents, cats were injured by pet care products intended for dogs only.
In this case, the aggregate number of incidents and other data in the Incident Data System on all pet care products led EPA to draft a proposed Pesticide Regulation Notice. EPA intends for the proposed notice to provide registrants of pesticide products with instructions on how a product’s label should be changed to reflect the proper intervals for repeated use of the product and to restrict the use of the product to the animals for which it was specifically intended. At the time of our review, the proposed notice had not been finalized. (App. I lists other examples of actions that EPA has taken using data on incidents involving nonagricultural pesticides.)

Although EPA has been able to take some actions using data on incidents of exposure, the data the agency receives may not always be sufficient, and its ability to assess risk and take action based on such data may be limited. The reports on incidents that EPA receives from registrants, as well as some of the voluntary reports, such as those received from states, often vary in detail and lack key information needed to assess risk. For example, the reports frequently lack information on what pesticide caused the incident, how the exposure occurred, and what symptoms the victim suffered. EPA believes this type of information is essential in assessing risks and thus in determining whether the label on a product should be changed or its use restricted or canceled. Also, EPA cannot be sure that the reports it receives from registrants and voluntary sources are representative of the incidents of exposure occurring nationwide. In addition, according to experts involved in these issues, underreporting of such incidents is widespread because, among other things, health care professionals may not always be adequately trained to recognize pesticide poisoning.
Although pesticide registrants are required to report to EPA any additional factual information on the unreasonable adverse effects of their registered pesticides, their incident reports vary in detail. Section 6(a)(2) of FIFRA, which requires the registrants to report to EPA, does not require specific information, and EPA does not require standardized formats. An official in EPA’s Office of Pesticide Programs said that registrants interpret FIFRA’s reporting requirements in a variety of ways. Also, some registrants report frequently, while others do not. In reviewing recent reports received by EPA, we found that some registrants do not always include important information, such as whether the product was misused or how frequently the victim was exposed to a pesticide. For example, one registrant submitted several reports that identified the pesticide involved and described the symptoms suffered but did not mention whether the product was used according to the label’s instructions or whether the victim was exposed to the pesticide once or repeatedly. EPA believes some reports may lack important data simply because the data were unavailable to the registrants, while other reports may exclude data because of the registrants’ interpretations of the reporting requirements. The data that the states provide to EPA voluntarily also frequently lack important information, such as whether the product was misused, whether the victim was repeatedly exposed to the pesticide, what symptoms the victim suffered, how the exposure occurred, and—in some cases—what pesticide caused the incident. Information on laboratory tests, which would help confirm the exposure and health effects, is seldom present. In reviewing some of the data received by EPA, we found that although two states, in their 1994 quarterly reports, summarized the number of pesticide-related incidents, they did not provide detailed information about the exposures. 
One state reported 11 occupational (work-related) pesticide poisonings for the quarter, of which 3 were confirmed (that is, cause and effect had been determined), but did not disclose the names of the pesticides involved or other details of the exposures. Another state’s quarterly report summarized several incidents of occupational pesticide poisonings in that state but revealed the name of only one pesticide. The report indicated that state agencies were further investigating some incidents to determine what action should be taken. Although EPA believes that any information about pesticide exposures can be useful, without the significant details of an incident of exposure EPA is unable to identify trends or patterns among pesticides that cause problems, assess their potential risks, or take corrective action. When the information EPA receives from registrants and from voluntary sources such as states lacks much of the data needed for assessing risk, it is of limited use. In this connection, officials in the Office of Pesticide Programs emphasized that FIFRA does not mandate that the states have mechanisms for collecting data on incidents and does not require states to report incidents to EPA. The officials also said that although EPA receives some data from states, the agency does not depend on the states for reports of incidents. Reports on incidents of exposure that EPA receives from registrants and from voluntary sources may not be representative of incidents occurring nationwide. For example, the nation’s poison control centers typically receive far more reports of exposure than EPA does. These centers recorded over 150,000 incidents of humans being exposed to pesticides in 1992-93. In contrast, about 12,575 incidents of humans and animals being exposed to pesticides have been reported to EPA since 1992. 
EPA has sometimes used data from a data base maintained by the American Association of Poison Control Centers, but the agency has generally not had funds to routinely pay the fees for such data. The association’s data base contains considerable amounts of data on individual exposures, including the type of substance or product, age of the patient, means of exposure, symptoms, and type of treatment—if any—and the medical outcome. While the association publishes summary data annually in the September issue of the American Journal of Emergency Medicine, it charges a fee for detailed data. For example, exposure data on a single poison for 1990-93 would cost $4,400. Abstracts of individual case records, when available, are priced at $150. As an alternative to purchasing these data directly, however, EPA can require registrants to purchase the data when the agency determines that a pesticide poses a high risk to public health. In 1993, for example, EPA’s Acute Worker Risk Strategy Work Group identified 28 chemicals as acutely toxic to agricultural workers—based on data from California, data on toxicity, and data on usage. In this case, EPA issued a data call-in notice requiring the pesticides’ registrants to submit data from the American Association of Poison Control Centers. Using data from California and from the poison control centers, EPA’s worker risk group has proposed measures to reduce risk for aldicarb, azinphos-methyl, carbofuran, methamidophos, and methomyl pesticides. FIFRA does not give EPA authority to require individuals, states, or organizations—other than pesticide registrants—to report exposure to or incidents involving pesticides to EPA. The voluntary nature of the data collection system is a major contributor to underreporting of incidents. However, underreporting also results from a lack of training within the medical community in recognizing pesticide poisonings and lack of familiarity with state reporting requirements. 
In our 1993 report on agricultural pesticides, we reported that state officials cited underreporting as a serious problem because, among other reasons, health care professionals lacked adequate training in recognizing and diagnosing pesticide-related illnesses and were unfamiliar with state reporting requirements and/or unwilling to report cases to state officials. State and federal officials indicated that even when reports were made, it was frequently difficult to verify incidents and determine their cause because of delays in reporting and a lack of information about the circumstances of these illnesses. While these reasons were cited for agricultural pesticides and farm workers, the same appears to be true for nonagricultural pesticides and consumers. For example, an EPA health statistician told us that he believed the medical community’s incomplete understanding or recognition of pesticide poisonings was one reason why the data that EPA collected on incidents were not sufficient to help the agency take necessary action. With respect to health care professionals’ familiarity with state reporting requirements, a toxicologist at the University of California at Berkeley reported that physicians in California—the state with the most comprehensive registry of pesticide-related illnesses in the nation—are often not aware that such illnesses must be reported to the appropriate local health officers. According to the report he coauthored, Preventing Pesticide-related Illness in California Agriculture, one-quarter of physicians surveyed in rural California did not know that suspected and confirmed pesticide-related illnesses must be reported to county health officers. EPA has recognized that its approach to data collection needs improvement, and in September 1994, its Office of Pesticide Programs established a work group to focus on potential improvements. 
This work group was established to develop a long-term plan for collecting, storing, manipulating, and using data on incidents. EPA recently completed the first phase of this effort, in which the work group identified (1) critical and desirable data elements, (2) the use and potential use of the data collected, (3) current and potential sources of data, and (4) gaps between the data EPA needs and the data it already has. A second phase—to identify potential improvements in data collection and analysis—will include identifying (1) how much different system configurations would cost, (2) who should have access to these systems, (3) whether one or more data collection systems are needed, (4) how the agency should be structured internally for the data collection system, and (5) who should operate the system. Further efforts by the work group will include exploring the potential for more routinely requiring registrants to purchase data from the poison control centers as part of specific projects. A December 1994 report by the work group indicated that additional phases may also be undertaken. Although the work group coordinator said the group plans to establish deadlines for the second phase, as of May 1995 EPA did not have a formal plan with milestones for completing any of the phases of this group’s work or for implementing any improvements the work group identified. EPA has also proposed a new rule, which it calls the 6(a)(2) rule, aimed at improving the quality of the data on incidents the agency receives from pesticide registrants and making the processing of this information easier for the registrants and the agency. Although registrants are required under FIFRA to submit any factual data on adverse effects they may have, EPA is concerned that incidents may be underreported by the industry as a whole. The currently available guidance on reporting incidents, developed in the 1970s, is not very detailed. 
Under the proposed rule, registrants would be given specific regulatory requirements on what data they must report to EPA on incidents of exposure, when such data are available. For example, the specific information requested in the proposed rule includes the name of the company submitting the information to EPA, the EPA registration (or identification) number of the pesticide involved, and a detailed summary including specific information about the incident being reported. EPA believes its new rule will clarify the registrants’ responsibilities and should result in significantly greater numbers of reports on incidents. EPA expects the new rule to be finalized in 1995. In addition, officials from the Office of Pesticide Programs said that the office is considering a major reorganization as part of an effort to streamline operations and that options for managing information on incidents will be considered as part of this effort. Furthermore, EPA staff have been working with four companies that submit large numbers of reports on incidents of exposure to determine the feasibility of electronic submission of reports. Officials in the Office of Pesticide Programs believe that if the registrants put the data in a format compatible with the data in the agency’s Incident Data System, staff will be able to enter these data directly into the system. The officials also said that they plan to ask these companies to consider electronically resubmitting reports they had previously submitted on paper. Removing the need to manually key these data into the system could eliminate much of the backlog. EPA believes this effort is a cost-effective method of improving its handling of incidents of exposure. 
While EPA has a system for collecting, reviewing, and acting on incidents of exposure to pesticides and has taken action on some data on incidents, the system does not currently ensure that EPA always has sufficient information to determine whether action to protect public health is necessary. Although EPA has been able to take some actions using its data on incidents, the agency may not be appropriately responding to all cases of adverse health effects caused by pesticide use. Better, more complete data on incidents involving pesticides would help EPA determine whether additional actions are necessary to protect public health. EPA has already begun to take some steps to improve its collection and analysis of data, and its work group is continuing to identify additional areas for improvements. We support the agency’s efforts because they should lead to better management of data on incidents. Similarly, EPA’s proposed 6(a)(2) rule should lead to an improvement in the quality of data submitted by registrants. We requested comments on a draft of this report from EPA. On June 12, 1995, we met with a section head, Policy and Special Projects Staff, Office of Pesticide Programs, to obtain the agency’s comments on the draft report. During this meeting, we were provided with comments from the Director, Office of Pesticide Programs. EPA believes our report accurately explains that EPA regards data on incidents of exposure as an important supplement to laboratory studies and is seeking ways to improve the quality and quantity of the data submitted to the agency, as well as better ways of managing and using the data in making regulatory decisions. EPA believes the draft report did not clearly state the importance of its proposed 6(a)(2) rule, which is to accomplish two significant objectives. First, the rule will explain to registrants exactly what facts EPA wants them to report. 
Second, the rule is intended to solve the perceived problem of underreporting by registrants due to lack of clear guidance in the form of an enforceable regulation. The agency pointed out that the proposed rule does not place new or additional requirements on registrants but only clarifies what is already required under FIFRA. We agree that the rule is important for improving the quality of data on incidents. EPA was also concerned that, in a period of serious resource constraints, it would be very difficult to make all the improvements to its collection of data on incidents that would be desirable. As noted in our report, acquiring adequately detailed information from nonregistrant sources can cost substantial amounts of money. EPA believes that managing increased numbers of reports will require the investment of scarce funds and personnel in data management systems. In its comments, EPA said that although electronic data submission and other reporting innovations may help to achieve economies, some improvements may not be possible at all if resources are cut significantly in the future. EPA also provided some technical comments, and we have made changes in appropriate sections of our report to accommodate these comments. Our objectives were to determine whether EPA collects data on incidents of exposure to pesticides and takes action based on these data, and whether such data are sufficient to allow the agency to determine if unacceptable risks to public health are occurring. To accomplish these objectives, we interviewed officials from EPA’s Office of Pesticide Programs, including the Chief, Special Projects and Coordination; the Incident Data Officer for Humans and Domestic Animals; the Coordinator, Ecological Incident Monitoring; the Chief, Certification and Training Branch; and the Section Head of Special Review and Groundwater. We also reviewed documents and records from EPA’s Incident Data System. 
To obtain views on incidents of pesticide exposure from others outside of EPA, we discussed the adverse health effects of nonagricultural pesticides with representatives of industry and of environmental and other nonprofit organizations. In addition, we visited California, Florida, and Oregon, and collected and reviewed these states’ data on incidents of exposure. We selected these states because they collect data on such incidents and because two of these states—California and Florida—have climates in which a greater use of nonagricultural pesticides is likely to be required. We conducted our review between March 1994 and May 1995 in accordance with generally accepted government auditing standards. As arranged with your office, we plan no further distribution of this report until 10 days after the date of this letter unless you publicly announce its contents earlier. We will then send copies to the Administrator of EPA. We will also make copies available to others on request. Please call me at (202) 512-4907 if you or your staff have any questions. Major contributors to this report are listed in appendix II. While EPA does not routinely receive complete data on incidents involving nonagricultural pesticides, it sometimes receives information on specific cases that is detailed enough to assist it in taking actions to protect public health. Table I.1 lists examples of EPA’s use of such data to take actions between 1989 and 1994. Each entry pairs the data collected, used, and/or analyzed by EPA with the action EPA took.

Data: EPA reviewed data from hospitals’ emergency rooms, newspaper clippings generated by manufacturers, and field information from state agencies to identify the types and severity of poisonings that could result from the use of chlorine in swimming pools.
Action: EPA restricted the use of chlorine in swimming pools.

Data: Through an increase in the number of incidents reported by the National Pesticide Telecommunications Network, EPA identified a public perception of risk from lawn care pesticides.
Action: EPA developed guidance for the states on how to establish posting and notification programs for lawn care products.

Data: Through its Incident Data System, EPA identified a large number of pets being adversely affected by consumers’ misuse of pet care products. The data also revealed that human health was being adversely affected.
Action: EPA has completed a Pesticide Registration Notice instructing registrants to clarify warnings and instructions on the products’ labels to prevent misuse by consumers.

Data: Using information collected from EPA’s regional offices and from state agencies, EPA found cases in which certain insect repellents were causing adverse reactions.
Action: EPA distributed a physician’s advisory through the Centers for Disease Control and poison centers as well as a consumer brochure on proper use.

Data: On the basis of (1) reports on a child with acrodynia, (2) over 40 publications on the relationship between that disease and mercury, and (3) levels of mercury that the Centers for Disease Control found in household air and occupants’ urine in Detroit homes, EPA assessed the risk of acrodynia resulting from the use of mercury in household paint.
Action: EPA canceled all uses of mercury in household paints.

Data: EPA used data from hospitals’ emergency rooms, hospitals, a poison control center, and the state of Texas to determine that a sodium arsenate product had a small margin of safety for young children.
Action: EPA canceled most uses of sodium arsenate in household ant bait.

Data: A parent informed EPA of an incident involving a child who overcame a child-resistant package containing 2 percent disulfoton powder (a pesticide used on ornamental plants and house plants).
Action: EPA required the manufacturer to retest the product’s child-resistant packaging for efficacy.

Data: EPA learned of an investigation of two cases (one in California and one in Iowa) in which two people died after reentering structures treated with methyl bromide.
Action: EPA required revisions to the pesticide’s label requiring longer ventilation periods before people reentered treated structures.

Data: Data reviewed by a poison control center permitted EPA to determine how much boric acid powder or how many tablets resulted in poisonings of children.
Action: EPA required revisions to the product’s label to restrict the number of tablets used in one application of the product.

Note: Mercury was added to paints to preserve the paint in the can by controlling the growth of microbes, principally bacteria, and to preserve the paint from mildew attack after it was applied to an exterior surface.

Appendix II: Major Contributors to This Report

Lawrence J. Dyckman, Associate Director
J. Kevin Donohue, Assistant Director
Raymond M. Ridgeway, Evaluator-In-Charge
Jennifer W. Clayborne, Evaluator
Phyllis Turner, Communications Analyst
Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) monitoring of human exposures to pesticides, focusing on whether EPA: (1) collects data on exposure arising from the use of nonagricultural pesticides; (2) takes action in response to potential health risks from such exposure; and (3) receives sufficient information to assess whether unacceptable risks are occurring. GAO found that: (1) EPA has collected pesticide exposure data from pesticide registrants and public and private entities since the 1970s and, in 1992, it implemented a computerized system to organize and track such data; (2) EPA has not assigned full-time staff to data collection and processing; therefore, the system has a data entry backlog, which limits its effectiveness; (3) EPA acted in 19 instances between 1989 and 1994 to protect the public from pesticide risks; (4) EPA often cannot assess whether a pesticide poses an unacceptable health risk, since incident reports frequently lack key data, may not be representative, or are not submitted; (5) an EPA work group is developing a long-term plan to collect and manage exposure data, but it has yet to develop a plan for putting the most cost-effective improvements into effect; (6) to improve the number and quality of exposure reports, EPA has proposed a rule that requires pesticide registrants to submit more detailed data on exposure incidents and clarifies the registrants' responsibilities; (7) EPA is determining the feasibility of having registrants who submit large numbers of reports to submit them electronically; and (8) the current exposure monitoring system includes data on both agricultural and nonagricultural pesticides, since EPA collects and processes the same information for those chemicals.
There are three types of dialysis, which is a process that removes excess fluids and toxins from the bloodstream: (1) hemodialysis performed in a facility (referred to as in-center hemodialysis in this report); (2) hemodialysis performed at home; and (3) peritoneal dialysis, which is generally performed at home. In-center hemodialysis is the most common type of dialysis and was used by about 89 percent of dialysis patients in 2012; the remaining patients received either peritoneal dialysis (9 percent) or home hemodialysis (2 percent). Similarly, almost all dialysis facilities (approximately 96 percent) had in-center hemodialysis patients in 2012; just over two-fifths of facilities had peritoneal dialysis patients, and nearly one-fifth had home hemodialysis patients. The processes for hemodialysis—performed either in a facility or at home—and peritoneal dialysis differ. (See fig. 1.) For in-center hemodialysis treatments, blood flows from the patient’s body through a surgically created vein or a catheter, known as a vascular access site, and through tubing to the dialysis machine. The machine pumps the blood through an artificial kidney, called a dialyzer, to cleanse the excess fluids and toxins from the bloodstream and then returns the cleansed blood to the body. Patients typically receive in-center hemodialysis for 3 to 4 hours three times per week. For home hemodialysis treatments, the process is the same, but the patient performs the treatments and may perform treatments more frequently and at night. For peritoneal dialysis treatments, a catheter is used to fill the patient’s abdomen with a dialysis solution that collects excess fluids and toxins over several hours; those excess fluids and toxins are removed from the body when the patient drains the dialysis solution from the abdomen. 
To conduct the exchanges—draining and then refilling the abdomen with the dialysis solution—most peritoneal dialysis patients use a machine that performs several exchanges during the night while they are asleep, while other patients do manual exchanges during the day. The three types of dialysis are also associated with various clinical advantages and disadvantages. For example, some studies have suggested that more frequent use of home hemodialysis can achieve better health outcomes for certain patients such as those with hypertension. In another example, some studies have suggested that peritoneal dialysis may have a lower risk for death in the first few years of dialysis therapy, and peritoneal dialysis can also help patients retain residual kidney function. However, the causes of some differences in clinical outcomes between the types of dialysis can be challenging to determine because of differences in patient characteristics; younger patients, for example, were more likely to receive peritoneal dialysis than other types of dialysis, according to USRDS data. In addition, there may also be clinical disadvantages. For example, home hemodialysis patients’ more frequent use of the vascular access site may result in a higher risk for complications such as damage to the site that requires repair. Additionally, peritoneal dialysis patients may develop peritonitis, an infection of the peritoneal membrane, and the peritoneal membrane may become less effective over time, meaning a patient may eventually have to switch to either home or in-center hemodialysis. Patients’ preferences may influence whether patients receive home dialysis (either peritoneal dialysis or home hemodialysis) or in-center hemodialysis. For example, some patients may prefer home dialysis because they do not need to travel to the facility three times per week, giving them greater flexibility to work during the day and undergo dialysis at night in their home. 
Some patients also may prefer home dialysis because there may be fewer diet and fluid restrictions and less recovery time following each dialysis treatment. On the other hand, successfully performing home dialysis requires patients to undergo training and assume other responsibilities that they would not otherwise have if they dialyzed in a facility. As a result, patients who feel unprepared to accept such responsibilities or who lack a spouse or caregiver to help them may be less likely to choose home dialysis. For similar reasons, some experts and stakeholders have indicated that switching from in-center to home dialysis can be challenging once patients become accustomed to in-center hemodialysis. Furthermore, the greater complexity of home hemodialysis training—including learning to place needles in the vascular access site and how to respond to alarms from the dialysis machine—relative to peritoneal dialysis training could lead some patients to prefer one type of home dialysis over the other. In addition to patients’ preferences, clinical factors may affect whether patients receive home dialysis or in-center hemodialysis. One factor is whether a patient has received care from a nephrologist prior to beginning dialysis. Patients who did not receive such care and who have an urgent need to start dialysis often do so with in-center hemodialysis because training is not required and because a venous catheter can be placed and used immediately. More lead time can be required for peritoneal dialysis to allow the site where the peritoneal dialysis catheter was placed to heal. As another example, a patient with poor vision or dexterity may have difficulty performing the tasks associated with home dialysis. In addition, a patient who has received multiple abdominal surgeries may not be an appropriate candidate for peritoneal dialysis. 
Finally, patients with multiple comorbidities (i.e., multiple chronic diseases or disorders) may choose in-center hemodialysis because it can allow the nephrologist to more closely manage those other conditions. Medicare uses different methods to pay (1) dialysis facilities for providing dialysis treatments to patients and for training them to perform home dialysis and (2) physicians for managing patients’ dialysis care and educating them about their condition. For dialysis treatments—including any training that occurs in the first 4 months of treatment—Medicare has paid facilities a single bundled payment per treatment since 2011. The bundled payment is designed to cover the average costs incurred by an efficient facility to provide the dialysis, injectable drugs, laboratory tests, and other ESRD-related items and services. In 2015, Medicare paid a base rate of $239.43 per treatment for up to three hemodialysis treatments per week, and Medicare sets the daily rate for peritoneal dialysis such that payments for 7 days of peritoneal dialysis would equal the sum of payments for three hemodialysis treatments. Medicare adjusts the base rate to account for certain factors that affect the cost of a treatment, including costs to stabilize patients and to provide training during the first 4 months of dialysis treatments, as well as certain other patient and facility factors. CMS implemented its Quality Incentive Program beginning in 2012, which can reduce Medicare payments for dialysis treatments to facilities by up to 2 percent based on the quality of care they provided. When training occurs after the first 4 months of the patient’s dialysis treatments, Medicare pays dialysis facilities the bundled payment plus an additional fixed amount (often referred to as the training add-on). The training add-on is for the facilities’ additional staff time to train the patient. 
This training, which can happen in an individual or group setting, is required to be furnished by a registered nurse. The number of treatments that include home dialysis training—called training treatments—varies by type of dialysis and by patient. Medicare currently pays facilities a training add-on amount of $50.16 per treatment for up to 25 home hemodialysis training treatments or a daily equivalent rate for up to 15 days of peritoneal dialysis training; CMS increased the training add-on payment from $33.44 to $50.16 in 2014. Medicare pays physicians (typically nephrologists) a monthly amount per patient to manage patients’ dialysis care. This monthly amount covers dialysis-related management services such as establishing the frequency of and reviewing dialysis sessions, interpretation of tests, and visits with patients. To receive the payment, Medicare requires the physician to provide at least one face-to-face visit per month to each patient for examining the patient’s vascular access site. The monthly amount paid to the physician for managing in-center patients varies on the basis of the patient’s age and the number of visits provided to the patient, but the amount for managing the care of a home patient varies only on the basis of the patient’s age and not the number of visits. Besides the monthly payment for patients’ dialysis care, Medicare provides a one-time payment to physicians of up to $500 for each patient who completes home dialysis training under the physician’s supervision; this payment is separate from Medicare’s payments to facilities for training patients. Medicare also pays physicians to provide kidney disease education to patients who have not yet started dialysis. Congress established the Kidney Disease Education (KDE) benefit as part of the Medicare Improvements for Patients and Providers Act of 2008 to provide predialysis education to Medicare patients with Stage IV chronic kidney disease. 
Topics to be covered include the choice of therapy (such as in-center hemodialysis, home dialysis, or kidney transplant) and the management of comorbidities, which can help delay the need for dialysis. Historical trends in the overall percentage of all dialysis patients on home dialysis—including both Medicare and non-Medicare patients—show a general decrease between 1988 and 2008 and an increase thereafter through 2012. According to USRDS data, 16 percent of 104,200 dialysis patients received home dialysis in 1988. Home dialysis use generally decreased over the next 20 years, reaching 9 percent in 2008, and then slightly increased to 11 percent of 450,600 dialysis patients in 2012—the most recent year of data available from USRDS. (See fig. 2.) More generally, the percentage of all patients on home dialysis declined overall from 1988 through 2012 because the number of these patients increased at a slower rate than the total number of all patients on dialysis. During the time period from 1988 through 2012, most home dialysis patients received peritoneal dialysis as opposed to home hemodialysis. The more recent increase in use of home dialysis is also reflected in CMS data for adult Medicare dialysis patients, showing an increase from 8 percent using home dialysis in January 2010 to about 10 percent as of March 2015. Literature we reviewed and stakeholders we interviewed suggested several factors that may have contributed to the trends in home dialysis use from 1988 through 2012. Looking at the initial decline between 1988 and 2008, contributing factors may have included increased capacity to provide in-center hemodialysis and changes in the dialysis population. Increased capacity to provide in-center hemodialysis. The growth in facilities’ capacity to provide in-center hemodialysis from 1988 to 2008 outpaced the growth in the dialysis patient population over the same time period.
Specifically, the number of dialysis stations, which include the treatment areas and dialysis machines used to provide in-center hemodialysis, increased at an average annual rate of 7.3 percent during this time period, while the number of patients increased at an average annual rate of 6.8 percent. As a result, dialysis facilities may have had a greater financial incentive to treat patients in facilities in an effort to use this expanded capacity, according to literature we reviewed. Changes in the dialysis population. The increased age and prevalence of comorbidities in the dialysis population may have reduced the portion considered clinically appropriate for home dialysis. Dialysis patients who are older and those with comorbid conditions may be less physically able to dialyze at home. From 1988 to 2008, the mean age of a dialysis patient rose from 52.2 years to 58.6 years. Similarly, the proportion of the dialysis population affected by various comorbid conditions increased during this time period. For example, the percentage of dialysis patients with diabetes as the primary cause of ESRD increased from 24.6 percent in 1988 to 43.1 percent in 2008. Medicare payment methods and concerns about the effectiveness of peritoneal dialysis may have played a role in the decline in home dialysis use between 1988 and 2008, but changes in both factors may have also contributed to recent increases in use. Medicare payment methods for injectable drugs. Medicare payment methods prior to 2011 may have given facilities a financial incentive to provide in-center rather than home dialysis. Before 2011, Medicare paid separately for injectable drugs rather than including them in the bundled payment. As a result, Medicare payments to facilities for dialysis care—including the payments for injectable drugs—could have been lower for home patients because of their lower use, on average, of injectable drugs. 
However, the payment changes in 2011 reduced the incentive to provide in-center hemodialysis relative to home dialysis because the Medicare payment for dialysis treatments and related services, such as injectable drugs, no longer differed based on the type of dialysis received by the patient. Concerns about effectiveness of peritoneal dialysis. Several studies published in the mid-1990s indicated poorer outcomes for peritoneal dialysis compared to hemodialysis, and these studies may have made some physicians reluctant to prescribe peritoneal dialysis, according to stakeholders and literature we reviewed. However, stakeholders identified more recent studies indicating that outcomes for peritoneal dialysis are comparable to hemodialysis. These newer studies may have contributed to the recent increases in home dialysis use by mitigating concerns about the effectiveness of peritoneal dialysis and by making physicians more comfortable with prescribing it. Estimates from dialysis experts and other stakeholders suggest that further increases in the use of home dialysis are possible over the long term. The home dialysis experts and stakeholders we interviewed indicated that home dialysis could be clinically appropriate for at least half of patients. However, the percentage of patients who could realistically be expected to dialyze at home is lower because of other factors such as patient preferences. For example, at a meeting in 2013, the chief medical officers of 14 dialysis facility chains jointly estimated that a realistic target for home dialysis would be 25 percent of dialysis patients. To achieve this target, they said that changes, such as increased patient education and changes to payment policies, would need to occur. As another example, physician stakeholders we interviewed estimated that 15 to 25 percent of dialysis patients could realistically be on home dialysis. 
In the short term, however, an ongoing shortage of peritoneal dialysis solution has reduced the use of home dialysis, and this shortage could have a long-term impact as well. Medicare claims data analyzed by CMS show that the percentage of Medicare dialysis patients on home dialysis reached 10.7 percent in August 2014, when the shortage was first announced, but had declined to 10.3 percent as of March 2015. CMS officials attributed this decline to the shortage in the supply of peritoneal dialysis solution because the decline did not occur among facilities owned by one large dialysis facility chain that manufactures its own peritoneal dialysis solution and has not experienced a shortage. Some dialysis facility chains told us that, because of this shortage, they limited the number of new patients on peritoneal dialysis. In addition, one physician association stated that the shortage could have long-term implications. The association said that some physicians are reluctant to prescribe this type of dialysis, even when a facility has the capacity to start a patient on peritoneal dialysis, because of uncertainties about peritoneal dialysis supplies. Medicare payments to dialysis facilities, including those that provided home dialysis, gave them an overall financial incentive to provide dialysis, as shown by their generally positive Medicare margins. The average Medicare margin for all 3,891 freestanding facilities in our analysis was 4.0 percent in 2012—that is, Medicare payments exceeded Medicare allowable costs for dialysis treatments by 4.0 percent. Similarly, the average Medicare margin for the 1,569 freestanding facilities that provided one or both types of home dialysis was 4.20 percent in 2012. (See table 1.) Focusing on those facilities that provided home dialysis, nearly all (94 percent) provided both in-center and one or both types of home dialysis.
In addition, although margins were positive, on average, for these facilities, we found that the Medicare margin for large facilities (7.21 percent) was considerably higher, on average, than for small facilities (-3.49 percent). We also found that most of the patient years (81 percent) were devoted to in-center hemodialysis, followed by peritoneal dialysis (15 percent) and home hemodialysis (4 percent). Small and large facilities followed the same pattern. In addition to giving an incentive to provide dialysis in general, Medicare payments to facilities likely encourage the use of peritoneal dialysis—the predominant type of home dialysis—over the long term. The payment rate for peritoneal dialysis is the same as the rate for hemodialysis provided in facilities or at home, but the cost of providing peritoneal dialysis is generally lower, according to CMS and stakeholders we interviewed. When CMS established the current payment system, it stated that its decision to have a single payment rate regardless of the type of dialysis would give facilities a powerful financial incentive to encourage the use of home dialysis, when appropriate. Another financial incentive that exists for both peritoneal dialysis and home hemodialysis is that facilities can receive additional months of payments for patients under 65 who undergo home dialysis training. Specifically, for patients under age 65, Medicare coverage typically begins in the fourth month after the patient begins dialysis, but coverage begins earlier if the patient undergoes home dialysis training. This incentive is augmented because payments to facilities are significantly higher during the first 4 months of dialysis. These incentives to provide home dialysis, compared to in-center hemodialysis, are consistent with CMS’s goal of fostering patient independence through greater use of home dialysis among patients for whom it is appropriate. 
Although over the long term facilities may have a financial incentive to encourage the use of one or both types of home dialysis, the impact of this incentive could be limited in the short term. This is because we found that, in the short term, expanding the provision of in-center hemodialysis at a facility generally tends to increase that facility’s Medicare margin and that the estimated increase is more than would result if the facility instead expanded the provision of either type of home dialysis. In particular, we found that, on average, facilities that provided home dialysis could improve their financial position in the short term by increasing their provision of in-center hemodialysis. An additional patient year of in-center hemodialysis improved the margin by an estimated 0.15 percentage points—for example, from 4.20 to 4.35 percent. (See fig. 3.) In contrast, increasing home dialysis resulted in a smaller benefit. Adding a patient year of peritoneal dialysis improved the margin by an estimated 0.08 percentage points, and adding a patient year of home hemodialysis had no statistically significant effect on the margin; the estimated average reduction of 0.04 percentage points was not statistically different from zero. The pattern of the results in figure 3 for the three types of dialysis was similar for small and large facilities. (See results in app. I.) Our findings on the relative impact of the incentives in the short term are generally consistent with information on the cost of each type of dialysis provided to us by CMS and stakeholders we interviewed. First, consistent with our finding that facilities have a greater short-term incentive for in-center hemodialysis, stakeholders we interviewed said that facilities’ costs for increasing their provision of in-center hemodialysis may be lower than for either type of home dialysis.
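The estimated margin effects described above can be illustrated for a facility starting at the average 4.20 percent margin. The sketch below simply applies our point estimates; the home hemodialysis estimate was not statistically different from zero, so that entry should be read as indistinguishable from no effect.

```python
# Illustrative application of the estimated short-term margin effects of
# adding one patient year of each type of dialysis, for a facility at the
# average 4.20 percent Medicare margin. Effects are point estimates in
# percentage points; the home hemodialysis effect was not statistically
# significant.

BASELINE_MARGIN = 4.20  # average margin for facilities providing home dialysis

MARGIN_EFFECT = {
    "in-center hemodialysis": 0.15,
    "peritoneal dialysis": 0.08,
    "home hemodialysis": -0.04,  # not statistically different from zero
}

for dialysis_type, effect in MARGIN_EFFECT.items():
    new_margin = BASELINE_MARGIN + effect
    print(f"Add one patient year of {dialysis_type}: {new_margin:.2f}%")
```

The ordering of the three effects, rather than their absolute size, is what drives the short-term incentive to favor in-center hemodialysis.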
For example, although the average cost of an in-center hemodialysis treatment is typically higher than the average cost of a peritoneal dialysis treatment, facilities may be able to add an in-center patient without incurring the cost of an additional dialysis machine because each machine can be used by six to eight patients. In contrast, when adding a home patient, facilities generally incur costs for additional equipment, which is dedicated to a single patient. Second, some stakeholders said that the cost of providing home hemodialysis, in particular, can be higher than for other types of dialysis in part because home hemodialysis patients often receive more than three treatments per week and Medicare’s policy is not to pay for these additional treatments unless medically justified. Finally, when comparing the two types of home dialysis, CMS and the stakeholders generally reported that the cost of home hemodialysis, including training, was higher than for peritoneal dialysis. They said that home hemodialysis training is more costly because of its greater complexity, such as learning to place needles in the vascular access site and to respond to alarms. Stakeholders also told us that Medicare payments cover only a portion of the upfront costs for training a patient, particularly one on home hemodialysis. CMS increased the training add-on payment beginning in 2014 in response to public comments it received on the cost of home hemodialysis training, but the agency lacks reliable data for determining whether the revised payment is adequate. Specifically, CMS lacks reliable data on the cost of home dialysis treatment and training and on the staff time needed to provide training. We found that the cost report data on facilities’ costs for each type of dialysis, including costs for home dialysis training, were not sufficiently reliable.
Although we determined that data on facilities’ total costs across all types of dialysis were sufficiently reliable for purposes of our analysis, stakeholders reported that these total costs were not accurately allocated to each type of dialysis and to training. One reason for this inaccuracy may be that some facilities allocated certain types of costs, such as dialysis-related drugs and supplies, based on the number of treatments for each type of dialysis. Representatives of these facilities reported that CMS’s Medicare Administrative Contractors had approved this allocation method. However, the number of treatments by type of dialysis may not be a reliable basis for allocating such costs. For example, studies have shown that utilization of dialysis-related drugs differs by type of dialysis, and stakeholders reported that supply costs can differ as well. In addition, CMS officials told us that they do not regularly review the reliability of these data. We also found that CMS lacks consistent data on the staff time required to provide home dialysis training, even though the agency used the number of hours of nursing time as the basis for its training add-on payment rate. For example, in 2012, CMS acknowledged that 1 hour did not necessarily correspond to the amount of time needed to train a patient, even though it had used 1 hour as the basis for the payment. More recently, when CMS increased the training add-on by basing it on 1.5 hours of nursing time, the agency said that the public comments it received did not provide consistent information on the number of hours spent on training; the number of hours reported in these comments varied from 2 to 6 hours per treatment. The adequacy of training payments could affect facilities’ incentives for providing home dialysis, but it is unclear whether these payments are adequate given CMS’s lack of reliable data on the cost of training and on costs by type of dialysis.
Reliable cost report data are important for CMS to be able to perform effective fiscal management of the program, which involves assessing the adequacy of payment rates. In particular, if the training payments are inadequate, facilities may be less willing to provide home dialysis, which could undermine CMS’s goal of encouraging the use of home dialysis when appropriate. Medicare physician payments for dialysis care do not consistently result in incentives for physicians to prescribe home dialysis. In addition, few Medicare patients have used Medicare’s KDE benefit, and this low usage may be due to statutory payment limitations on the types of providers permitted to furnish the benefit and on the Medicare patients eligible to receive it. Finally, physicians’ limited exposure to home dialysis during nephrology training programs is a third factor that may constrain the extent to which physicians prescribe home dialysis. We found that the structure of Medicare’s monthly physician payments—one of several factors that could affect the use of home dialysis—may give physicians a disincentive for prescribing home dialysis, which could undermine CMS’s goal of encouraging the use of home dialysis when appropriate. CMS, when it established the current method of paying physicians a monthly payment to manage patients’ dialysis, stated that this method would encourage the use of home dialysis by giving physicians an incentive to manage home patients. According to CMS, this incentive would exist because the monthly payment rate for managing the dialysis care of home patients, which requires a single in-person visit, was approximately equal to the rate for managing and providing two to three visits to in-center patients. However, we found that, in 2013, the rate of $237 for managing home patients was lower than the average payment of $266 and maximum payment of $282 for managing in-center patients. (See table 2.)
This difference in payment rates may discourage physicians from prescribing home dialysis. Physician associations and other physicians we interviewed told us that Medicare payments may give physicians a disincentive for prescribing home dialysis. They stated that, even though the payment levels for managing home patients are typically lower, the visits with home patients are often longer and more comprehensive; this is in part because physicians may conduct visits with individual home patients in a private setting, but they may be able to more easily visit multiple in-center patients on a single day as they receive dialysis. The physician associations we interviewed also said that physicians may spend a similar amount of time outside of visits to manage the care of home patients and that physicians are required to provide at least one visit per month to perform a complete assessment of the patient. In addition, while physicians can receive a higher payment for providing more than one visit to in-center patients, these additional visits may be provided by nurse practitioners and certain other nonphysician practitioners, who may be less costly. CMS has not revised the overall structure for paying physicians to manage dialysis patients’ care since 2004, although it has addressed some stakeholder concerns, such as how it paid physicians when home patients were in the hospital. In contrast to the monthly payments, Medicare physician payments related to patients’ training may provide physicians with financial incentives for prescribing home dialysis. For certain patients who start home training—those under 65 who are eligible for Medicare solely due to ESRD—the monthly payments to physicians can begin in the first month rather than the fourth month of treatment, which may provide physicians with an incentive to prescribe home dialysis.
In addition, Medicare makes a one-time payment of up to $500 for each patient who has completed home dialysis training under the physician’s supervision. One stakeholder told us that this training payment may provide an incentive for physicians to prescribe home dialysis. Few Medicare patients have used the KDE benefit, which covers the choice of therapy (such as in-center hemodialysis, home dialysis, or kidney transplant) and the management of comorbidities, and stakeholders generally told us this low usage was related to payment limitations on the types of providers who are permitted to furnish the benefit and on the Medicare patients eligible to receive it. According to USRDS, less than 2 percent of eligible Medicare patients used the KDE benefit in 2010 and 2011—the first two years it was available—and use of the benefit has decreased since then. When CMS implemented the KDE benefit, the agency identified specific categories of providers—physicians, physician’s assistants, nurse practitioners, and clinical nurse specialists—as eligible to receive payment for furnishing the benefit. Stakeholders, including physician associations, told us that other categories of trained healthcare providers (such as registered nurses, social workers, and dieticians who may be part of the nephrology practice) are also qualified to provide predialysis education. However, when asked if other types of providers could be eligible to receive payment, CMS officials said that the statute specified the categories of providers and that the agency was limited to those providers. Dialysis facilities are also not eligible to receive payment for the KDE benefit. Although facility representatives said that they were equipped to provide education to these patients, including education on the choice of type of dialysis, CMS and some other stakeholders said that one reason facilities are not eligible to provide the KDE benefit is their financial interest in treatment decisions. 
For example, the KDE benefit is designed to provide objective education to patients on steps that can be taken to delay the need for dialysis and on the choice of therapies, which includes kidney transplant, as well as home dialysis and in-center hemodialysis. Some of these options could be contrary to dialysis facilities’ financial interest. Similarly, CMS identified a specific category of patients—those with Stage IV chronic kidney disease—as eligible to receive the KDE benefit. Physician stakeholders said that certain other categories of patients, such as those in Stage III or those in Stage V who have not yet started dialysis, may also benefit from Medicare coverage of timely predialysis education. However, when asked if other categories of patients could be eligible to receive the KDE benefit, CMS officials said that the agency was limited by statute to Stage IV patients. The low usage of the KDE benefit, which may be a result of these payment limitations, suggests that it may be difficult for Medicare patients to receive this education, which is designed to help them make informed treatment decisions. Literature and stakeholders have underscored the value of predialysis education to help patients make informed treatment decisions, and also indicated that patients who receive it may be more likely to choose home dialysis. Literature we reviewed and nearly all of the stakeholders we interviewed indicated that physicians have limited exposure to home dialysis during nephrology training programs and thus may not feel comfortable prescribing it. One study found that 56 percent of physicians who completed training said they felt well trained and competent in the care of peritoneal dialysis patients, while only 16 percent felt this way about the care of home hemodialysis patients. Furthermore, another study found that physicians who felt more prepared to care for peritoneal dialysis patients were more likely to prescribe it.
Literature we reviewed and stakeholders identified two main factors that may limit physicians’ exposure to home dialysis while they undergo nephrology training. First, the nephrology board certification examination administered by the American Board of Internal Medicine does not emphasize home dialysis, particularly home hemodialysis. The examination blueprint published by the board shows that approximately 9 percent of the board certification examination is dedicated to questions regarding ESRD, which may include hemodialysis and peritoneal dialysis but, according to one board official, is unlikely to include home hemodialysis. Literature and stakeholders suggested that greater emphasis on home dialysis on certification examinations might lead to a greater emphasis on home dialysis in nephrology training. Second, according to an Institute of Medicine report, the way Medicare provides graduate medical education payments may discourage nephrology training outside of the hospital, and one stakeholder said this system may impede physician exposure to home patients. Medicare pays teaching hospitals directly to help cover the costs of graduate medical education, including the salaries of the physicians in training. Hospitals have the option to allow physicians to train at a second, off-site location—for example, a dialysis facility with a robust home dialysis program—if the hospital continues to pay the physicians’ salaries. However, the stakeholder said that hospitals may be reluctant to allow physicians to train at a second, off-site location, such as a dialysis facility, because patients at such locations may not be served primarily by the hospital. The American Society of Nephrology has acknowledged that nephrology training in home dialysis needs to improve. As a result, the society has developed and disseminated guidelines identifying training specific to home dialysis and providing suggestions on curriculum organization to increase physician exposure to home patients.
For example, the guidelines suggest physicians in training should demonstrate knowledge of the infectious and noninfectious complications specific to peritoneal dialysis and home hemodialysis. They also suggest a program’s curriculum should include observation of and participation in a patient’s training to conduct home dialysis. The number and percentage of patients choosing to dialyze at home have increased in recent years, and our interviews with home dialysis experts and stakeholders indicated potential for future growth. To realize this potential, it is important for the incentives associated with Medicare payments to facilities and physicians to be consistent with CMS’s goal of encouraging the use of home dialysis among patients for whom it is appropriate. One aspect of payment policy—training add-on payments to facilities—has a direct impact on facilities’ incentives for providing home dialysis. However, whether these training payments are adequate continues to be unclear because CMS lacks reliable data on the cost of home dialysis treatment and training for assessing payment adequacy. If training payments are inadequate, facilities may be less willing to provide home dialysis. In addition, the way Medicare pays physicians to manage the care of dialysis patients may be discouraging physicians from prescribing home dialysis. Finally, the limited use of the KDE benefit suggests that it may be difficult for Medicare patients to receive this education, which is designed to help them make informed decisions related to their ESRD treatment, including decisions on the choice of the type of dialysis, as well as options such as kidney transplant and steps to delay the need for dialysis. 
To determine the extent to which Medicare payments are aligned with costs for specific types of dialysis treatment and training, the Administrator of CMS should take steps to improve the reliability of the cost report data for treatment and training associated with specific types of dialysis. The Administrator of CMS should examine Medicare policies for monthly payments to physicians to manage the care of dialysis patients and revise them if necessary to ensure that these policies are consistent with CMS’s goal of encouraging the use of home dialysis among patients for whom it is appropriate. To ensure that patients with chronic kidney disease receive objective and timely education related to this condition, the Administrator of CMS should examine the Kidney Disease Education benefit and, if appropriate, seek legislation to revise the categories of providers and patients eligible for the benefit. We received written comments on our draft report from the Department of Health and Human Services (HHS). These comments are reprinted in appendix II. Because Medicare payments for home dialysis have implications for patients and the dialysis industry, we also obtained comments on our draft from groups representing home dialysis patients, large and small dialysis facility chains and independent facilities, and nephrologists. Following is our summary of and response to comments from HHS and these patient and industry groups. In written comments on a draft of this report, HHS reiterated its goal of fostering patient independence through greater use of home dialysis among patients for whom it is appropriate and pointed out that home dialysis use has increased since 2011 when the bundled payment system was implemented. HHS concurred with two of our three recommendations. 
In response to our first recommendation that CMS improve the reliability of cost report data for training and treatment associated with specific types of dialysis, HHS said that it is willing to consider reasonable modifications to the cost report that could improve the reliability of cost report data. HHS also stated that it was conducting audits of cost reports as required by the Protecting Access to Medicare Act of 2014. HHS also concurred with our second recommendation to examine Medicare policies for monthly payments to physicians to manage patients’ dialysis to ensure that these policies are consistent with CMS’s goal of encouraging home dialysis use when appropriate. HHS said that it would review these services through CMS’s misvalued code initiative, which involves identifying and evaluating physician services that may not be valued appropriately for Medicare payment purposes and then adjusting Medicare payment as needed. We believe that this examination and any resulting revisions to these payment policies have the potential to address our recommendation. HHS did not concur with our third recommendation that CMS examine the KDE benefit and, if appropriate, seek legislation to revise the categories of providers and patients eligible for the benefit. HHS said that CMS works continuously to appropriately pay for ESRD services and must prioritize its activities to improve care for dialysis patients. While we acknowledge the need for HHS to prioritize its activities to improve dialysis care, it is important for HHS to help ensure that Medicare patients with chronic kidney disease understand their condition, how to manage it, and the implications of the various treatment options available, particularly given the central role of patient choice in dialysis care. The limited use of the KDE benefit suggests that it may be difficult for Medicare patients to receive this education and underscores the need for CMS to examine and potentially revise the benefit. 
We received comments from five groups: (1) Home Dialyzors United (HDU), which represents home dialysis patients; (2) the National Renal Administrators Association (NRAA), which represents small dialysis facility chains and independent facilities; (3) DaVita, which is one of the two large dialysis facility chains; (4) Fresenius, which is the other large dialysis facility chain; and (5) the Renal Physicians Association (RPA), which represents nephrologists. The groups expressed appreciation for the opportunity to review the draft, and the three groups that commented on the quality of the overall report stated that it accurately addressed issues related to the use of home dialysis. Three of the groups commented on some or all of our recommendations, while the remaining two groups did not comment specifically on this aspect of our report. Specifically, HDU, NRAA, and RPA agreed with our first recommendation that CMS improve the reliability of cost report data for treatment and training associated with specific types of dialysis. A fourth group—Fresenius—expressed concern about the reliability of data on the costs of home dialysis, which was consistent with our recommendation that CMS needs to improve the reliability of these data. RPA, in addition to agreeing with this recommendation, questioned the reliability of the data on total facility costs that we used for our analysis. Although it was beyond the scope of our report to verify the accuracy of each facility’s cost report, we took several steps to assess the cost report data that we analyzed. These steps included verifying the cost report data for internal consistency and checking the number of dialysis treatments reported against Medicare claims. The fact that implementing these steps caused us to exclude some facilities’ data from our analysis suggests that the potential exists to improve the accuracy of these data. 
CMS’s implementation of our recommendation and auditing of cost reports under the Protecting Access to Medicare Act of 2014 create the opportunity for CMS to begin addressing this issue. NRAA, another group that agreed with our first recommendation, recommended that we or CMS develop mechanisms in addition to the cost reports to more accurately capture the resources devoted to providing home dialysis to each patient, but developing such mechanisms was beyond the scope of this report. One group (HDU) agreed with our second recommendation that CMS examine and, if necessary, revise Medicare payment policies for physicians to manage the care of dialysis patients, but a second group (RPA) urged us to reconsider the recommendation out of concern that implementing it could lead to cuts in physician payments for home dialysis. While RPA agreed that the current payment method gives physicians a disincentive for prescribing home dialysis, the group emphasized that it was only one of numerous factors that affect this treatment decision. RPA also stated that it would support certain payment changes that would increase physicians’ incentives to prescribe home dialysis, which could include using performance measures to promote home dialysis use. However, RPA expressed concern that the process CMS may use for examining and potentially revising this payment method could lead to cuts in physician payments for home dialysis, which RPA asserted would further discourage its use and be contrary to the intent of our recommendation. We agree that Medicare’s current method of paying physicians to manage patients’ dialysis care is one of several factors that could influence physicians’ decisions to prescribe home dialysis and described these factors in our report. 
In addition, while we do not know what changes, if any, CMS will make to physician payments for managing patients’ dialysis care, we believe the intent of our recommendation—to ensure that these payments are consistent with CMS’s goal to encourage the use of home dialysis when appropriate—is clear. Three groups (HDU, NRAA, and RPA) agreed with our third recommendation that CMS examine the KDE benefit and if appropriate seek revisions to the categories of providers and patients eligible for the benefit. RPA also emphasized its agreement with our findings that the statutory limitations on the providers and patients eligible for the benefit have contributed to the limited use of the benefit. These groups also urged other changes to the KDE benefit such as removing the requirement for a copayment and making documentation requirements more flexible. The limitations in the categories of eligible providers and patients were cited in our interviews with stakeholders as the main reasons for the limited use of the KDE benefit, but we acknowledge that other opportunities may exist for improving the benefit’s design. NRAA also pointed out that facilities currently educate patients with chronic kidney disease on the choice of type of dialysis but are not reimbursed by Medicare for doing so. We stated in the report that, according to the large and small dialysis facility chains we interviewed, they have the capacity to educate such patients about their condition. However, we also reported the concern raised by CMS and certain other stakeholders that the education provided by facilities may not be objective because they have a financial interest in patients’ treatment decisions. The patient and industry groups also made several comments in addition to those described above. DaVita, NRAA, and RPA stated that the use of telehealth by physicians to manage the care of dialysis patients could facilitate the use of home dialysis. 
We noted in the report that certain visits for managing in-center patients can be provided via telehealth. CMS has established a process for identifying other services—such as managing home patients—that could be provided via telehealth under Medicare, and examining this process was beyond the scope of this report. HDU, NRAA, and RPA stressed the importance of patient-centered dialysis care and of ensuring that patients have sufficient information to make informed decisions on the type of dialysis. We agree that patient preferences and patient education are central to decisions regarding the type of dialysis and have described these and other factors that could affect these decisions. DaVita and RPA stressed the impact of the ongoing shortage of peritoneal dialysis solution. In particular, DaVita said the shortage is the biggest barrier to the use of home dialysis. We agree that this shortage could have a long-term impact on the use of home dialysis and revised the report to incorporate this perspective. DaVita and HDU asserted that Medicare’s method of paying for dialysis care separately from other services, such as inpatient care, could affect incentives for providing home dialysis. For example, DaVita suggested that the incentive to provide home hemodialysis could increase if a single entity were financially responsible for all Medicare services provided to a Medicare patient. This incentive could increase because, according to DaVita, the cost of inpatient care may be lower for home hemodialysis patients than for in-center hemodialysis patients. We agree that choosing one type of dialysis over another could affect the use of other types of Medicare services, but examining such implications was beyond the scope of this report. 
NRAA and RPA appreciated that our report addressed the role of nephrology training programs in the use of home dialysis, and both groups said that we or CMS should further examine how physicians can receive greater exposure to home dialysis through these programs. RPA said that this examination could also address the role of Medicare payments for graduate medical education. While we acknowledge the importance of these issues, further examination of them was beyond the scope of our report. In addition to the comments described above, the patient and industry groups provided technical comments on the draft, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-7114, or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. GAO staff who made key contributions to this report are listed in appendix III.

This appendix describes the data and methods we used for our analysis of Medicare margins, which was part of our effort to examine incentives associated with Medicare payments to dialysis facilities. We analyzed Medicare cost report data for 2012 from freestanding facilities located in the 50 states and the District of Columbia. We took steps to restrict our analysis to data from facilities with similar cost and payment structures.
We did not include hospital-based facilities in our analysis because these facilities’ reported costs may be driven in part by hospitals’ methods for allocating overhead costs within these hospitals rather than by the costs of the dialysis facility itself. Because of possible differences in cost structures, we excluded facilities that (1) provided any pediatric or intermittent peritoneal dialysis treatments, (2) were government-owned, or (3) had cost reporting periods not equal to calendar year 2012, which generally occurred when facilities changed ownership, opened, closed, or changed Medicare status during the year. Because of possible differences in payment structures, we also limited our analysis to facilities that elected to be paid fully under the bundled payment system. Implementing these steps resulted in the exclusion of approximately 19 to 20 percent of the 5,380 freestanding facilities originally in the cost report data set. We also took several steps to assess the reliability of facilities’ cost report data on total costs, total Medicare payments, and the number of dialysis treatments provided. In particular, we checked for and excluded facilities with internal inconsistencies among variables such as reporting that they provided more treatments to Medicare patients than to Medicare and non-Medicare patients combined or reporting negative treatment numbers. In addition, we excluded facilities that reported unusually high or low average costs or average Medicare payments, which may be indicative of data entry errors. Finally, we compared the number of Medicare-covered treatments reported on the cost reports to similar data from Medicare claims on the number of paid treatments, and we excluded facilities with inconsistencies.
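The reliability screens described above can be sketched as simple filters. This is an illustrative sketch only: the field names and cost cutoffs below are hypothetical and do not correspond to the actual variables on the CMS cost report.

```python
# Hypothetical facility records; fields and cutoffs are invented for
# illustration and are not the actual CMS cost report variables.
facilities = [
    {"id": 1, "medicare_rx": 5000, "total_rx": 6000, "avg_cost": 250.0},
    {"id": 2, "medicare_rx": 7000, "total_rx": 6500, "avg_cost": 240.0},   # inconsistent counts
    {"id": 3, "medicare_rx": -10,  "total_rx": 4000, "avg_cost": 260.0},   # negative treatments
    {"id": 4, "medicare_rx": 3000, "total_rx": 3500, "avg_cost": 9999.0},  # cost outlier
]

def internally_consistent(f):
    """Medicare-covered treatments must be non-negative and cannot exceed
    treatments for Medicare and non-Medicare patients combined."""
    return 0 <= f["medicare_rx"] <= f["total_rx"]

def plausible_cost(f, low=100.0, high=1000.0):
    """Flag unusually high or low average costs, which may indicate data
    entry errors (cutoffs here are purely illustrative)."""
    return low <= f["avg_cost"] <= high

# Keep only facilities that pass both screens.
analysis_set = [f for f in facilities if internally_consistent(f) and plausible_cost(f)]
print([f["id"] for f in analysis_set])  # → [1]
```

A comparable screen against Medicare claims data (the third step described above) would follow the same pattern: drop any facility whose reported treatment count diverges from the count of paid treatments on its claims.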
Implementing these steps to assess the reliability of the data resulted in the exclusion of an additional approximately 8 to 9 percent of the 5,380 freestanding facilities originally in the cost report data set, leaving 3,891 (72 percent) of these facilities in our analysis. We focused our analysis primarily on the 1,569 of these 3,891 freestanding facilities that provided home dialysis (defined as either home hemodialysis and/or peritoneal dialysis) to Medicare dialysis patients in 2012. We determined that the data on total costs, total Medicare payments, and number of dialysis treatments provided were sufficiently reliable for the purposes of our analysis.

Medicare margin = (Medicare payments − estimated Medicare costs) / Medicare payments

We calculated the Medicare margin for all facilities that provided home dialysis. (See table 3.) When calculating the average margin for facilities in our analysis, we weighted the average by the total number of Medicare-covered patient years of dialysis. We classified facilities as small or large based on whether their number of Medicare patient years was below or above the median number of patient years among the facilities in our analysis that provided home dialysis. To examine incentives associated with each type of dialysis, we used multiple linear regression analysis to estimate the extent to which adding a patient year of peritoneal dialysis, home hemodialysis, and in-center hemodialysis was associated with an increase or decrease in facilities’ Medicare margins. The explanatory variables of our regression model included, for each type of dialysis, a binary variable for whether or not the facility provided that type of dialysis and a continuous variable with the number of patient years for that type of dialysis.
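The margin calculation and patient-year weighting described above can be sketched as follows; the facility payment, cost, and patient-year figures are invented for illustration.

```python
# Hypothetical facility data: (Medicare payments, estimated Medicare
# costs, Medicare-covered patient years of dialysis).
facilities = [
    (1_000_000.0,   950_000.0, 40.0),
    (  500_000.0,   520_000.0, 15.0),  # negative margin
    (2_000_000.0, 1_800_000.0, 80.0),
]

def medicare_margin(payments, costs):
    """Margin = (Medicare payments - estimated Medicare costs) / payments."""
    return (payments - costs) / payments

# Average margin across facilities, weighted by Medicare patient years.
weights = [py for _, _, py in facilities]
margins = [medicare_margin(p, c) for p, c, _ in facilities]
weighted_avg = sum(w * m for w, m in zip(weights, margins)) / sum(weights)
print(round(weighted_avg, 4))  # → 0.0696
```

Weighting by patient years means that a large facility’s margin influences the average more than a small facility’s, which mirrors the approach the analysis describes.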
To control for other factors that could affect a facility’s Medicare margin, our model also included binary variables for whether or not the facility was located in an urban area or whether or not the facility was affiliated with a large dialysis facility chain. See table 4 for more information about the characteristics included in the model. As shown in table 5 and discussed further in the report, the results of our regression model show the effect on facilities’ Medicare margin from adding one patient year of a given type of dialysis.

In addition to the contact named above, William Black, Assistant Director; George Bogart; Andy Johnson; Corissa Kiyan; Hannah Marston Minter; Richard Lipinski; Elizabeth T. Morrison; Vikki Porter; and Eric Wedum made key contributions to this report.
In 2013, Medicare spent about $11.7 billion on dialysis care for about 376,000 Medicare patients with end-stage renal disease, a condition of permanent kidney failure. Some of these patients performed dialysis at home, and such patients may have increased autonomy and health-related quality of life. GAO was asked to study Medicare patients' use of home dialysis and key factors affecting its use. This report examines (1) trends in home dialysis use and estimates of the potential for wider use, (2) incentives for home dialysis associated with Medicare payments to dialysis facilities, and (3) incentives for home dialysis associated with Medicare payments to physicians. GAO reviewed CMS policies and relevant laws and regulations, and GAO analyzed data from CMS (2010-2015), the United States Renal Data System (1988-2012), and Medicare cost reports (2012), the most recent years with complete data available. GAO also interviewed CMS officials, selected dialysis facility chains, physician and patient associations, and experts on home dialysis. The percentage of dialysis patients who received home dialysis generally declined between 1988 and 2008 and then slightly increased thereafter through 2012, and stakeholder estimates suggest that future increases in the use of home dialysis are possible. Dialysis patients can receive treatments at home or in a facility. In 1988, 16 percent of 104,200 dialysis patients received home dialysis. Home dialysis use generally decreased over the next 20 years, reaching 9 percent in 2008, and then slightly increased to 11 percent of 450,600 dialysis patients in 2012—the most recent year of data for Medicare and non-Medicare patients. Physicians and other stakeholders estimated that 15 to 25 percent of patients could realistically be on home dialysis, suggesting that future increases in use are possible. 
In the short term, however, an ongoing shortage of supplies required for peritoneal dialysis—the most common type of home dialysis—reduced home dialysis use among Medicare patients from August 2014 to March 2015. Some stakeholders were also concerned the shortage could have a long-term impact. Medicare's payment policy likely gives facilities financial incentives to provide home dialysis, but these incentives may have a limited impact in the short term. According to the Centers for Medicare & Medicaid Services (CMS) within the Department of Health and Human Services (HHS), setting the facility payment for dialysis treatment at the same rate regardless of the type of dialysis gives facilities a powerful financial incentive to encourage the use of peritoneal dialysis when appropriate because it is generally less costly than other dialysis types. However, GAO found that facilities also have financial incentives in the short term to increase provision of hemodialysis in facilities, rather than increasing home dialysis. This is consistent with information from CMS and stakeholders GAO interviewed. For example, facilities may be able to add an in-center patient without paying for an additional dialysis machine, because each machine can be used by six to eight in-center patients. In contrast, for each new home patient, facilities may need to pay for an additional machine. The adequacy of Medicare payments for home dialysis training also affects facilities' financial incentives for home dialysis. Although CMS recently increased its payment for home dialysis training, it lacks reliable cost report data needed for effective fiscal management, which involves assessing payment adequacy. In particular, if training payments are inadequate, facilities may be less willing to provide home dialysis. Medicare payment policies may constrain physicians' prescribing of home dialysis. 
Specifically, Medicare's monthly payments to physicians for managing the care of home patients are often lower than for managing in-center patients even though physician stakeholders generally said that the time required may be similar. Medicare also pays for predialysis education—the Kidney Disease Education (KDE) benefit—which could help patients learn about home dialysis. However, less than 2 percent of eligible Medicare patients received the benefit in 2010 and 2011, and use has declined since then. According to stakeholders, the low usage was due to statutory limitations in the categories of providers and patients eligible for the benefit. CMS has established a goal of encouraging home dialysis use among patients for whom it is appropriate, but the differing monthly payments and low usage of the KDE benefit could undermine this goal. GAO recommends that CMS (1) take steps to improve the reliability of the cost report data, (2) examine and, if necessary, revise policies for paying physicians to manage the care of dialysis patients, and (3) examine and, if appropriate, seek legislation to revise the KDE benefit. HHS concurred with the first two recommendations but did not concur with the third. GAO continues to believe this recommendation is valid as discussed further in this report.
To date, the Congress has designated 24 national heritage areas, primarily in the eastern half of the country. Generally, national heritage areas focus on local efforts to preserve and interpret the role that certain sites, events, and resources have played in local history and their significance in the broader national context. Heritage areas share many similarities—such as recreational resources and historic sites—with national parks and other park system units but lack the stature and national significance required to qualify as such units. The process of becoming a national heritage area usually begins when local residents, businesses, and governments ask the Park Service, within the Department of the Interior, or the Congress for help in preserving their local heritage and resources. In response, although the Park Service currently has no program governing these activities, the agency provides technical assistance, such as conducting or reviewing studies to determine an area’s eligibility for heritage area status. The Congress then may designate the site as a national heritage area and set up a management entity for it. This entity could be a state or local governmental agency, an independent federal commission, or a private nonprofit corporation. Usually within 3 years of designation, the area is required to develop a management plan, which is to detail, among other things, the area’s goals and its plans for achieving those goals. The Park Service then reviews these plans, which must be approved by the Secretary of the Interior. After the Congress designates a heritage area, the Park Service enters into a cooperative agreement with the area’s management entity to assist the local community in organizing and planning the area. Each area can receive funding—generally limited to not more than $1 million a year for 10 or 15 years—through the Park Service’s budget. The agency allocates the funds to the area through the cooperative agreement. As proposed, S.
2543 would establish a systematic process for determining the suitability of proposed sites as national heritage areas and for designating those areas found to be qualified. In our March 2004 testimony, we stated that no systematic process exists for identifying qualified candidate sites and designating them as national heritage areas. We noted that, while the Congress generally has made designation decisions with the advice of the Park Service, it has, in some instances, designated heritage areas before the agency has fully evaluated them. Specifically, the Congress designated 10 of the 24 existing heritage areas without a thorough Park Service review of their qualifications and, in 6 of the 10 cases, the agency had recommended deferring action. S. 2543, however, would create a more systematic process that would make the Congress’ designation of a heritage area contingent on the prior completion of a suitability-feasibility study and the Secretary’s determination that the area meets certain criteria. In addition, under S. 2543, the Secretary could recommend against designation of a proposed heritage area based on the potential budgetary impact of the designation or other factors. Provisions in S. 2543 identify a number of criteria for the Secretary to use in determining a site’s suitability and feasibility as a national heritage area, including its national significance to the nation’s heritage and whether it provides outstanding recreational or educational opportunities. S. 2543 defines a heritage area as an area designated by the Congress that is nationally significant to the heritage of the United States and meets the other criteria specified in the bill. Further, S. 2543 defines national significance as possessing unique natural, historical, and other resources of exceptional value or quality and a high degree of integrity of location, setting, or association in illustrating or interpreting the heritage of the United States. 
Despite these very specific definitions, however, the criteria outlined in S. 2543 for determining an area’s suitability are very similar to those currently used by the Park Service. Our March 2004 testimony pointed out that these criteria are not specific enough to determine areas’ suitability. For example, one criterion states that a proposed area should reflect “traditions, customs, beliefs, and folk life that are a valuable part of the national story.” These criteria are open to interpretation and, using them, the agency has eliminated few sites as prospective heritage areas. As we stated in March, officials in the Park Service’s Northeast region, for example, believe the criteria are inadequate for screening purposes. The Park Service’s heritage area national coordinator believes, however, that the criteria are valuable but that the regions need additional guidance to apply them more consistently. The Park Service has recently developed guidance for applying these criteria, which will help to clarify how both the existing criteria and the criteria proposed in S. 2543 could be applied to better determine the suitability of a prospective heritage area. S. 2543 would impose some limits on the amount of federal funds that can be provided to national heritage areas through the National Park Service’s budget. In our March 2004 testimony, we stated that from fiscal years 1997 through 2002 about half of heritage areas’ funding came from the federal government. According to data from 22 of the 24 heritage areas, the areas received about $310 million in total funding. Of this total, about $154 million came from state and local governments and private sources and another $156 million came from the federal government. Over $50 million was dedicated heritage area funds provided through the Park Service, with another $44 million coming from other Park Service programs and about $61 million from 11 other federal sources. 
We also pointed out that the federal government’s total funding to these heritage areas increased from about $14 million in fiscal year 1997 to about $28 million in fiscal year 2002, peaking at over $34 million in fiscal year 2000. Table 1 shows the areas’ funding sources from fiscal years 1997 through 2002. S. 2543 restricts the funding for heritage areas that is allocated through the Park Service’s budget to $15 million for each fiscal year. Of this amount, not more than $1 million may be provided to an individual area in a given fiscal year and not more than $10 million over 15 years. For any fiscal year, the costs for oversight and administrative purposes cannot exceed more than 5 percent of the total funds. While this provision restricts the amount of federal funds passing from the Park Service—the largest provider of federal funds—to the heritage areas, these areas can obtain funding from other federal agencies as well. In March, we also pointed out that, generally, each area’s designating legislation imposes sunset provisions to limit the amount of federal funds provided to each heritage area. However, since 1984, five areas that reached their sunset dates had their funding extended. S. 2543 establishes a fixed time frame after which no additional funding, except for technical assistance and administrative oversight, will be provided. Specifically, it states that the Secretary of the Interior can no longer provide financial assistance after 15 years from the date that the local coordinating, or management, entity first received assistance. S. 2543 includes a number of provisions that could enhance the Park Service’s ability to hold national heritage areas accountable for their use of federal funds. In March, we stated that the Park Service oversees heritage areas’ activities by monitoring their implementation of the terms set forth in cooperative agreements. These terms, however, did not include several key management controls. 
That is, the agency had not (1) always reviewed areas’ financial audit reports, (2) developed consistent standards for reviewing areas’ management plans, and (3) developed results-oriented goals and measures for the agency’s heritage area activities, or required the areas to adopt a similar approach. Park Service officials said that the agency has not taken these actions because, without a program, it lacks adequate direction and funding. We recommended that, in the absence of a formal heritage area program within the Park Service, the Secretary of the Interior direct the Park Service to develop well-defined, consistent standards and processes for regional staff to use in reviewing and approving heritage areas’ management plans; require regional heritage area managers to regularly and consistently review heritage areas’ annual financial reports to ensure that the agency has a full accounting of their use of funds from all federal sources; develop results-oriented performance goals and measures for the agency’s heritage area activities, and require, in the cooperative agreements, that heritage areas adopt such a results-oriented management approach as well. S. 2543 takes several steps that will enhance accountability. In this regard, S. 2543 establishes a formal program for national heritage areas to be administered by the Secretary of the Interior. By establishing this program, the bill would provide the Park Service with the direction and funding that agency officials believe they need to impose management controls on their own and heritage areas’ activities. Furthermore, S. 2543 includes a number of provisions that address the concerns we raised in March. First, the bill establishes a schedule and criteria for reviewing and approving or disapproving heritage areas’ management plans. The Secretary must approve or disapprove the management plan within 180 days of receiving it. 
If disapproved, the Secretary must advise the local coordinating entity in writing of the reason for disapproval and may make recommendations for revision. After receiving a revised management plan, the Secretary must approve or disapprove the revised plan within 180 days. In addition, the bill identifies criteria that the Secretary is to use in determining whether to approve an area’s plan. This is a positive step towards establishing the well-defined, consistent standards and processes for reviewing and approving areas’ management plans that we recommended in March. S. 2543 also requires that the management plans include information on, among others, performance goals, the roles and functions of partners, and specific commitments by the partners to accomplish the activities outlined in the management plan. Furthermore, to ensure better accountability, the local coordinating entity must submit an annual report to the Secretary for each fiscal year for which the entity receives federal funds. This report must specify, among other things, the local coordinating entity’s performance goals and accomplishments, expenses and income, amount and sources of matching funds, amounts and sources of leveraged federal funds, and grants made to any other entity during the fiscal year. While provisions contained in S. 2543 address some of the issues we raised in our March testimony, they do not require that the Park Service consistently review areas’ financial audit reports or develop results-oriented goals and measures for the agency’s heritage area activities as we recommended in March. We continue to believe that these are important management controls that are necessary to ensure effective oversight and accountability. S. 2543 includes provisions to ensure that property owners’ rights and land use are not restricted by the establishment of national heritage areas.
In our March testimony, we stated that national heritage areas do not appear to have affected property owners’ rights. In fact, the designating legislation of 13 areas and the management plans of at least 6 provide assurances that such rights will be protected. However, property rights advocates are concerned about the effects of provisions in some management plans that encourage local governments to implement land use policies that are consistent with the heritage areas’ plans. Some advocates are concerned that these provisions may allow the heritage areas to indirectly influence zoning and land use planning in ways that could restrict owners’ use of their property. S. 2543 provides property owners the right to refrain from participating in any planned project or activity conducted within the national heritage area. Furthermore, it does not require any property owner to permit public access, nor does it modify public access under any other federal, state, or local law. It also does not alter any adopted land use regulation, approved land use plan, or other regulatory authority of any federal, state, or local authority. The growing interest in creating new heritage areas has raised concerns that their numbers may expand rapidly and significantly increase the amount of federal funds supporting them. A significant increase in new areas would put increasing pressure on the Park Service’s resources. Therefore, it is important to ensure that only those sites that are most qualified are designated as heritage areas. However, as we noted in March, no systematic process for designating these areas exists, and the Park Service does not have well-defined criteria for assessing sites’ qualifications or provide effective oversight of the areas’ use of federal funds and adherence to their management plans. 
As a result, the Congress and the public cannot be assured that future sites will have the necessary resources and local support needed to be viable or that federal funds supporting them will be well spent. Park Service officials pointed to the absence of a formal program as a significant obstacle to effective management of the agency’s heritage area efforts and oversight of the areas’ activities. Consequently, the Park Service is constrained in its ability to determine both the agency’s and areas’ accomplishments, whether the agency’s resources are being employed efficiently and effectively, and if federal funds could be better utilized to accomplish its goals. Several of the provisions in S. 2543 represent positive steps towards addressing the concerns we raised in March. In particular, by establishing a formal program, the bill would remove the obstacle to effective management and oversight identified by agency officials. Furthermore, by establishing a more systematic process for designating heritage areas, S. 2543’s provisions can help to ensure that only the most qualified sites become heritage areas. In addition, by placing a $15 million per year cap on funding to the heritage areas through the Park Service, the bill limits the federal government’s funding commitment to these areas. Finally, provisions in S. 2543 would enhance the Park Service’s ability to oversee and hold areas accountable for their use of federal funds by establishing criteria for reviewing and approving areas’ management plans and by requiring heritage areas to annually report on performance goals and accomplishments. To ensure greater accountability for the use of federal funds, the Congress may wish to consider amending S.
2543 by adding provisions directing the Secretary to (1) review heritage areas’ annual financial reports to ensure that the agency has a full accounting of heritage area funds from all federal sources, and (2) develop results-oriented performance goals and measures for the Park Service’s overall heritage area program. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have. For more information on this testimony, please contact Barry T. Hill at (202) 512-3841. Individuals making key contributions to this testimony included Preston S. Heard, Roy K. Judy, and Vincent P. Price. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Congress has established, or "designated," 24 national heritage areas to recognize the value of their local traditions, history, and resources to the nation's heritage. These areas, which include public and private lands, receive funds and assistance through cooperative agreements with the National Park Service, which has no formal program for them. They also receive funds from other agencies and nonfederal sources, and they are managed by local entities. Growing interest in new areas has raised concerns about rising federal costs and the risk of limits on private land use. GAO was asked to comment on how provisions of S. 2543 might affect issues identified in GAO's March 2004 testimony addressing the process for (1) designating heritage areas, (2) determining the amount of federal funding to these areas, (3) overseeing areas' activities and use of federal funds, and (4) determining the effects, if any, they have on private property rights. Provisions of S. 2543 would establish a systematic process for identifying and designating national heritage areas, addressing many of the concerns identified in GAO's March 2004 testimony. At that time, GAO reported that no such systematic process exists, noting that the Congress has, in some instances, designated heritage areas before the Park Service has fully evaluated them. S. 2543 contains provisions that would require that a suitability study be completed and that the Park Service determine that an area meets certain criteria before the Congress designates it as a heritage area. Although the bill defines heritage areas more specifically in terms of their national significance, applying the criteria outlined in S. 2543 will benefit from guidance that the Park Service has recently developed; this guidance should improve the designation process. Provisions of S. 2543 would limit the amount of federal funds that can be provided to heritage areas through the Park Service's budget.
In March 2004, GAO testified that from fiscal years 1997 through 2002 about half of heritage areas' funding came from the federal government. Specifically, for 22 of the 24 heritage areas where data were available, $156 million of the areas' $310 million in total funding came from the federal government. Of this, over $50 million came from Park Service funds dedicated for this purpose, $44 million from other Park Service programs, and about $61 million from 11 other federal sources. S. 2543 would restrict annual dedicated Park Service funding for heritage areas to $15 million. Individual areas may not receive more than $1 million in a given fiscal year or more than $10 million over 15 years. Furthermore, S. 2543 includes provisions that could enhance the Park Service's ability to hold heritage areas accountable for their use of federal funds. In this regard, S. 2543 (1) establishes a program that would provide the Park Service with the direction and funding needed to manage the agency's and the heritage areas' activities; (2) establishes a schedule for reviewing and approving heritage areas' management plans; (3) identifies criteria for use in reviewing areas' plans; (4) requires that the plans include information on, among other things, performance goals and the roles and functions of partners; and (5) requires areas to submit annual reports specifying, among other things, performance goals and accomplishments, expenses and income, and amounts and sources of funds. GAO has identified potential amendments to S. 2543 that would further enhance areas' accountability. S. 2543 includes provisions that address some of the concerns GAO identified in March with regard to heritage areas' potential restrictions on property owners' rights and land use. For example, S. 2543 allows property owners to refrain from participating in any planned project or activity within the heritage area.
Furthermore, the bill does not require any owner to permit public access to property and does not alter any existing land use regulation, approved land use plan, or other regulatory authority.
Before commercialization, air navigation services under government control faced increasing strain. Many were underfunded, as evidenced by air traffic controller wage freezes and insufficient funds to replace aging technologies. In some instances, the country as a whole faced widespread fiscal problems, and the commercialization of air navigation services was simply part of a larger movement to reform government enterprises such as rail, telecommunications, and electricity. With commercialization, the government typically retains full or partial ownership of the air navigation system and continues to regulate operational safety, but an independent ANSP is responsible for operating the system. The independent ANSP is subject to corporate financial and accounting rules and, in line with current management theory, is generally designed as a performance-based organization—that is, an organization that develops strategies, goals, and measures and gathers and reports data to demonstrate its performance. In the five countries whose air navigation services we reviewed, the ANSP continued to provide nationwide services after commercialization and, with certain exceptions, remained the sole provider of air navigation services. Each ANSP offers en route, approach control, and terminal air traffic services. However, in some cases, an ANSP may not be the sole provider of approach control and terminal services in a country. Although technical definitions may vary slightly among ANSPs, these services broadly correspond to the services provided in U.S. air traffic centers, approach control centers, and towers. All but Germany’s DFS also offer oceanic air navigation services. All five ANSPs are responsible for providing air traffic services to both civil and military aviation. In addition, the ANSPs may offer other air-navigation-related services, such as meteorological services, fire and rescue, training, and consulting. The ANSPs also charge for these services.
Discussions about the commercialization of air navigation services often use a number of terms interchangeably. Among these terms are restructuring, privatization, outsourcing, and corporatization, as well as commercialization. The Civil Air Navigation Services Organization (CANSO), which represents the interests of ANSPs worldwide, uses the term corporatization. Others, such as the International Civil Aviation Organization (ICAO), which establishes international civil aviation standards and recommends practices and procedures for ANSPs, use the term commercialization. Some note that an organization can be “commercialized” but not “corporatized” (i.e., established under prevailing company law). For this statement, we will use “commercialization.” The five commercialized ANSPs that we reviewed have a number of common characteristics: All operate as businesses rather than as government organizations, all focus on safety, and all are largely monopoly providers that are subject to some form of economic review or guidelines for setting prices. All five commercialized ANSPs operate as businesses, although they differ somewhat in their ownership structures. (See table 1.) Three of the five—Airservices Australia, Airways Corporation of New Zealand, and DFS—are currently state-owned corporations—that is, companies wholly owned by the government. The UK’s National Air Traffic Services (NATS) is a public-private partnership, that is, a cooperative venture between the public and private sectors that is designed to meet defined public needs with the risks and rewards divided between both parties. The government holds the largest share of NATS (49 percent), and the remaining shares are divided among a consortium of seven UK airlines (42 percent), NATS staff (5 percent), and a private airport company (4 percent).
By 2006, Germany plans to change the ownership of DFS, selling 74.9 percent of its equity to private investors and reorganizing it as a public-private partnership, along the lines followed in the UK. NAV CANADA is a nonshare capital, private corporation—that is, it has “members” instead of shareholders. These members represent the airline industry, the government, and general and business aviation, and they also include employees such as air traffic controllers and engineers. Each ANSP makes and carries out its own strategic, operating, and financial decisions. A supervisory board oversees policy making and operations and, when applicable, has fiduciary responsibilities to shareholders. The members of this board may represent key stakeholders, such as the airlines, employees, general aviation, and the national government. An executive officer implements the board’s policies and is, in turn, accountable to the board. Individual business units within the ANSP report to the executive officer and are directly responsible for various aspects of the ANSP’s day-to-day operations. As commercial organizations, the ANSPs follow corporate practices. Each ANSP has established performance measures and gathers and reports financial and other performance data. Each ANSP also publishes an annual report, which makes financial information available to the public to ensure transparency. Financial statements are typically subject to third-party audit to ensure that adequate accounting records have been maintained and that internal controls have prevented or detected fraud and error in the accounting policies and estimates. In addition, the UK and Germany report their data to EUROCONTROL’s Performance Review Commission, which collects data for benchmarking and publishes comparative studies of members’ performance. As part of commercialization, two of the five ANSPs “purchased” the ANSP assets from their governments.
NAV CANADA negotiated a selling price with the Canadian government, rather than going through a formal competitive bidding process, and purchased the air navigation system in 1996 for C$1.5 billion. In the UK, according to information from the National Audit Office, a collection of seven UK airlines known as “The Airline Group” provided £795 million of funds, partly from its own resources (£65 million) and partly from a loan taken out with a consortium led by four main banks. The group used these funds to acquire NATS and meet associated transaction costs, leaving £3.5 million of cash in the business. In total, the government received £758 million in cash proceeds from the transaction. All five commercialized ANSPs rely on user charges as their primary source of revenue and on private capital markets for additional funding. Before commercialization, governments funded air traffic control services through annual appropriations. All five ANSPs collect and manage their own revenues, charging fees for services. Their air navigation service fees are based on ICAO’s cost recovery principles, which call for recovering the ANSP’s operating costs. Despite some variation across ANSPs, the fees are generally as follows: The air navigation fees cover operating and capital costs associated with both en route and terminal services. These charges are based on a weight-distance formula. If applicable, ANSPs also levy charges for oceanic control. ANSPs may also charge for tower-related services. However, not all ANSPs are the sole providers of tower services. In the UK and Germany, for example, private firms may provide tower services. These tower charges are distinct from the landing fees typically charged by airports, which are usually weight-based. ANSPs may charge general aviation operators a flat fee for services or additional fees in particular circumstances rather than charging the weight-distance fees typically assessed to larger air carriers.
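The weight-distance charging approach described above can be sketched in a few lines of code. The testimony does not spell out the formula itself; the sketch below follows the EUROCONTROL-style route charge (unit rate × distance factor × a square-root weight factor), which is an assumption for illustration, as are the unit rate and aircraft figures used:

```python
import math

def route_charge(unit_rate: float, distance_km: float, mtow_tonnes: float) -> float:
    """Illustrative weight-distance charge (EUROCONTROL-style form, assumed here).

    One distance unit per 100 km flown; the square root dampens the
    effect of aircraft weight so heavy aircraft pay more, but less
    than proportionally.
    """
    distance_factor = distance_km / 100.0
    weight_factor = math.sqrt(mtow_tonnes / 50.0)
    return unit_rate * distance_factor * weight_factor

# Hypothetical example: a 150-tonne aircraft flying 800 km where the
# unit rate is 60 (in the ANSP's currency).
charge = route_charge(60.0, 800.0, 150.0)
```

Under this form, a 50-tonne aircraft (weight factor of exactly 1) flying the same 800 km at the same unit rate would pay 480, illustrating how the charge scales with both weight and distance.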
ANSPs may also charge additional fees, as applicable, for other services, such as meteorological, aeronautical information, training, and consulting services. The five ANSPs vary in their treatment of any operating profits or losses. If an ANSP generates revenues from charges in excess of its costs (i.e., operating profits), it may rebate them to the users, lower the charges for the next year, pay some form of dividend to shareholders, or retain them in reserve to protect against future losses. If costs exceed revenues, ANSPs use different strategies to meet those shortfalls. For example, NAV CANADA established a “rate stabilization fund,” which it used to store revenues when the aviation industry was healthy. The fund could then be used to cover costs and keep rates stabilized when the industry was ailing. The fund was capitalized by operating profits earned before September 11, 2001, but depleted following the economic downturn caused by the events of September 11 and the SARS outbreak of 2003. In 2003, the rate stabilization fund had reached a cumulative deficit of C$116 million. According to NAV CANADA’s 2004 annual report, the C$116 million deficit has been reduced to C$32 million. In the UK, NATS, which experienced a major decline in transatlantic traffic after September 11, first obtained a £60 million short-term loan from its lending banks and then refinanced, bringing in a new equity partner (BAA, plc.). To pay for capital projects, the five ANSPs can either use current operating revenues or borrow funds. Before commercialization, the ANSPs relied on annual appropriations for capital projects; now, all five can borrow funds through access to private capital and debt financing. For example, NAV CANADA can seek debt financing in private markets. NAV CANADA has a borrowing capacity of C$2.9 billion.
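The rate stabilization fund mechanism described above can be sketched as a simple running balance: profits are banked in good years and losses are absorbed in bad ones, with a negative balance representing the cumulative deficit the fund reached in 2003. The yearly amounts below are illustrative values chosen to reproduce the reported figures, not NAV CANADA’s actual annual results:

```python
class RateStabilizationFund:
    """Minimal sketch of a rate stabilization fund: bank operating
    profits when the industry is healthy, draw them down when it is
    ailing. A negative balance represents a cumulative deficit."""

    def __init__(self) -> None:
        self.balance_millions = 0.0  # C$ millions

    def record_year(self, operating_result_millions: float) -> float:
        """Add a profit (positive) or absorb a loss (negative);
        return the resulting balance."""
        self.balance_millions += operating_result_millions
        return self.balance_millions

fund = RateStabilizationFund()
fund.record_year(75.0)    # profits banked before September 11, 2001
fund.record_year(-191.0)  # illustrative downturn losses -> C$116M deficit
fund.record_year(84.0)    # illustrative recovery -> deficit falls to C$32M
```

The running balance moves from C$75 million banked, to the C$116 million cumulative deficit, to the C$32 million deficit cited in the 2004 annual report.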
In Germany, DFS mainly finances its capital expenditures by drawing on a capital market program, which issues short-, medium-, or long-term notes (i.e., debt issuance and commercial paper) each amounting to €500 million for a total of €1 billion to private investors in the market. DFS can also draw on an annual credit line of €161 million from its bank. Stakeholders, including employees, as well as the airlines, general aviation operators, airports, the government, the public, and others, may be involved in their ANSP through a variety of mechanisms. In Europe, for example, the Single European Sky initiative directs member states to establish a consultation mechanism for involving stakeholders. Germany and the UK have followed this direction by including stakeholder representatives on their ANSP’s board of directors. For example, in Germany, DFS employees, government ministries, and the private sector are represented on a supervisory board. In the UK, government appointees, the airlines, and BAA, plc. (the airport consortium) are represented on NATS’s board. In Australia, the aviation community (e.g., the airports, airlines, safety authorities, and others) has a role in the air traffic procurement process through the Australian Strategic Air Traffic Management Group (ASTRA). For all five commercialized ANSPs, safety remains the primary goal. In some countries, government policy requires that the ANSP consider safety in any and all decisions affecting operations and service. For example, in Germany, legislation requires DFS to observe ICAO’s standards and recommended safety practices, as well as adhere to the objectives and policies of international organizations where the German government is represented, such as EUROCONTROL. Similarly, in Canada, legislation requires NAV CANADA to maintain a fixed level of safety.
Under the Civil Air Navigation Services Commercialization Act, the Minister of Transport has the authority to direct NAV CANADA to maintain or increase levels of service in the interest of safety. Although NAV CANADA can alter operations in accordance with business principles, it must demonstrate that the changes meet the required level of safety through an aeronautical risk assessment. All five ANSPs are subject to external safety regulation. A separate authority conducts safety regulation and issues relevant certifications or licenses to air traffic controllers and technicians. In New Zealand, for example, the Civil Aviation Authority (CAA) is an independent regulatory authority that establishes civil aviation safety and security standards and monitors adherence to those standards. CAA carries out accident and incident investigations and uses information from these investigations to establish an industrywide safety picture and develop safety initiatives ranging from education campaigns to increased monitoring and regulatory action. All five selected ANSPs have established formal safety programs. For example, Airservices Australia employs a surveillance model, which includes incident investigation, trend analysis, system review, and internal audit. Similarly, DFS and NATS apply a systematic Safety Management System to all of their operational activities. The system forms the basis for risk assessment, safety assurance, safety control, and safety monitoring through standards that comply with national and international obligations. Each of the five commercialized ANSPs is its country’s sole provider of en route navigation services. There is no opportunity for more than one organization to provide competing air navigation services. Thus, operators cannot choose alternative providers by changing routes.
To forestall the abuse of a monopoly position and address concerns about the level of prices or charges, the five ANSPs are subject to the following: In the UK, the Civil Aviation Authority (CAA) exercises economic regulation over NATS. CAA’s Economic Regulation Group sets price caps for 5-year periods, basing them generally on the retail price index and the group’s own analyses of allowances for NATS’ estimated operating and capital costs. The Australian Competition and Consumer Commission (ACCC), an independent commonwealth authority, monitors primarily monopolistic public and private service industries, including Airservices Australia. ACCC oversees Airservices Australia’s process of setting user fees for air traffic services and decides to accept or reject price changes on the basis of public consultation and its own evaluation of Airservices’ pricing proposals. Airways Corporation of New Zealand operates under a memorandum of understanding with its airline users. Under this memorandum, Airways uses the principle of “Economic Value Added” (EVA) to self-regulate its pricing. EVA is net operating profit after taxes minus the cost of capital. EVA above a certain level is returned to users in the form of a rebate. The German Transport Ministry reviews and approves any changes in user fees, but does not independently evaluate the price-setting process or pricing changes. According to the Transport Ministry, Germany plans to create an independent economic regulatory authority by next year to comply with the requirements of the forthcoming Single European Sky initiative. The Canadian Transportation Agency (CTA) reviews the price-setting process against an established set of principles. However, CTA does not respond to user grievances about existing prices. NAV CANADA is legislatively required to place all revenues in excess of costs in its rate stabilization fund.
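The EVA self-regulation principle described above for Airways Corporation of New Zealand can be sketched briefly. The capital-charge form used here (invested capital times a cost-of-capital rate), the rebate threshold, and all figures are hypothetical illustrations, not Airways’ actual parameters:

```python
def eva(nopat_millions: float, invested_capital_millions: float, wacc: float) -> float:
    """Economic Value Added: net operating profit after taxes minus the
    (dollar) cost of the capital employed. The capital charge is modeled
    here as invested capital times a cost-of-capital rate (an assumption)."""
    return nopat_millions - invested_capital_millions * wacc

def user_rebate(eva_value: float, threshold: float) -> float:
    """EVA above an agreed threshold is returned to users as a rebate;
    the threshold is hypothetical."""
    return max(0.0, eva_value - threshold)

# Hypothetical year: NZ$25m NOPAT on NZ$200m of capital at a 10% cost of capital.
value_added = eva(25.0, 200.0, 0.10)   # NZ$5m of value added
rebate = user_rebate(value_added, 2.0) # NZ$3m returned to users as a rebate
```

The point of the mechanism is that pricing is disciplined without an external regulator: when charges generate value added beyond the agreed level, the excess flows back to the airline users rather than to the provider.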
Based on information from each of the ANSPs we reviewed, following commercialization, air navigation safety has not declined, and all five ANSPs have taken steps to control costs. In addition, the ANSPs have improved the efficiency of their operations through the implementation of new technologies and equipment. According to the ANSPs, some of these outcomes would not have been feasible in a government organization. At a minimum, safety has not eroded since commercialization, according to the available data from each of the five ANSPs. For example, data from Airways Corporation of New Zealand indicate a downward trend in incidents involving loss of separation for the years following commercialization. Similarly, according to NAV CANADA’s annual report for 2004, the rate of loss-of-separation incidents decreased from 1999/2000 through 2003/2004. Officials at Transport Canada, the safety regulator, confirm an overall decline in aviation incidents since commercialization. Additionally, stakeholders have anecdotally reported that they believe the air navigation system is as safe as it was when the government provided air navigation services. According to some, the separation of operating and regulatory functions has strengthened safety regulation and diminished any potential conflict of interest between promoting the financial interests of aviation operators and protecting safety. As improved technology and system upgrades have allowed individual controllers to handle increasing levels of air traffic, concerns have arisen about the potential for controllers’ fatigue to compromise safety. Data are not available to assess this potential, but some ANSPs have taken steps to limit and monitor controllers’ workload. For example, the UK’s CAA has regulated the hours of civil air traffic controllers, and its Safety Regulation Group must be notified of any breach by NATS or by controllers.
In New Zealand, as air traffic has increased, some airspace sectors have been subdivided so that controllers are responsible for a smaller piece of airspace. To lower their personnel costs, all five ANSPs have reduced their administrative staff or flattened their management organizations. For example, NAV CANADA closed most of its regional administrative offices and centralized corporate functions at its headquarters, reducing its staff, mostly administrative, by 1,100 people (17 percent of the workforce). Airways Corporation of New Zealand also reportedly reduced its personnel costs by eliminating some middle management and administrative positions. In general, the ANSPs have not reduced their air traffic controller staffs. To lower their facility operating costs, all five ANSPs have closed, relocated, or consolidated facilities. For example, Airways Corporation of New Zealand reported consolidating four radar centers into two over 8 years and is planning to consolidate these two into a single radar center by 2006. DFS has also integrated operations and consolidated facilities. Seventeen approach control units have been moved from airports into its four air traffic control centers. DFS relocated the Dusseldorf control center to the Langen control center in 2002, a year earlier than planned, and transferred and consolidated its headquarters from Offenbach to Langen. DFS reports that, because its supervisory board now makes major investment decisions, rather than a parliamentary committee, it has been able to make key strategic decisions that would have been politically difficult when DFS was under government control. In the UK, NATS reduced its net operating costs by almost £96 million during 2002 through 2004, in part through direct management actions. For example, it consolidated two operations into one at the new air navigation services center called the Swanwick Center.
According to NATS, it reduced its staff costs by £12 million and its costs for services and materials by about £11 million between 2002 and 2003, after placing this new center in service. Between 2003 and 2004, NATS reported reducing its operating costs for air traffic services by another £13 million through cost control measures. All five ANSPs have purchased new equipment and technologies that they say have improved productivity. For example, Airservices Australia reported increases in controllers’ productivity following the introduction of The Australian Advanced Air Traffic System (TAAATS). This system replaced conventional radar screens with more advanced computer screens that display data from a range of sources, including ground-based surveillance equipment and satellite-linked navigational equipment on aircraft, among others. TAAATS replaced handwritten paper flight progress strips with screen-based information that is updated automatically. DFS is also eliminating systems that depend on paper strips and anticipates productivity gains and cost savings as a result. In New Zealand, according to the union that represents air traffic controllers, individual controllers are now able to handle much more flight activity because of improved technology. Besides improving productivity, modernization, together with airspace redesign, has produced operational efficiencies, including fewer and shorter delays, according to the ANSPs. Commercialization has allowed the ANSPs to implement modernization projects more efficiently. Formerly, the uncertainty associated with annual appropriations from national governments made it difficult to plan over multiple years. With access to cash flow and borrowed funds, the ANSPs report that they have been able to plan and execute projects more efficiently and have seen improvements in delivering projects on time, within budget, and to specification.
For example, Airways Corporation of New Zealand deployed its new oceanic system, FANS1, in less than a year. The management of NAV CANADA estimates that it is producing new technology faster than the government once did and at half the cost. Some of the commercialized ANSPs maintain that they have achieved the benefits of modernization faster and at less cost by purchasing commercially available systems and upgrades or by modifying off-the-shelf technologies to meet their needs, rather than developing their own systems from the ground up. NATS purchased its oceanic system and automated tower/terminal control system from NAV CANADA. To achieve further purchasing efficiencies, some commercialized European ANSPs have developed an alliance to procure systems. For instance, Germany has developed a strategic alliance with Switzerland and the Netherlands for the joint procurement of a new radar system. Through their cost control initiatives and modernization efforts, some of the ANSPs have been able to lower their unit costs and, in turn, lower their charges to major commercial airlines, which pay the largest proportion of user fees and therefore are the primary users served by the ANSPs. Airservices Australia, for example, reported lower unit costs resulting from the increases in controllers’ productivity that followed the introduction of TAAATS. NAV CANADA estimates that it is saving the airlines approximately C$100 million annually in reduced aircraft operating costs. According to NAV CANADA, the airlines are now paying 20 percent less in user fees than they formerly paid in ticket taxes when the government provided air navigation services. In Germany, Lufthansa stated that, except in business years 2001 through 2003, it has paid less in user fees than it paid during the initial commercialization of Germany’s air navigation service.
According to Airways Corporation of New Zealand, it reduced en route charges by 22 percent in 1995 and another 13 percent since 1997, resulting in an overall reduction of more than 30 percent. However, for general aviation operators, commercialization has sometimes meant an increase in fees. Before commercialization, many paid only taxes on fuel. Some countries, such as Canada and New Zealand, have tried to make the fees affordable for small operators by charging a flat fee. NAV CANADA, for instance, charges general aviation operators a flat annual fee of C$72. According to the Aircraft Owners and Pilots Association—New Zealand, Airways Corporation of New Zealand charges general aviation operators a fee of NZ$100 for 50 landings. In addition, Airways eliminated the en route charge for light aircraft. Some governments have subsidized air navigation services at small, remote, general aviation, and regional airports, viewing such services as a public good. Australia, for instance, provides a subsidy for service to some remote areas under the Remote Air Subsidy Scheme. Similarly, to protect service to remote locations and ensure equity of service to smaller communities, Canada legislatively requires NAV CANADA to maintain service to such locations. For instance, service to the Northern region, which is designated as “remote,” is guaranteed under the legislation. In addition, NAV CANADA is required to price services to remote locations on the same basis as service to the rest of the country. Through our research, we made a number of initial observations about the commercialization of air navigation services in the five countries we selected. The following paragraphs summarize these observations. Following commercialization, two changes—shifting the source of funding from appropriations to user fees and allowing the ANSPs to borrow money on the open market—have generally enabled commercialized ANSPs to cover their operating and capital costs.
However, user fees and borrowing may not be sufficient to cover an ANSP’s costs during an industry downturn. As a result, a contingency fund or other mechanism may help to offset the effects of a downturn, although it may not do so completely if the effects are severe. When the economy began to stagnate in 2000 and air traffic began to decline, revenues from ANSP user fees began to fall. These revenue losses grew as transatlantic traffic declined after September 11, particularly affecting some ANSPs. In the UK, as a result of both these losses and the relatively high debt that it had assumed to commercialize, NATS’s solvency was threatened. Ultimately, NATS refinanced its debt with the concurrence of the Department for Transport and other shareholders. In Germany, DFS also experienced revenue losses, but to a lesser degree. DFS reported a loss of more than €33 million in 2001, when air traffic declined by 0.9 percent from the previous year. In 2002, it sustained a loss of more than €21 million, when air traffic levels fell 2.9 percent below 2001 levels. To address these deficits, DFS modified investments, canceled projects, and ultimately raised fees, thereby increasing financial pressures on the airlines. However, when air traffic increased again in 2003, DFS recorded an operating profit of more than €80 million and reduced 2005 en route fees by 19.5 percent and terminal charges by 28 percent. DFS has begun to consider the benefits of a reserve fund, but German legislation governing air navigation service charges must be changed before DFS will be allowed to develop such a reserve. NAV CANADA had banked up to C$75 million in its rate stabilization fund before September 11 and the concerns about SARS. However, following the severe industry downturn resulting from these two events, the fund was quickly exhausted.
Because the ANSP is typically the sole provider of en route and approach control services in a country, some mechanism may be necessary to keep prices in check. Since user fees constitute the ANSP’s primary source of revenue, economic monitoring and regulation by an independent third party can protect users and ensure a fair pricing process. Such an entity can ensure that all parties’ interests are taken into account and a variety of alternatives are considered. It can also provide assurance to users that price levels are appropriate, do not reflect overcharging, and are consistent with competitive practices. ICAO recognizes the need for an independent mechanism to provide economic regulation of air navigation services. According to ICAO, the objectives of economic regulation should include the following: Ensure nondiscrimination in the application of charges. Ensure that there is no overcharging or other anticompetitive practice. Ensure the transparency and availability of all financial data used to determine the basis for charges. Assess and encourage efficiency and efficacy in the operation of providers. Establish standards for reviewing the quality and level of services. Monitor and encourage investments to meet future demand. Ensure that user views are adequately taken into account. Australia and Canada have taken different approaches to reviewing their ANSPs’ user charges and price setting. In Australia, the Australian Competition and Consumer Commission (ACCC) oversees price changes. Airservices Australia must notify ACCC whenever it wants to raise fees. Following a formal notification and vetting process, ACCC decides to accept or reject the price change on the basis of its evaluation of Airservices’ pricing proposal; if it rejects the proposed price, it can set a lower price.
Recently, the ACCC rejected a proposal by Airservices for a temporary fee increase to address the revenue losses that followed September 11 and the SARS outbreak, as well as the collapse of Australia’s second largest airline. In rejecting the proposal, ACCC considered the fact that the industry took exception to these increases, raising concerns about the need for longer-term price certainty. ACCC ruled in favor of the industry and rejected the temporary price increases, deciding instead that a longer-term arrangement should be considered. ACCC directed Airservices to focus on 5-year pricing plans to encourage long-term planning, emphasizing that the financial robustness of the airlines should be taken into account when a price is set. Canada has no formal regulation of fee setting. According to the Office of the Auditor General, the Canadian Transportation Agency (CTA), the formal appeal agency, can intervene only in matters concerning the price-setting process, not price levels or price changes. CTA was not given authority over price-setting issues to ensure that NAV CANADA could maintain a good credit rating, thus making NAV CANADA appealing to financiers. (As of April 2005, NAV CANADA’s bonds were rated AA, nearly as high as the government’s AAA-rated bonds.) NAV CANADA’s board of directors, which includes air carrier representatives, is the main venue for the industry to express any grievances over pricing issues. However, according to Air Canada, its input on the board is limited and, because the public has comparable representation on the board, the public and the industry cancel out each other’s input. When NAV CANADA raised prices after its rate stabilization fund was exhausted during the economic downturn, air carriers argued that this move further disrupted their business cycle during a time of financial strain. CAA officials said they must ensure that society’s broader interests are protected. 
In particular, GAO believes addressing the concerns of air traffic controllers was essential because they play a vital role in the air navigation system. For several of the ANSPs we reviewed, controllers’ support of commercialization was crucial to moving the process forward. In New Zealand, controllers supported commercialization when faced with an aging system and inadequate public funds to acquire new equipment. Controllers in Canada supported the transition following a 5-year salary freeze and hiring freezes. However, Canadian controllers’ support for commercialization has diminished, mainly because of differences over collective bargaining issues such as wage increases, the right to strike, and controller fatigue. The Canadian controllers have acknowledged that they were instrumental in pushing for change, but they have also noted that the results of commercialization have fallen short of their expectations. ANSPs have also noted the importance of involving stakeholders in efforts to design, acquire, and deploy new technologies. According to Airservices Australia, its air traffic controllers have come to understand the commercial imperative to make a return on investment. Similarly, Airways Corporation of New Zealand notes that it is essential to involve the same controllers throughout the design process so that there is consistency in requirements and a thorough understanding of the project’s ongoing specifications. In Airways’ experience, it is essential for controllers, manufacturers, and the ANSP to reach agreement in order to establish realistic expectations for system design from the very beginning. Small or remote communities that rely primarily on aviation for transportation could potentially be threatened by location-specific pricing. Under this pricing scheme, an ANSP charges a fee for service that matches the cost of providing that service to a specific location. As a result, some communities may be subject to higher charges than others. 
By contrast, two ANSPs have used network pricing, a scheme that charges the same fee for air navigation services at every airport, regardless of size or location, even though the costs of providing the services to some airports may be greater than to others. Under network pricing, the service to heavily used airports subsidizes the service to others. Two of the ANSPs have adopted location-specific pricing for some air navigation services. (In the U.K., airport services are provided competitively, which may result in different prices.) Often, the minimum costs of service to small or remote communities are higher per plane than the costs of service to large communities because the cost of air navigation services must be spread among fewer operators, usually with smaller aircraft. If airlines decide that service to such communities is not commercially viable, they may ultimately discontinue service to these communities. Similarly, general aviation operators may be threatened if they are required to pay fees that cover the full costs of the air navigation services they receive. Continuing to serve small communities and operators may require special efforts to balance public service needs and business interests. In addition to the Remote Air Subsidy Scheme mentioned earlier, Australia also provided a subsidy that allowed prices to be capped at most general aviation and regional airports. This subsidy was designed to ease the transition to location-specific pricing for select airports and is scheduled to end in June 2005. Consequently, Airservices Australia reported that, in order to compensate, it will be increasing charges over the next 5 years at these locations and that these increases have been approved by the regulator. These increases have been moderated to balance the effect on aviation at airports frequently used by general aviation operators. 
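The contrast between the two pricing schemes can be illustrated with a small sketch. The airport names, annual costs, and flight counts below are hypothetical, invented solely for this example; the testimony does not provide such figures.

```python
# Hypothetical illustration of location-specific vs. network pricing.
# All numbers below are invented for this sketch, not drawn from the report.
airports = {
    "major_hub":    {"annual_cost": 10_000_000, "annual_flights": 100_000},
    "remote_strip": {"annual_cost": 500_000,    "annual_flights": 1_000},
}

# Location-specific pricing: each airport's users bear that airport's own
# costs, so thinly trafficked locations face much higher per-flight fees.
location_fees = {name: a["annual_cost"] / a["annual_flights"]
                 for name, a in airports.items()}

# Network pricing: one uniform fee spreads the total cost over all flights,
# so heavily used airports cross-subsidize service to remote ones.
total_cost = sum(a["annual_cost"] for a in airports.values())
total_flights = sum(a["annual_flights"] for a in airports.values())
network_fee = total_cost / total_flights

print(location_fees)   # the remote strip's per-flight fee is 5x the hub's
print(round(network_fee, 2))
```

Under these invented numbers, location-specific pricing charges the remote airport's users 500 per flight against the hub's 100, while network pricing levels both to about 104, shifting part of the remote location's cost onto the hub's users.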
As a result, concerns persist about the implications of further price increases and any future need to close or reduce services at these locations. Some fear that needed air services to remote bush locations will be lost, while others fear that secondary services such as flight school training will be affected. The impact on small operators and remote communities is difficult to assess. Costs may go up as a result of implementing user fees, but charges may not necessarily be prohibitive. Where service to small communities is legislatively mandated, ANSPs may ultimately be forced to take a financial loss if they are not able to fully recover their costs. Airservices Australia is seeking to control costs at some of those locations by deploying new lower-cost technologies to serve small communities. For example, Airservices Australia is planning to install Automatic Dependent Surveillance-Broadcast (ADS-B) ground stations, which will allow air traffic surveillance services over remote regions of Australia where radar is not a cost-effective solution. To protect taxpayers’ interests, the countries that commercialized their air navigation services needed to have an appropriate valuation of their facilities and equipment before selling these assets to the newly established ANSP. According to the Office of the Auditor General (OAG) in Canada, Canada did not properly value its ANSP assets and infrastructure. The C$1.5 billion value that the government negotiated with NAV CANADA in 1996 fell short of the C$2.3 billion to C$2.4 billion estimate developed in 1995 by a third party hired by the OAG. NAV CANADA reported, however, that both it and Transport Canada disagreed with the OAG’s estimate and its underlying assumptions. In a study of the NATS reorganization, the National Audit Office (NAO) found that the UK government had raised some £758 million from the sale of the ANSP to a consortium of seven UK-based airlines. 
However, these proceeds were realized by increasing the level of NATS’s bank debt. As a result of this debt, NATS was extremely vulnerable to the decline in air traffic after September 11. DFS is currently undergoing a valuation of its assets in preparation for selling 74.9 percent of its equity to private investors in a formal competitive bidding process. Some countries experienced difficulties in retaining a sufficient number of staff to carry out safety regulation. For example, in Canada, many of the safety staff moved to the newly established NAV CANADA after commercialization, leaving the government regulator, Transport Canada, with insufficient staff to carry out timely safety inspections during the first 6 months after commercialization. Germany faces a similar challenge as the government prepares to develop a safety regulatory authority, in accordance with the Single European Sky initiative, by the end of this year. According to the Transport Ministry, it may be difficult for the government to recruit safety staff at civil service salaries while competing with the private-sector salaries of safety inspectors. Obtaining baseline measures before commercializing a country’s air navigation services will allow the government and others to assess the new ANSP’s safety, cost, and efficiency. Some of the countries whose ANSPs we reviewed did not collect baseline data or measure performance as extensively as the commercialized ANSPs have since done. As businesses, commercialized ANSPs must assess the progress they are making toward their goals to access private funding, and therefore they need extensive performance data. In addition, international organizations, such as CANSO and ICAO, support commercialized ANSPs; ICAO, for example, emphasizes the importance of having transparent financial data available for economic oversight. Mr. Chairman, this concludes my prepared statement. 
I would be pleased to respond to any questions that you or the other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Individuals making key contributions to this testimony included Bess Eisenstadt, Samantha Goodman, Hiroshi Ishikawa, Jennifer Kim, Steve Martin, and Richard Scott. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In the past, governments worldwide owned, operated, and regulated air navigation services, viewing air traffic control as a governmental function. But as nations faced increasing financial strains, many governments decided to shift the responsibility to an independent air navigation service provider (ANSP) that operates along commercial lines. As of March 2005, 38 nations worldwide had commercialized their air navigation services, fundamentally shifting the operational and financial responsibility for providing these services from the national government to an independent commercial authority. GAO selected five ANSPs--in Australia, Canada, Germany, New Zealand, and the United Kingdom--to examine characteristics and experiences of commercialized air navigation services. These ANSPs used different ownership structures and varied in terms of their size, amount of air traffic handled, and complexity of their airspace. This testimony, which is based on ongoing work, addresses the following questions: (1) What are common characteristics of commercialized ANSPs? (2) What do available data show about how the safety, cost, and efficiency of air navigation services have changed since commercialization? (3) What are some initial observations that can be made about the commercialization of air navigation services? The five commercialized ANSPs that GAO selected for review have a number of common characteristics: Each operates as a business, making and carrying out its own strategic, operational, and financial decisions. Each generates and manages its own revenue to cover its costs, charging fees to users and borrowing funds from private markets instead of relying on annual governmental appropriations. Each has also put commercial financial and performance data systems in place. All five ANSPs have retained safety as their primary goal, and each is subject to some external safety regulation. 
Each ANSP is largely a monopoly provider of air navigation services and undergoes some form of economic review or follows some guidelines for setting prices. The ANSPs report that, since commercialization, each has maintained safety, controlled costs, and improved efficiency. Data from all five indicate that safety has not eroded. For example, data from New Zealand and Canada show fewer incidents involving loss of separation (the required distance between an aircraft and another object). All five ANSPs have taken steps, such as consolidating facilities, to control their operating costs. Finally, all five ANSPs have invested in new technologies that the ANSPs say have lowered their costs by increasing controllers' productivity and produced operating efficiencies, such as fewer or shorter delays. Such measures have generally resulted in lower fees for major carriers, but some smaller, formerly subsidized users now pay new or higher fees and are concerned about future costs and service. GAO's work to date suggests a number of observations about commercialized ANSPs: A contingency fund can help an ANSP cover its costs without greatly increasing user fees during an economic decline; economic regulation by an independent third party can ensure that an ANSP sets prices fairly; providing a forum for stakeholders gives attention to their needs; and special measures may be necessary to reconcile the inability of some users to pay the full costs of services at some small communities and the ANSP's need to recover its costs.
School districts differ inherently in the amount of local funding they can raise because they vary in the value of property or other wealth they are allowed to tax and in the willingness of residents to tax themselves to support education. States play the leading role in equalizing funding among school districts by providing aid that helps reduce these funding gaps. The federal government plays a more limited, indirect role by targeting federal funding to poor students and by encouraging states, through incentives in the Improving America’s Schools Act of 1994, to equalize funding among their school districts. The federal government’s main role in elementary and secondary education since the 1960s has been to target federal funding toward services for educationally disadvantaged children through categorical, program-specific grants. The largest single federal elementary and secondary education grant program, which began in 1965, is title I of the Elementary and Secondary Education Act. This program continues to serve educationally disadvantaged children through program-specific grants. The fiscal year 1997 appropriation for the disadvantaged was $7.3 billion. The federal role in funding elementary and secondary education has traditionally been limited, however, with state and local governments providing most funding. The federal government funds only about 7 percent of total national education funding, with state and local governments each providing nearly equal shares of the remainder. Individual states’ share of funding, however, varies considerably. State contributions in the 1991-92 school year ranged from 8 percent of total (state and local) funding in New Hampshire to 85 percent of total funding in New Mexico. The federal government does target funds to disadvantaged and poor students. 
As we reported in our study of targeting to poor students, although federal dollars make up only a small part of total national funding of elementary and secondary education, the effect of adding federal funds to state funds increased the targeting of funds to poor students by 77 percent in school year 1991-92. Moreover, 64 percent of poor children attended public schools in 21 states that had significant funding gaps between poor and wealthy districts, according to our study. To the extent that poor students live in poor districts, federal funds help to reduce the effect of tax base disparities among districts. Although the number of poor students in a district tends to increase as district wealth declines, the increase is not great. States’ ability to fund education can vary considerably, depending on states’ income levels, the number of children enrolled in public school, and the number of children requiring additional services, such as special programs for disabled or poor children. States with higher income levels can afford to finance higher levels of education funding per pupil. In the 1991-92 school year, states’ average income per weighted pupil ranged from $41,385 in Utah to $160,761 in New Jersey. The numbers of students with additional educational needs, such as those who are poor or have disabilities, also vary widely among states. For example, the rate of student poverty ranged from about 33 percent in Mississippi to about 6 percent in New Hampshire in 1990. In addition, localities’ ability to raise funding for education varies widely. Among the nation’s almost 16,000 school districts, most receive local funds for education mainly through property taxes and, to a lesser extent, through local sales and income taxes. This reliance on the local property tax to raise revenue, coupled with large differences in local tax base wealth, accounts for relatively large funding gaps between wealthy and poor districts. 
Localities with low tax base wealth usually have low funding per pupil even with high tax rates; localities with high property values have high funding per pupil even with low tax rates. Since the 1970s, these funding disparities have resulted in lawsuits in more than 40 states challenging the constitutionality of the state school finance system. More than half of the state systems have been challenged in court since 1990; in almost half of these cases, states have subsequently implemented changes designed to make the finance system more equitable. In contrast to the federal commitment to funding services for educationally disadvantaged children, the federal government has played only a small part in encouraging states to develop equitable finance systems. Federal policy encouraging states to equalize their finance systems appears in two programs of the Improving America’s Schools Act of 1994, which reauthorized the Elementary and Secondary Education Act of 1965 (ESEA). Both programs use performance indicators focusing only on the size of funding gaps and not on a state’s effort to equalize funding among districts. The first program, title VIII Impact Aid, allows states that the Secretary of Education certifies as meeting an equity in education funding standard to take steps to prevent impact aid payments to local school districts from undermining state equalization efforts. This provision is intended to prevent impact aid from hindering states’ equalization efforts and to prevent duplicative compensation of school districts affected by federal activity (once by the federal government through impact aid and a second time by the state’s equalization program). The effect of the provision is to encourage states to equalize education funding. States that do not pass the equalization test may not consider impact aid payments as local revenue in determining state funding. 
The second program, the title I Education Finance Incentive program, has not yet been funded but would award additional federal money to states depending on the degree of fiscal effort and funding equity achieved. Supporters of this program suggest that if a state’s spending for education increases and spending disparities among a state’s districts decrease, title I funds can be more effectively allocated to provide disadvantaged children the additional resources they need. As noted earlier, in our report on targeting to poor students, 64 percent of poor students attended public schools in 21 states with significant funding gaps in school year 1991-92. The objectives of this study were to (1) determine what factors contribute most to reducing the size of funding gaps between poor and wealthy school districts, (2) identify states that substantially changed their school finance systems between school years 1991-92 and 1995-96 and determine the effects of such changes on the funding gaps between wealthy and poor districts, and (3) determine the kinds of changes needed for states to more fully address these funding gaps. To determine the factors contributing most to reducing funding gaps nationwide, we conducted state-level comparative analyses of states’ equalization efforts (the state share of funding and how this funding was targeted to poor school districts), the local tax effort of poor and wealthy districts, and the size of the income-related funding gap between poor and wealthy districts in the 1991-92 school year, the most recent year for which a national data set of districts was available. Our analyses included all states except Hawaii. Analyses of state targeting of funds, local tax efforts, and income-related funding gaps accounted for statewide differences in student need and geographic costs. 
Our national analysis of the factors leading to reduced funding gaps among districts used district resident income per weighted pupil to measure district ability to fund education from local resources. We did not use property wealth per pupil, the measure states use most often to determine a district’s aid allocation, because we could not devise a property value per pupil measure from the national district-level databases available. To determine the effect of finance reforms on the funding gaps between poor and wealthy districts, we studied four states that reported changing their school finance systems between school years 1991-92 and 1995-96: Oregon, Kansas, Louisiana, and Rhode Island. We chose these states because of their considerably different approaches to finance reform. State officials provided information on changes in state laws made to implement these reforms. For each of the four states, we analyzed how changes to state equalization policies and constraints on local tax effort may have affected both the relative tax effort of poor and wealthy districts and the size of funding gaps from school years 1991-92 to 1995-96. To calculate district wealth, we largely relied on the definition of a district’s tax base provided by state education officials. For Oregon, Kansas, and Rhode Island, we calculated district tax base using property wealth. For Louisiana, we calculated the tax base using a combination of district property wealth and sales tax revenues. See appendix III for a detailed discussion on property wealth measures in these states. In addition, we met with several state and local officials to gain a better understanding of the policies that led to changes in equalization effort. A complete list of the officials we interviewed appears in appendix V. 
To determine the changes in state funding and tax base targeting policies that would be needed to close the income-related funding gaps between poor and wealthy districts, we used a mathematical model that relates state equalization effort and local tax policies to the size of the funding gaps. Our analysis estimates the amount by which a state’s share of total funding or targeting effort would have to increase to completely eliminate, rather than just reduce, funding gaps among districts. We conducted this analysis under alternative assumptions about how states could constrain local tax policy if they were willing to do so: assuming districts maintained their school year 1991-92 tax effort, or assuming all districts made the same effort. Appendix IV provides details of the mathematical model used for this analysis. This report used two data sources. For the national state-level analyses, we used a database we developed for a previous report that was compiled from the Department of Education’s Common Core of Data (CCD) for the 1991-92 school year. We obtained data for per capita income and population from the 1990 census because the CCD did not have this information. To analyze the change in the funding gap in the four states we studied, we obtained school years 1991-92 and 1995-96 district data on state and local funding, tax base wealth, and demographic information directly from each state’s department of education or state legislative officials. We conducted our work between November 1996 and May 1998 in accordance with generally accepted government auditing standards. Two key factors help reduce states’ funding gaps between poor and wealthy districts: (1) the extent to which a state’s poor districts make a greater tax effort than its wealthy districts and (2) a state’s effort to reduce funding gaps through its equalization policies. 
Poor districts may make a greater tax effort than wealthy districts in part because residents choose to do so or because state and local policies directly or indirectly lead to an extraordinary tax effort in poor districts. Many states try to lessen the disparities between poor and wealthy districts’ tax bases through their equalization policies. Such policies include reducing the reliance on local funding by increasing the overall state share of total funding or targeting state funds to favor poor districts. Of the two key factors affecting funding gaps, poor districts’ extra tax effort was the more important factor in explaining the size of these gaps in school year 1991-92. The most equalized school finance system would enable districts’ per pupil funding to be 100 percent of the state’s average per pupil funding for an equal tax effort in all districts. We determined the equalization effort of 49 states in school year 1991-92. The average state equalization effort was 62 percent, according to our analysis, suggesting that states could have more impact on the funding gap if they were to strengthen their equalization policies. Poor districts in most states were making a greater tax effort than wealthy districts. Funding gaps exist mainly because wealthy districts can raise more local revenue than poor districts. Poor districts could reduce or even eliminate the funding gaps, however, if they made an extraordinarily high tax effort compared with wealthy districts’ efforts. Differences in poor and wealthy districts’ tax efforts reflect the varying tax choices of district residents and the tax regulations governing those choices. In school year 1991-92, the tax effort of poor districts in most states exceeded that of wealthy districts and contributed to reducing the funding gap. Differences in poor and wealthy districts’ tax efforts result from district residents making tax choices that may be affected by their local and state tax policies. 
In many states, local taxing authorities, such as school district boards, set local tax policy. For example, such authorities may decide autonomously or with voter approval when and how much to raise local property taxes for education. When these authorities seek voter approval, district residents may choose taxes for education by voting for or against property tax rate increases tied to general levies or specific levies such as initiatives to improve school technology. States also make policies affecting local taxes. Since the 1970s, states have increased their direct control of districts’ tax efforts. For example, some states mandate a certain tax rate or impose a minimum or maximum tax rate on districts to ensure that districts contribute a certain share toward their students’ education. States concerned about disparities in the funding levels between poor and wealthy districts may influence these tax efforts by financially rewarding less wealthy districts that increase their tax effort or, more rarely, by recapturing some local funding from wealthy districts whose local tax effort raises too much revenue. Our 49-state analysis shows that poor districts in most states made a greater tax effort than wealthy districts, which contributed to reducing funding gaps. In the 1991-92 school year, the poorest districts in 35 states made a greater tax effort than the wealthiest districts. States whose poorest districts made a greater tax effort than their wealthiest districts had smaller funding gaps (see table 2.1). Alaska, California, and Iowa are examples of such states. States whose poor districts made less tax effort than their wealthier districts, for example, Georgia and Maryland, had much greater funding gaps. To offset the disparities in district funding levels, many states use equalization policies aimed at reducing funding gaps. Equalization policies have two parts: the state share of total funding and the state effort to target poor districts. 
Of these two, state share has the larger impact on a state’s equalization effort. In effect, equalization policies determine the extent to which a state enables its districts to provide the state average funding level when all districts make an equal tax effort. Specifically, a state’s equalization effort measures the portion of the state’s average funding per pupil that state aid would enable all districts to finance with an equal tax effort. States can apply an infinite combination of state shares and targeting policies to achieve a certain level of equalization effort. Although the average state equalization effort was only 62 percent of the maximum possible effort in school year 1991-92, state equalization efforts overall still helped reduce the funding gaps. The state share of total funding and the state targeting effort determine a state’s equalization effort. Increasing the state share of total funding reduces the relative amount of the state’s total education funding that depends on district wealth. Holding state share steady but targeting more state funds to poor districts than to wealthy districts offsets the relative disparities in districts’ ability to raise revenues. State targeting efforts imply that some wealthy districts may receive no state aid or may remit a certain share of their locally raised revenues to the state, a transaction termed the “recapture” of funds. As seen in table 2.2, state share has an impact on equalization efforts. According to our analysis, a relatively high state share always produced an above average equalization effort. Even when a state’s targeting effort was low, high state shares still resulted in an above average equalization effort. In contrast, states with low state funding shares generally had targeting policies that substantially favored poor districts. For example, only two of the eight states with low state shares also had low targeting efforts (Oregon and South Dakota). 
None of the eight states had a targeting effort large enough to produce an above average state equalization effort. Although state share has more impact on closing funding gaps than targeting effort, states have some flexibility in applying these two means to achieve a certain equalization effort. According to our analysis, states could have achieved the same equalization effort in school year 1991-92 with different combinations of state share and targeting. Table 2.3 shows four states that achieved an equalization effort of 76 percent and four others that achieved an effort of 54 percent, each with different combinations of state funding shares and targeting efforts. For example, Colorado and Alaska both achieved an equalization effort of 76 percent: Colorado with a high targeting effort and a relatively low state share of total funding, and Alaska with a high state share and no targeting effort. In general, the greater a state’s share of total funding, the less a state has to target to poor districts to reach a certain equalization effort. Likewise, the greater a state’s targeting effort, the less its share of total education funding needs to be. (See table IV.5 in app. IV for the range of combinations.) Although states could achieve a 100-percent equalization effort with sufficient state funding share and targeting efforts, only Nevada made the maximum effort given the total funding available in the state in school year 1991-92. The average state equalization effort in school year 1991-92 would enable districts to finance 62 percent of the average funding level, assuming all districts were making an equal tax effort. Other states’ equalization efforts in school year 1991-92 ranged from 87 percent (Arkansas and Kentucky) to about 13 percent (New Hampshire). States making a greater effort in school year 1991-92 had smaller funding gaps. 
Table 2.4 shows the size of state funding gaps relative to states’ equalization efforts for 21 states that had about the same relative local tax effort. In general, the larger the equalization effort in these states in school year 1991-92, the smaller the funding gaps between poor and wealthy districts. For example, West Virginia had a large equalization effort, resulting in a small funding gap between its wealthy and poor districts. More specifically, the poorest districts in West Virginia had $4,859 per weighted pupil; the wealthiest had $5,044, a difference of only 4 percent. In contrast, Illinois had a small equalization effort, which was associated with a large funding gap. The poorest districts in Illinois had $4,330 per weighted pupil; the wealthiest had $7,249, a difference of 67 percent. Although state equalization effort has an important effect on reducing the funding gap between poor and wealthy districts, districts’ relative tax effort was more important in closing the funding gaps in 1991-92. Nationwide, equalization effort and relative local tax effort accounted for about 63 percent of the variation in the funding gap. In 35 states, poor districts made a greater tax effort than wealthy districts. Nine states in school year 1991-92 with funding gap scores that were not statistically different from zero exemplify the importance of this tax effort (see table 2.5). In these states, the tax effort of the poorest districts was greater than that of the wealthiest districts. Poor districts’ extra effort ranged from 106 percent as much as wealthy districts’ in Delaware to over four times as much in Wyoming. Poor districts’ extra effort was particularly important in the three states—Iowa, Kansas, and Wyoming—that closed their funding gaps with an equalization effort that was less than the national average (62 percent). 
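The gap figures in this comparison are straightforward ratios; a short calculation reproduces the West Virginia and Illinois examples from the per-weighted-pupil amounts above:

```python
def funding_gap(poorest, wealthiest):
    """Percentage by which the wealthiest districts' funding per weighted
    pupil exceeds the poorest districts'."""
    return (wealthiest - poorest) / poorest


# West Virginia: large equalization effort, small gap.
print(f"{funding_gap(4859, 5044):.0%}")  # 4%
# Illinois: small equalization effort, large gap.
print(f"{funding_gap(4330, 7249):.0%}")  # 67%
```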
On the basis of 1991-92 data, poor districts’ extra tax efforts had more impact on closing funding gaps between poor and wealthy districts than state equalization efforts. The average state equalization effort in school year 1991-92 was 62 percent, however, suggesting that states could have more impact on the funding gap if they were to strengthen their equalization policies. Among the nine states with no significant funding gap, the poor districts’ greater tax effort substantially contributed to closing this gap in at least three of these states. This suggests that in developing strategies to further reduce funding gaps, policymakers may want to consider policies regulating local tax effort in combination with equalization policies. Steps taken to equalize funding among poor and wealthy school districts in the four states we reviewed—Oregon, Kansas, Rhode Island, and Louisiana—produced mixed results. According to state officials, each state made changes designed to increase the amount of state aid to poor districts to close the funding gap between poor and wealthy districts. However, only two of the states—Oregon and Kansas—narrowed the funding gap mainly because they significantly increased their equalization effort and constrained local tax effort. Louisiana’s funding gap widened, and Rhode Island’s funding gap stayed almost the same because increased equalization efforts were comparatively small and more than offset by changes in the respective school districts’ tax efforts. These states’ experiences, however, illustrate how both state equalization efforts and policies affecting the tax efforts of poor and wealthy districts can play an important role in reducing the funding gap. We chose Oregon, Kansas, Rhode Island, and Louisiana to study because they used a wide array of strategies for changing their finance systems, these changes took place between school years 1991-92 and 1995-96, and state officials thought the changes would improve student equity. 
Beyond improving student equity, the forces driving reform in each state varied and included citizens’ demands for property tax relief, state budgetary crises, and court pressure. Throughout the 1980s, two recurring problems affected Oregon’s school finances: (1) a crisis in some districts’ ability to fund schools because voters repeatedly rejected operations levies and (2) frequent attempts by antitax activists to reduce property taxes, the main source of local funding for the state’s public schools. To address the school funding crisis, the Oregon state legislature suspended the state funding formula in 1989, and the state began allocating future funding (through school year 1991-92) at the 1989 level plus an increasing percentage factor, according to a state official. Oregon took these actions after a blue ribbon panel commissioned by the legislature recommended that the state scrap the existing finance system and create one less reliant on local property taxes. In 1990, however, before the legislature could develop a new funding formula, Oregon voters adopted a constitutional amendment placing a ceiling on the property tax rate that could be assessed for school operations and requiring the state to replace any lost local education revenues with state funds. This forced the legislature to develop a school finance formula driven mainly by state funds. The new tax rates were phased in between school years 1991-92 and 1995-96; steps to implement the new funding formula began in school year 1992-93 and, according to a state official, are scheduled to be completed by 2001. An October 1991 pretrial court ruling was the main reason for changes to Kansas’ school finance system. The legal challenge from four consolidated lawsuits filed by school districts and citizens claimed, among other things, that large disparities in both local property tax rates and in spending per pupil violated the state constitution. 
The district court judge met with state government and education leaders and presented his interpretation of the state’s responsibility for educating all of its children. He emphasized that the state has a duty to develop a rational finance system that recognizes disparities in spending based on legitimate student and district characteristics. He suggested that the pending trial could be avoided if the finance system was changed in the 1992 legislative session. The legislature accepted the judge’s challenge and developed a new finance system that was implemented in school year 1992-93. In 1990, a crisis in Rhode Island’s savings and loan institutions and credit unions forced the state to use state funds to bail out these entities, state education officials said. According to these officials, diverting state funds to address this crisis forced the legislature to cut the state budget, including funding to elementary and secondary education. These cuts dramatically reduced the state share of education funding from 52 percent in 1991 to about 38 percent in 1992. The cuts in state funding hit hardest in poorer districts that could not offset the lost state funding with increased local revenue, resulting in a reduction in district revenue, according to the officials. In contrast, they said, wealthier districts could protect their spending levels because they could fully offset losses in state funding by increasing local revenue. Recognizing the growing inequities, the legislature began implementing changes to the finance system in 1992. A crisis in the oil industry in the 1980s, which dramatically reduced state tax revenue, forced the Louisiana state legislature to reduce the state share of funding for its public schools, state education officials said. 
The impact of the cuts in state funding highlighted the inequities in the state funding formula, which allocated state funds on the basis of teacher and staff costs and made little or no adjustment for differences in districts’ abilities to raise revenue or for student need, they said. In response, Louisiana voters passed a constitutional amendment mandating the equitable allocation of education funds and transferring control of the state funding formula from the state legislature to the state Board of Elementary and Secondary Education (BESE). In 1988, BESE began revising the funding formula to improve student equity. The legislature approved the new funding formula in 1992, and the state began implementing it in the 1992-93 school year; it is scheduled to be completed by the 1999-2000 school year. In revising their school finance systems, all four states increased their equalization effort and made changes affecting the local tax effort of their school districts. As table 3.1 shows, Oregon and Kansas each substantially increased their equalization effort; Rhode Island and Louisiana more modestly increased their effort. The large increases in Oregon’s and Kansas’ equalization efforts can be explained by the large increases in their state shares of total funding and, in Kansas, by an increase in its targeting of state funds to poorer districts. The increase in Rhode Island’s equalization effort reflects the relatively small increase in the state share of education funding. The increase in Louisiana was due to the state’s effort to target more state funds to poor school districts. Regarding changes affecting the local tax effort, Oregon and Kansas constrained districts’ tax efforts. In addition, Rhode Island and Louisiana made changes that affected incentives for increasing districts’ tax efforts. Rhode Island suspended the funding program that had encouraged districts to increase their education spending.
In contrast, Louisiana introduced a state aid matching program for districts willing to exceed a minimum tax rate. Table 3.1 shows each state’s relative change in state school finance measures as well as actions affecting local tax efforts. The actual values for each state’s equalization effort, state targeting effort, state share, relative local tax effort, and funding gaps for school years 1991-92 and 1995-96 appear in appendix III. State legislatures often change their school finance systems to improve student equity. In most cases, some wealthier districts must give up some of their advantage to improve the funding levels of poorer districts. Even so, a state may not reach an acceptable level of student equity if changes in local tax choices offset the state’s equalization efforts. Two states, Oregon and Kansas, narrowed the funding gap mainly by increasing the state share of education funding and limiting districts’ ability to raise local revenue. In Rhode Island and Louisiana, changes in school district tax efforts undermined the effects of moderate state equalization efforts. Figure 3.1 summarizes the changes in the size of the funding gaps between wealthy and poor districts in the four states we reviewed. In 1990, Oregon’s voters approved an initiative that set a statewide maximum levy rate. This rate significantly reduced local tax effort and forced the legislature, which was creating a new funding formula, to adopt a formula funded largely by state revenue rather than local revenue. Before the new formula was implemented, education funding for Oregon’s school districts was based primarily on districts’ property wealth and voters’ willingness to approve funding levies, resulting in a large variation in spending levels by district. To make up for the loss of local revenue, the state sharply increased its share of funding from 33 to 59 percent between school years 1991-92 and 1995-96.
The new state funding formula included a new base funding level per student and allowed for adjustments to the base to account for (1) student needs, such as those for special education, poverty, and English as a second language, and (2) district needs, such as transportation costs and teacher costs based on teacher experience. The state share of funding for an individual district equaled the base funding level adjusted for student and district needs less the revenue the district could raise locally at the mandatory tax rate. Initially, under the new state finance system, total revenue in the wealthiest districts would have decreased significantly; revenue in the poorest districts would have greatly increased. Concerned about the impact of these funding changes, the legislature decided to phase in the new formula, limiting the effect of the change on wealthy districts, while slowly increasing funding to the poorest districts. Despite the phased-in approach, the changes in Oregon’s finance system narrowed the funding gap between the wealthiest and poorest districts from 0.23 to 0.15 as shown in figure 3.1. For the poorest districts, total funding increased by $805 per weighted pupil; for the wealthiest districts, it increased by $586 (see table 3.2). Oregon succeeded in reducing its funding gap because it increased its equalization effort by increasing its state share of education funding more than enough to offset the modest decline in its effort to target more funds to poorer districts. The voter-driven initiative had the effect of reducing the tax efforts of both poor and wealthy districts proportionately. Thus, almost no change occurred in the relative tax effort of both poor and wealthy districts, ensuring that the state’s increased equalization effort would reduce the funding gap. Under the new finance system, although all Oregon districts received more state aid, a smaller share of the increased state aid was targeted to poor districts. 
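The formula just described is a foundation grant: state aid fills the gap between a district’s adjusted base funding and what the district can raise locally at the mandatory rate. The following is a minimal sketch; the dollar figures, add-on amounts, and mill rate are hypothetical, not Oregon’s actual parameters:

```python
def oregon_state_aid(base_per_pupil, student_addon, district_addon,
                     assessed_value_per_pupil, mandatory_mills):
    """Sketch of a foundation formula: adjusted base less local capacity.

    All figures are hypothetical. Adjustments are modeled as simple
    per-pupil add-ons for student need (e.g., special education, ESL)
    and district need (e.g., transportation, teacher experience), and
    the mandatory rate is expressed in mills (dollars per $1,000 of
    assessed value).
    """
    adjusted_base = base_per_pupil + student_addon + district_addon
    local_capacity = assessed_value_per_pupil * mandatory_mills / 1000
    # Aid cannot go negative: a district whose local capacity exceeds
    # its adjusted base simply receives no foundation aid.
    return max(0.0, adjusted_base - local_capacity)


# A property-poor district draws most of its funding from the state ...
print(oregon_state_aid(4500, 600, 300, 150_000, 5))  # 4650.0
# ... a property-rich district, much less.
print(oregon_state_aid(4500, 200, 300, 900_000, 5))  # 500.0
```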
The state decided to constrain the implementation of its new funding formula by gradually increasing state aid to its poorest districts to avoid reductions in total funding in wealthier districts. Between school years 1991-92 and 1995-96, about 66 percent of the $1,717 increase in state funding per weighted pupil was needed to replace the wealthiest districts’ loss of $1,132 per weighted pupil in local funding. In the poorest districts, most of the increased state aid was new funding rather than a replacement for lost local funding. Only 46 percent of the $1,494 increase in state funding per weighted pupil was needed to cover the $688 loss in local funding. In 1992, the Kansas legislature, hoping to avoid a trial of the constitutionality of the state finance system, made changes that increased the state’s role in determining school districts’ funding levels. To address both student and taxpayer equity concerns, the state increased its share of funding from 42 to 59 percent (between school years 1991-92 and 1995-96), targeted more funding to poor districts, and imposed a uniform tax rate on all districts, giving most districts property tax relief, while raising tax rates for some of the wealthiest districts. In the process, the state dramatically revised its school finance system. 
Beginning in the 1992-93 school year, the state (1) set a base budget for each district based on student and district needs such as vocational and bilingual education and enrollment size; (2) funded the difference between a district’s base budget amount and what the district could raise locally under the uniform statewide property tax rate; (3) required districts that raised revenues above the base budget, at the uniform tax rate, to remit the excess revenue to the state for distribution as state aid to less wealthy districts; and (4) provided districts the option of raising additional funds—up to 25 percent above the base budget—with an increase in the property tax rate subject to voter approval. The state provided supplemental funding for some districts that raised the additional revenue—the poorer the district the higher the state funding. This funding was intended to give high-spending districts the opportunity to maintain their spending levels. Districts are not eligible for supplemental state funding if their assessed valuation per pupil is at or above the 75th percentile of assessed valuations for all districts in the state. Overall, the changes in Kansas’ finance system narrowed the funding gap between the wealthiest and poorest districts from 0.10 to 0.08, as shown in figure 3.1. For the poorest districts, total funding increased $1,124 per pupil; for the wealthiest districts, it increased $1,111 (see table 3.3). Kansas succeeded in reducing its funding gap because it increased its equalization effort by significantly increasing both the state share of funding and its effort to target more funding to poor districts. The state also imposed a uniform tax rate on all districts that had the effect of decreasing the poorest districts’ tax effort and increasing that of the wealthiest. 
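The four mechanisms above (a base budget, a uniform levy, recapture of excess revenue, and a local option capped at 25 percent of the base) can be sketched as follows; all dollar figures and the mill rate are hypothetical:

```python
def kansas_allocation(base_budget, assessed_value_per_pupil, uniform_mills,
                      local_option_pct=0.0):
    """Sketch of the 1992 Kansas mechanics. Figures are hypothetical.

    Returns (state_aid, recapture, local_option_budget): state aid fills
    the gap below the base budget; revenue raised above it at the
    uniform rate is remitted to the state; a voter-approved local
    option may add up to 25 percent of the base budget.
    """
    local_option_pct = min(local_option_pct, 0.25)  # statutory cap
    local_revenue = assessed_value_per_pupil * uniform_mills / 1000
    state_aid = max(0.0, base_budget - local_revenue)
    recapture = max(0.0, local_revenue - base_budget)
    return state_aid, recapture, base_budget * local_option_pct


# A poor district is mostly state funded ...
print(kansas_allocation(3800, 40_000, 35))         # (2400.0, 0.0, 0.0)
# ... a wealthy district remits its excess and may adopt the local option.
print(kansas_allocation(3800, 150_000, 35, 0.25))  # (0.0, 1450.0, 950.0)
```

The recapture step mirrors the requirement that districts raising more than their base budget at the uniform rate remit the excess to the state for redistribution to less wealthy districts.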
Although this change in tax effort would normally widen the funding gap between poor and wealthy districts, this was prevented in part because the wealthy districts were required to remit their excess local revenue for distribution as state aid to less wealthy districts. In addition, even though the state gave districts the choice of raising their property tax rates enough to increase their spending levels up to 25 percent above the base budget, limiting this additional spending allowed the state to maintain control over district spending levels. More than half of the 304 districts chose to increase their spending levels above the base budget in school year 1995-96. All these changes led to nearly every district receiving additional state funding. As table 3.3 shows, between school years 1991-92 and 1995-96, the poorest districts received proportionately more state aid than the wealthiest districts. The poorest received an additional $1,312 per weighted pupil in state aid under the new system, an increase of about 53 percent. In contrast, the wealthiest districts received an additional $597 per weighted pupil in state aid, an increase of only about 32 percent. The wealthiest group included 10 districts that received no state aid in school year 1995-96 and instead had to remit about $34 million in excess local revenue to the state. Had these 10 districts kept the excess revenue, the funding gap between wealthy and poor districts would have widened, not narrowed, according to our analysis. The state’s imposing the uniform property tax rate, in addition to improving student equity, more equally distributed tax burdens by district. As table 3.3 shows, the tax effort of the poorest districts dropped by $20.41 per pupil (a 21-percent decrease); the tax effort of the wealthiest districts increased by $4.45 per pupil (an 8-percent increase). Nevertheless, the poorest districts still had a higher tax effort than the wealthiest. 
This indicates that even with more state aid and a reduced tax effort, Kansas’ poorest districts were still making a greater tax effort than the wealthiest districts. Before 1995, Rhode Island’s operations aid program allocated a given percentage of a district’s total expenditures to each school district. To help equalize total funding, poorer districts received a higher state funding percentage than wealthier districts, although all districts were guaranteed some percentage of their total expenditures until 1994. This provided a greater incentive for poor districts to increase their funding compared with wealthier districts. With the sharp drop in state education funding in school year 1991-92, however, the state reduced the amount of district expenditures it financed. This decline in state aid forced the districts to try to replace the lost funds with local revenue raised from property taxes. Although the wealthier districts could generally replace the lost state aid, some of the poorer districts met taxpayer resistance, according to state officials. Recognizing that the funding gap between the poor and wealthy districts was growing, the legislature took steps to address the system’s inequities. The state (1) stopped using its equalization formula to distribute funding in school year 1995-96, and, as a result, poor and wealthy districts alike no longer had an incentive to increase their education expenditures and in turn their local tax effort; (2) implemented several new categorical funding programs targeted specifically to poor communities; and (3) slightly increased the state share of funding from 40 percent in school year 1991-92 to 42 percent in school year 1995-96. Despite state efforts to address the inequities, the changes to Rhode Island’s finance system had almost no effect on the funding gap between wealthy and poor districts, which changed from 0.19 to 0.20, as shown in figure 3.1. 
For the poorest districts, total funding increased by $911 per weighted pupil; funding to the wealthiest districts increased by $1,040 (see table 3.4). Despite increasing its equalization effort by raising the state share between school years 1991-92 and 1995-96, Rhode Island could not narrow the funding gap, in part because the poorest districts responded to changes in the state aid program with a large decrease in local tax effort and in part because districts’ tax bases grew at different rates. Because of the restructuring of its school finance system, state aid increased in the poorest districts by an average $1,152 per weighted pupil; state aid to the wealthiest districts decreased by $57. Although most of Rhode Island’s school districts, poor and wealthy alike, reduced their local tax effort, the poorest districts’ decrease was much larger than the wealthiest districts’. The large increase of $1,152 per pupil (41 percent) in state aid may have prompted the poorest districts to reduce their local tax effort by $2.82 per pupil (a 17-percent decrease), resulting in a decrease in local revenue of $240 per weighted pupil. Although the wealthiest districts slightly decreased their tax effort by $0.19 per pupil (a 1-percent decrease), they also had large increases in property values (24 percent compared with 7.5 percent for the poorest). The resulting increase of $1,097 per weighted pupil in local funding was more than enough to offset the decline in state aid. Therefore, the funding gap changed little. The ability of Rhode Island’s districts to change their local tax effort in response to changes in state aid undermined state efforts to close the funding gap. Before the 1992-93 school year, Louisiana allocated state funding to its school districts mainly on the basis of teacher and staff costs associated with district enrollment size. The state made little or no adjustment for differences in a district’s ability to raise local revenue or for student need-related cost differences.
As a result, some affluent districts received more state funding than poorer districts because they had higher teacher costs, according to a state official. The main source of local revenue for districts was the sales tax. Property tax revenue was limited because of a homestead exemption and an industrial exemption, which limited tax revenue from certain companies. Affluent districts often generated more local revenues with lower tax rates than poorer districts because they had higher levels of sales or property tax bases, according to officials. When the oil crisis forced reduced state funding for education, it highlighted the unfairness of the state’s finance system. This awareness led to a voter-approved constitutional amendment that required BESE to recommend a more equitable education funding formula. Thus, in 1992, BESE proposed and the state legislature approved a new funding formula to target more funding to less wealthy districts. The state also changed how it measured district wealth by using an adaptation of the representative tax system. This system calculates each district’s ability to raise revenue for education by estimating the combined total sales and property tax revenue a district could raise at the state average sales and property tax rates. The new funding formula is a two-tiered formula. The first tier provides each district a basic funding level with additional funding provided for the increased costs of educating students such as those who are at risk or need remedial or special education. The state share of a district’s basic funding level is the difference between the basic level and the amount the district could raise if it were to apply the recommended tax rate. The amount of local revenue the state calculates as a district’s ability to pay is only for determining the state allocation, however; districts are not required to raise the local revenue. 
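A minimal sketch of the first tier, assuming the representative-tax-system capacity measure described above; all tax bases, rates, and dollar figures here are hypothetical:

```python
def local_capacity(sales_base, property_base, avg_sales_pct, avg_property_mills):
    """Representative tax system: the revenue a district could raise at the
    state AVERAGE sales and property tax rates, not its actual rates.

    The sales rate is a percentage; the property rate is in mills
    (dollars per $1,000 of assessed value). All figures are hypothetical.
    """
    return (sales_base * avg_sales_pct / 100
            + property_base * avg_property_mills / 1000)


def tier1_state_share(basic_level, need_addons, capacity):
    """First-tier state share: the basic funding level plus need-based
    add-ons (e.g., at-risk, remedial, special education) less the
    district's calculated capacity. The district is not actually
    required to raise that amount locally."""
    return max(0.0, basic_level + need_addons - capacity)


# Hypothetical district with per-pupil sales and property tax bases.
cap = local_capacity(sales_base=60_000, property_base=200_000,
                     avg_sales_pct=2, avg_property_mills=4)
print(cap)                                # 2000.0
print(tier1_state_share(3500, 400, cap))  # 1900.0
```

Because capacity is computed at the state average rates, a district’s actual tax choices do not change its first-tier allocation, which is the point of the representative tax system.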
To raise the overall funding level for education, the state established a second tier to provide an incentive for districts to raise local revenues beyond the amount required by the funding formula’s first tier with a potential state match of up to 40 percent. The amount of additional funding a district receives is based on its wealth—poorer districts receive more than wealthier districts. Despite changes to Louisiana’s finance system, the funding gap between the wealthiest and poorest districts slightly increased from 0.24 to 0.26, as shown in figure 3.1. More specifically, Louisiana’s poorest districts’ total funding increased by $503 per weighted pupil; funding to the wealthiest districts increased by $724 per weighted pupil as shown in table 3.5. Between school years 1991-92 and 1995-96, Louisiana’s funding gap slightly increased despite the state’s increased equalization effort because wealthy districts increased their tax effort and poor districts decreased their tax effort, leading to changes in local revenue that undermined the effects of the state’s modest equalization effort. Under the new system, state aid to the poorest districts increased by an average of $405 per weighted pupil; state aid to the wealthiest districts declined by $92. This increase in targeting effort would normally be expected to narrow funding gaps among districts, but in Louisiana it did not. With the implementation of the new funding formula, the wealthiest districts increased their local tax effort by $0.05 per pupil (a 6-percent increase). This increase in tax effort coupled with a 35-percent increase in tax base helped to increase local revenue by $816 per weighted pupil and served to more than offset the loss in state aid. Although the amount of local revenue raised by the poorest districts increased by $99 per weighted pupil, the increase reflects a 32-percent increase in their tax base and not their tax effort, which fell by $0.18 per pupil (a 16-percent decrease).
The poor districts’ tax effort declined despite state financial incentives to increase it, although it remained higher than that of the wealthiest districts. Achieving student equity among a state’s school districts is difficult. Legal challenges, state budget concerns, or the state’s voters generally drive changes to a state’s elementary and secondary education funding policies. In most states, however, education represents a large share of a state’s overall expenditures, and decisions are made in a political environment that generally requires compromise. Even in states that successfully negotiate compromises among several competing interests—students, taxpayers, and advocates for local control of education—the envisioned levels of funding equity among school districts may not be reached. The tools that states use to equalize district tax bases—increased state share of total education costs, increased targeting of state funds to poor districts, or both—may not be enough unless the state is willing to adopt policies that control local tax effort. In the states we reviewed, Oregon and Kansas closed the funding gap because, in addition to their strong equalization efforts, they took steps to control the tax effort of districts, as shown in table 3.6. On the other hand, efforts to close the funding gap in Rhode Island and Louisiana did not succeed because their equalization efforts, though positive, were modest and their poorer districts provided tax relief in response to the increased targeting of state aid to poorer school districts. Both states and the federal government can play a role in reducing or even eliminating funding gaps between poor and wealthy districts. 
At the state level, three tools can help reduce funding gaps: increasing the state’s share of total funding so that differences in local funding will have proportionately less effect on overall per pupil spending, increasing state-level efforts to target funds specifically to poor districts, and constraining district tax behavior. Deciding what combination of these three tools should be used depends on the equity outcomes that a state wants to accomplish for students and taxpayers. In our cost analysis of alternatives to completely eliminate state funding gaps in school year 1991-92, we found that the policy changes states would have to effect can be substantial. Overall, state efforts to eliminate their funding gaps while requiring districts to maintain their existing tax effort would require the median state share of funding to increase from about 50 to 71 percent—assuming no change in the state’s targeting effort. Alternatively, if states were to rely solely on their targeting effort without increasing their state share, a more than 200-percent increase in the median state effort to target funds to poor districts would need to occur. Such an increase would mean that some states would have to require wealthy districts to forego state aid altogether and possibly even contribute some of their local revenues to benefit poorer districts. At the federal level, two provisions in the Improving America’s Schools Act of 1994 encourage states to equalize funding among school districts. Both provisions focus on funding outcomes only—rewarding states for achieving a specific degree of student funding equity. Neither provision considers the extent to which taxpayers in poorer school districts may have contributed to this outcome by making a greater local tax effort than taxpayers in wealthier districts. State options for reducing funding gaps involve using policy tools governing state equalization efforts and local taxing behavior. 
Which policy tools a state may choose to implement depends upon the outcomes it wants to achieve. States have three tools by which to reduce funding gaps. The first two involve state equalization policies: increasing the state share of funding and increasing state targeting. Most states would probably find it easier to use a combination of these two tools rather than rely on one exclusively. To close the funding gap, however, a state may also need to use the third tool: constraining local tax behavior. A state may use this tool in three ways: (1) holding district tax efforts at current levels, (2) setting an equal local tax effort, or (3) setting a required minimum level of tax effort. A state’s equalization effort can provide more funding to poorer districts in two ways: by increasing the state share of total funding so that differences in local funding will have proportionately less effect on overall student expenditures and by increasing its targeting effort; that is, a state can adjust its approach so that aid goes more exclusively to poor districts. Some states already extensively target their aid to poor districts, while other states do not. At its most extreme, this redistribution could require the state to recapture local funding raised above a maximum amount and redistribute that funding to other, poorer school districts. Although some states may be able to choose between increasing their share of total funding or increasing their targeting effort, many states would probably need to increase both to reduce the financial impact on the state budget and on wealthier districts. The more a state can afford to increase its education spending, the less it would have to redistribute state funding—and possibly local funding—from wealthier to poorer school districts to reduce funding gaps. 
Regardless of the method used, increasing a state’s equalization effort automatically improves equity for taxpayers because it allows poor districts with a high tax effort to finance the state average funding level per pupil with less of a tax effort. Because local funding raised mainly through property taxes accounts for half of the nonfederal revenue that funds education, imposing tax constraints on localities may be necessary to close the funding gap among districts. Consequently, in pursuing student funding equity, states may have to confront taxpayer equity issues. Tax constraints may be necessary because unconstrained local reaction to changes in state equalization aid can undermine the state’s intent to improve student funding. For example, poor districts receiving additional state aid may use it for tax relief rather than for closing funding gaps. Similarly, wealthy districts receiving little or no state aid may raise local taxes, perpetuating the gaps. This kind of fiscal substitution has occurred, according to our research (see ch. 3). Therefore, ensuring that equalization efforts can reduce funding gaps will probably require states to constrain local tax effort to some degree. Tax constraints pose taxpayer equity issues. Such constraints may require districts to maintain existing tax efforts or to put forth a specified equal level of effort or a minimum level of effort. Assuming that the pattern of district tax efforts in school year 1991-92 still holds true, constraints that maintain the current tax effort would leave poor districts in most states making a greater tax effort than wealthy districts but still unable to raise as much funding as wealthy districts because of their less valuable tax base. Taxpayers may view constraints that require an equal or minimum level of tax effort as beneficial, but the constraints alone—without increasing the state’s equalization efforts—would not guarantee that districts would receive equal money for an equal effort.
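The three types of constraint can be sketched as a single rule applied to a district’s chosen levy; the mode names and mill rates here are hypothetical illustrations, not statutory terms:

```python
def constrain_rate(chosen_mills, current_mills, mode,
                   equal_mills=30.0, minimum_mills=20.0):
    """Sketch of the three constraint types discussed above.

    'maintain' freezes each district at its existing effort, 'equal'
    imposes one statewide rate, and 'minimum' only sets a floor while
    otherwise leaving local tax choice free. All rates (in mills) are
    hypothetical.
    """
    if mode == "maintain":
        return current_mills
    if mode == "equal":
        return equal_mills
    if mode == "minimum":
        return max(chosen_mills, minimum_mills)
    raise ValueError(f"unknown mode: {mode}")


# A district that wants to cut its levy from 25 to 15 mills:
print(constrain_rate(15.0, 25.0, "maintain"))  # 25.0
print(constrain_rate(15.0, 25.0, "equal"))     # 30.0
print(constrain_rate(15.0, 25.0, "minimum"))   # 20.0
```

As the discussion above notes, none of these rules by itself guarantees equal money for equal effort; that also requires a sufficient state equalization effort.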
The policy tools a state ultimately chooses to implement depend on the outcomes a state wants to achieve. States have four possible options to consider in reducing funding gaps, according to our research. Table 4.1 shows the impact of each option on different policy goals affecting students and taxpayers. These policy goals are reducing funding gaps, equalizing local tax effort, improving the amount of total revenue a district’s taxpayers can expect to obtain with an equal tax effort, and allowing freedom of local tax choice. These policy options assume that a state would increase its equalization effort by increasing its share of total education funding, increasing its effort to target funding to poor districts, or increasing both. Only the first option would require no tax constraints. In the four policy options shown in table 4.1, the state’s decision on controlling local districts’ taxing effort differs. The decision to control local tax behavior and the type of constraint used have different implications for school funding and taxpayers. The advantages and limitations of states’ using the various options appear in table 4.2. The policy options and their permutations for reducing the funding gaps and equalizing tax efforts involve varying costs to the state. In general, reducing the funding gap alone would cost the state less than any effort that also equalizes tax efforts among districts. The cost would be less because the state would rely on districts with high tax efforts to continue closing part of the gap on their own. A state using this approach would need to provide only enough money to raise funding in poor districts to a level comparable with funding in wealthier districts. If a state chose to both reduce funding gaps and equalize districts’ tax effort, its cost would tend to be higher. For most states, the funding gaps are so great that reducing or eliminating the gaps entirely would require substantially greater state funding, targeting, or both. 
Illustrating the financial implications of reducing funding gaps is difficult because the requisite decisions involve judgments about (1) the extent to which states want to close the gaps, (2) whether states want to address differences in tax efforts as well as funding gaps, and (3) what combination of tools they choose to employ. Because the number of possible combinations of these factors is nearly endless, we cannot address the consequences of every potential combination. To give a sense of the range of possibilities, however, we analyzed alternatives for eliminating the funding gaps under two scenarios: first, by allowing districts to maintain their school year 1991-92 tax effort, and, second, by requiring an equal tax effort for all districts. For each scenario, we assumed each state’s aim would be to eliminate funding gaps entirely either by relying solely on increases in the state share of funding or by relying solely on increases in tax base targeting. Relying solely on increases in the targeting effort to eliminate funding gaps in some states might require recapturing some funds raised locally by wealthy districts and redistributing these funds to poor districts.

The national median state share of total (state and local) funding for elementary and secondary education was 48 percent in school year 1991-92. The median targeting effort was 23 percent. If states were to eliminate the funding gap while holding district tax efforts at their 1991-92 levels, the median state share of funding would need to increase to 71 percent or the median targeting effort would need to increase to 73.4 percent. Eliminating the funding gap while equalizing tax effort would raise these percentages to 81 and 108 percent, respectively. In school year 1991-92, only four states provided more than a 71-percent share of total (state and local) funding, and only two states had a targeting effort above 73 percent. 
National averages provide some indication of the overall effort needed to eliminate funding gaps, but they obscure the significant variation at the state level. Although substantial increases in state funding or targeting effort would be needed to fully eliminate funding gaps nationwide, a few states could do so with far less drastic changes than others. For example, Colorado and Illinois vary considerably in the size of their funding gaps, the share of total (state and local) funding they provide, and the extent of their targeting effort:

In school year 1991-92, Colorado’s wealthiest districts had just 8 percent more funding per weighted pupil than its poorest districts. Colorado provided 44 percent of the total (state and local) funding for education, and its targeting effort in providing this funding was 75 percent.

In school year 1991-92, Illinois’s wealthiest districts had 67 percent more funding per weighted pupil than its poorest districts. Illinois provided 33 percent of the funding for education, and its targeting effort was 23 percent.

These two states would face markedly different degrees of change in equalizing their funding levels among districts (see table 4.3). If Illinois did not increase its targeting effort to further redistribute state and local funding from wealthy to poor districts, then it would have to increase its share of funding substantially. It would have to raise its state share from 33 percent to at least 78 or 81 percent, depending on whether it wanted just to close gaps or to equalize tax effort as well. In contrast, Colorado would have to increase its state share of funding from 44 to at least 45 or 57 percent. Similarly, if the two states chose not to increase the state share of education funding, then the change in targeting effort required to eliminate the funding gap would also be significantly higher in Illinois than in Colorado. These differences typify the wide variation among states. 
Figure 4.2 shows each state’s share of total funding and targeting effort in school year 1991-92 and the change necessary to eliminate funding gaps assuming an equal tax effort. The curved line running laterally through the figure indicates the various combinations of state share and targeting effort that would produce an equalization effort of 100 percent. If a state achieves 100-percent equalization, it means that the state’s school finance system enables all districts to finance 100 percent of the state average funding level per pupil with an equal tax effort. To eliminate the funding gap in Florida, for example, the state could choose to increase its share of total funding from 53 to about 62 percent or increase its targeting effort from 62 to about 89 percent. For other states, such as Illinois, Nebraska, and Massachusetts, the changes needed in state share or targeting effort would be much more substantial.

At the federal level, two programs in the Improving America’s Schools Act of 1994 have incentives that encourage states to equalize funding levels among districts. Both programs measure only the extent to which education funding is equalized. Neither program considers the extent to which a state’s equalization effort—rather than the extraordinary tax effort of poor districts—contributes to reducing funding gaps among districts. In school year 1991-92, the Department of Education certified that four states—Alaska, Arizona, Michigan, and New Mexico—had equalized their finance systems. With the data we now have available, we found that two of these states (Arizona and Michigan) had equalization efforts that were less than the national average of 62 percent. More importantly, the poor districts in three of the four states were making a greater tax effort than the wealthy districts, using the additional local funding raised to narrow even further or eliminate the funding gaps. (See table 4.4.) 
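The trade-off between state share and targeting effort can be checked numerically. The report does not state the underlying formula, but the figures quoted in this chapter are consistent with approximating a state’s equalization effort as E = S x (1 + T), where S is the state share of total (state and local) funding and T is the targeting effort. The sketch below uses that inferred approximation purely for illustration; it is not the report’s stated method.

```python
# Illustration only: approximate equalization effort as E = S * (1 + T),
# a formula inferred from this chapter's figures, not stated in the report.
# S = state share of total (state and local) funding; T = targeting effort.

def equalization_effort(state_share: float, targeting: float) -> float:
    """Approximate equalization effort for a given share and targeting."""
    return state_share * (1.0 + targeting)

def share_for_full_equalization(targeting: float) -> float:
    """State share that puts a state on the 100-percent equalization
    curve: solve S * (1 + T) = 1 for S."""
    return 1.0 / (1.0 + targeting)

# Florida, school year 1991-92: 53-percent share, 62-percent targeting.
print(equalization_effort(0.53, 0.62))    # about 0.86, short of 100 percent
print(share_for_full_equalization(0.62))  # about 0.62, matching the text

# Other endpoints quoted in the chapter also land near the 100-percent
# curve: Illinois at an 81-percent share, Colorado at a 57-percent share,
# and the national medians (48-percent share, 108-percent targeting).
for share, targeting in [(0.81, 0.23), (0.57, 0.75), (0.48, 1.08)]:
    print(round(equalization_effort(share, targeting), 2))  # about 1.0 each
```

Under this approximation, a state already targeting heavily (such as Colorado) needs only a modest increase in state share to reach the curve, while a state with both a low share and low targeting (such as Illinois) must move a long way along either axis.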
For many states, the main method for reducing or eliminating funding gaps will probably be an increase in the state share of total education funding, an increase in the state effort to target funding more specifically to poor districts, or an increase in both. The changes required would tend to be even greater if a state also sought to equalize tax effort among districts to relieve poorer districts of the need to make an extraordinary tax effort to raise the state average funding level per pupil. Even the most substantial state effort to improve funding equalization, however, may not reduce funding gaps unless it is accompanied by some constraints on local tax behavior. Where poor districts with a high tax effort use new state aid partly for tax relief and where wealthy districts replace reductions in state aid with increased local revenue, funding gaps may remain and in some cases even grow. Although the federal government has two policy tools that might further encourage greater funding equity, both reward states for funding outcomes that achieve a certain degree of equalization without considering the extent to which these outcomes may result from extraordinary local tax efforts in poor districts.

Reducing or eliminating funding gaps between poor and wealthy school districts presents states and the federal government with difficult policy decisions. For states, the first difficult decision is who will bear most of the costs of reducing these funding gaps: the state government or wealthier school districts. The states’ second decision involves whether their effort—which may be substantial—should be accompanied by constraints on local tax behavior. If so, states must decide which controls they can impose on localities. 
The less expensive alternatives are most likely to be controversial because they would severely restrict district tax choices and in many instances leave taxpayers in poor districts making a substantially greater tax effort than taxpayers in wealthy districts. Alternatives that would give taxpayers in poor districts some tax relief or allow school districts much greater freedom to choose their rates are also most likely to be controversial because they would require much more state money.

For the federal government, the first policy decision involves whether reduced funding gaps should continue to be the main focus of federal programs encouraging equalization or whether these programs should also focus on states’ efforts to equalize funding between poor and wealthy districts. The second decision involves whether to increase targeting to poor students, knowing that such targeting can affect funding equalization.

The share of education funding a state finances compared with local funding and its effort to target that funding to poor districts determine a state’s equalization effort. (See ch. 2.) The higher its share of total funding, the less a state needs to target that funding to poor districts to achieve a given equalization effort. The decision to increase the state funding share or the state targeting effort is difficult for most states because it addresses who will pay for increased equalization. A decision to increase the state funding share is a decision to fund equalization from state government resources. A decision to increase targeting effort is a decision to redistribute existing state funding from wealthier districts to poorer districts—in essence, having wealthier districts bear part of the cost to increase equalization. Where funding gaps are particularly great and the state funding share is relatively low, increased targeting might also involve redistributing local funding from wealthy to poor districts. Such recapturing can also be contentious. 
In addition, reducing local funding and holding state funding steady would also increase equalization by increasing the state share of total funding. Although this action would effectively increase equalization effort, it would also reduce total education funding in the state—which might have harmful effects. States must also decide whether to control local tax behavior. Although a state might reduce funding gaps without such constraints, those reductions would not be certain. (See ch. 3.) Constraining local tax behavior may be controversial, however, because it means the state will partially control local choices on spending for education services and, in some cases, raise taxes. For example, mandating that all districts maintain their local effort would be the state’s less costly option for reducing or eliminating funding gaps. (See ch. 4.) This choice, however, would keep poor districts with high tax efforts from using any new state funding to obtain even modest tax relief. By mandating an equal tax effort instead, states may be able to give tax relief to poor districts with high tax efforts, but this choice may raise taxes in many other, often wealthier, school districts. It also would be more costly for the state to implement. Options to maintain or to equalize local tax efforts would limit the funding districts could raise for education services as well. The tax constraint option that allows the greatest degree of local choice involves the state setting a minimum tax effort. This option would be difficult to implement, however, because the statewide minimum effort must be at least equal to the tax effort of the state’s wealthiest districts; a lower tax effort by poor compared with wealthy districts would exacerbate funding gaps. The state would have to regularly monitor district tax efforts statewide and, if necessary, raise the minimum effort to lessen the funding gaps. 
For the federal government, the difficulty is determining whether federal programs encouraging equalization should continue to focus only on reduced funding gaps between poor and wealthy districts or whether these programs should also consider the extent to which state policies are responsible for reducing those gaps. Two federal programs with equalization components operate to effectively reward a state for reducing funding gaps even if the state has not made much effort to equalize funding. Some states with low funding gaps have accomplished this outcome in part through extraordinary taxpayer effort in the poorest school districts. To encourage states to increase their equalization effort and reduce funding gaps among districts, federal policymakers could use both a performance indicator of state equalization effort and an indicator of funding gaps to reward states for their performance. To encourage states to increase their equalization effort, regardless of its impact on funding gaps, federal policymakers could replace the performance indicator of a state’s funding gap with one that measures only state efforts to equalize funding. In either case, a performance indicator of state equalization efforts used in combination with or instead of an indicator of funding gaps would better ensure that federal policy rewards those states whose funding policies lead to greater funding equity. If federal policymakers want to encourage greater state efforts to reduce funding gaps between poor and wealthy districts, then the Congress may wish to consider establishing additional incentives or incentives different from those that federal programs now have. The Department of Education provided written comments and suggested changes on a draft of this report (see app. VI). We revised our report on the basis of these comments and suggestions as they related to federal education programs when applicable. 
The Department said that this report provides important information on how well state funding is targeted to poor school districts. In addition, the Department noted, as we have shown in an earlier report, School Finance: State and Federal Efforts to Target Poor Students (GAO/HEHS-98-36, Jan. 28, 1998), that federal funds are more targeted to poor students than state funds and that federal education funding plays an important role in improving equity. Department officials said, however, that a federal policy with financial incentives for encouraging states to equalize funds would probably be insufficient without a substantial increase in funding for the title I and Impact Aid programs. In addition, they said that such a policy pursued under title I Education Finance Incentive Grants would shift funds from high-poverty states to low-poverty states under the current formula. We acknowledge that this redistribution of funds between states could occur under the current title I Education Finance Incentive Grant formula. In two previous reports, Remedial Education: Modifying Chapter 1 Formula Would Target More Funds to Those Most in Need (GAO/HRD-92-16, July 28, 1992) and School Finance: Options for Improving Measures of Effort and Equity in Title I (GAO/HEHS-96-142, Aug. 30, 1996), we provided suggestions to the Congress on how to improve targeting to states with high numbers of poor students. If those suggestions were adopted along with a performance measure encouraging states to increase equalization effort, as suggested in this report, better equalization could be encouraged while also directing more funding to high-poverty states.
Pursuant to a congressional request, GAO reviewed how well state funding is targeted to poor school districts. GAO noted that: (1) two key factors help reduce the size of the funding gap between poor and wealthy districts: (a) the extent to which a state's poor districts make a greater tax effort than the wealthy districts; and (b) a state's effort to compensate for differences in district wealth through its equalization policies; (2) poor districts in most states made a greater tax effort than the wealthy districts, according to GAO's research; (3) characterizing state equalization efforts is much more complex, however, than analyzing districts' tax efforts; (4) a state's equalization effort consists of two parts: (a) the proportion of education funding financed by the state government; and (b) the degree to which states target funds to poor districts; (5) of these two, state share has more impact on state equalization policies; (6) in effect, equalization policies determine the extent to which a state enables its districts to provide the state average funding level when all districts make an equal tax effort; (7) the most equalized school finance system would enable districts' per pupil funding to be 100 percent of the state's average per pupil funding for an equal tax effort in all districts; (8) the average state equalization effort was 62 percent, according to GAO's analysis; (9) states ranged from a high of 87 percent in Arkansas and Kentucky to a low of about 13 percent in New Hampshire; (10) increased equalization effort in the four states GAO reviewed in detail showed mixed results in reducing funding gaps between poor and wealthy districts; (11) to more successfully address funding gaps, most states would have to increase state equalization effort and impose some constraints on local tax efforts; (12) the amount of money required to reduce these funding gaps and the type of constraints needed depend on the degree to which a state may want to reduce the 
gap and the degree to which a state wants to equalize the local tax burden among districts; (13) GAO found that without constraints on local funding, districts in Louisiana and Rhode Island adjusted their tax effort in a way that undermined increases in the state's equalization effort; (14) regarding equalization effort, a state could choose to increase its share of total education funding, increase its targeting effort so that state aid would favor poor districts to a greater extent, or increase both; and (15) relying mainly on increasing its share of total funding would allow a state to bear most costs involved with increasing equalization effort.
The vulnerability of the international travel system to terrorists crossing international borders to perpetrate terrorist acts against countries’ citizens became a major concern after the terrorist attacks of September 11, 2001. Subsequently, Congress passed a series of laws that called for various measures to address weaknesses in U.S. and other countries’ foreign travel systems. The Intelligence Reform and Terrorism Prevention Act of 2004 directed the NCTC to submit to Congress a strategy for combating terrorist travel. In 2006, the NCTC issued the National Strategy to Combat Terrorist Travel. One of the strategy’s two pillars was to enhance U.S. and foreign partner capabilities to constrain terrorist mobility overseas. Among the pillar’s objectives were to suppress terrorists’ ability to cross international borders and help partner nations build capacity to limit terrorist travel. The Intelligence Reform and Terrorism Prevention Act of 2004 established the interagency Human Smuggling and Trafficking Center (HSTC) to serve, in part, as a clearinghouse for all U.S. agency information on preventing terrorist travel, and to submit annual assessments of vulnerabilities in the foreign travel system that may be exploited by international terrorists. Later, the Implementing Recommendations of the 9/11 Commission Act of 2007 called for the HSTC to serve as the focal point for interagency efforts to integrate and disseminate intelligence and information related to terrorist travel. The 2007 Act directed DHS, with the cooperation of other relevant agencies, to ensure that HSTC have no less than 40 full-time positions, including, as appropriate, detailees from DHS, State, DOJ, DOD, NCTC, the Central Intelligence Agency, the National Security Agency, and the Department of the Treasury. Presently, DHS’ U.S. Immigration and Customs Enforcement (ICE) provides the director of the center, which includes personnel from State, DHS, and the U.S. intelligence community. 
NCTC and HSTC jointly issued the first terrorist travel vulnerability assessment in 2005, and HSTC issued additional terrorist travel vulnerability assessments in 2008 and 2009. The assessments synthesize information and analyses from key stakeholders throughout the U.S. government. Specifically, HSTC officials review intelligence and other information from all relevant agencies; attend interagency working groups, interagency intelligence meetings, and other coordination meetings related to terrorist travel; review open source information from banks, nongovernmental organizations, and multinational organizations; and consult with agencies responsible for implementing programs. All relevant agencies are given the opportunity to review and comment on the drafts.

Various U.S. agencies and subagencies are involved in providing capacity building related to enhancing countries’ ability to prevent terrorist travel abroad. As shown in figure 1, counterterrorism as a whole, including preventing terrorist travel, is overseen at the policy level by the Office of the Director of National Intelligence and by the National Security Council. The Director of NCTC reports both to the President regarding executive branch-wide counterterrorism planning, and to the Director of National Intelligence regarding intelligence matters. NCTC follows the policy direction of the President and the National Security Council. State, DHS, DOD, and DOJ fund and/or implement the majority of the capacity-building programs. Within the Department of State, the Office of the Coordinator for Counterterrorism (S/CT), in addition to funding and implementing capacity-building programs, has a leading role in developing coordinated strategies to defeat terrorists abroad and securing the cooperation of international partners. S/CT works with all appropriate elements of the U.S. 
government to ensure integrated and effective counterterrorism efforts, and coordinates and supports the development and implementation of all U.S. government policies and programs aimed at countering terrorism overseas. As shown in table 1, the U.S. government has identified four key gaps in foreign countries’ capacity to prevent terrorist travel overseas.

HSTC and NCTC vulnerability assessments have identified sharing information about known and suspected terrorists as one key gap in foreign partners’ capacity to prevent terrorist travel. For example, some countries do not have their own database systems with terrorist screening information or access to other countries’ terrorist screening information, which contains biographical or biometric information about individuals who are known or suspected terrorists. Even when countries have terrorist screening information, they may not have reciprocal relationships to share such information or other travel-related information, such as airline passenger lists, with other countries, thereby limiting their ability to identify and prevent the travel of known and suspected terrorists. In addition, some countries do not have access to or fully use biometric information, which provides a unique identifier for each person, such as a fingerprint. For example, Pakistan has a centralized fingerprint database, but it is not shared across all law enforcement agencies, making the database less comprehensive and, as a result, making it more difficult for Pakistani government officials to prevent potential terrorists from traveling.

A second key gap in foreign partners’ capacity relates to their ability to address the use of fraudulent travel documents. For instance, in many countries, fraudulent travel documents, including fraudulent passports and visas, are easy to obtain, and could thereby be used by people who want to travel under a false identity. 
In addition, some countries’ failure to consistently report lost or stolen passports to the International Criminal Police Organization (INTERPOL) or to access INTERPOL’s database that stores information on lost and stolen passports can facilitate the use of legitimate passports by imposters. According to U.S. embassy officials we spoke with in Kenya, this is a common occurrence there: individuals who resemble a Somali-American with a legitimate travel document will fraudulently use that travel document for illicit travel. Another common issue related to fraudulent travel documents is using fraudulent “breeder documents,” such as birth certificates and drivers’ licenses issued to support a person’s false identity, to obtain genuine passports. The issue of fraudulent documents is further compounded by the absence of visa requirements for travel to some countries. For example, according to a former Pakistani official who had responsibilities related to immigration enforcement, fraudulent British passports are the most prevalent type of fraudulent travel document in Pakistan. Since British citizens are not required to obtain visas to travel to many countries, a terrorist could use one of these fraudulent passports to travel to many countries without the further background checks that would occur through a visa adjudication process.

The third key gap identified in the NCTC and HSTC assessments is some countries’ inability to ensure the security of their passport issuance systems. The passports from some countries are of low quality and are therefore easily stolen or counterfeited. For example, 18 countries still use passports that are not machine readable, and almost half of all countries use passports without biometric information stored electronically inside the passport. Such biometric information can include facial and fingerprint data, and can be used to authenticate the identity of travelers. 
In addition, once countries convert to biometric passports, previously issued passports may be valid for up to 10 years from their issuance dates. A fourth key gap in some foreign countries’ capacity to prevent terrorist travel is in combating corruption in passport issuance and immigration agencies. Corruption in government agencies relevant to travel can facilitate the illicit travel of terrorists or other criminals. For example, corruption in passport issuance agencies can allow potential terrorists to obtain genuine passports under a false identity or blank passports that can be easily manipulated. U.S. embassy officials in Kenya told us that such false passports can be obtained for just a few hundred dollars in some cases. Further, corruption within countries’ immigration agencies, such as border patrol or civil aviation officials with immigration duties, leaves a country’s immigration system vulnerable to human smugglers and traffickers who often have established relationships with these corrupt officials. For example, according to U.S. embassy officials in Kenya, illicit travel facilitators are known to stand outside the airport and indicate to corrupt immigration officials through the window which individuals they should let pass without checking their passports. In addition, according to the HSTC terrorist travel vulnerability assessments, countries that are known for having corrupt immigration officials are more likely to be used by terrorists as transit countries so that the terrorists can avoid interdiction. U.S. government foreign capacity-building programs and activities address to some degree most of the key gaps identified by the U.S. government in foreign governments’ ability to prevent terrorist travel overseas. 
As shown in table 2, three of the four key gaps—sharing information about known and suspected terrorists, addressing the use of fraudulent travel documents, and ensuring passport issuance security—have been the subject of some programs and activities. However, with regard to U.S. programs addressing the use of fraudulent travel documents, we found potential for overlap and duplication of effort, as multiple agencies that fund and implement numerous training courses do not always coordinate their activities. While the U.S. government has many efforts aimed at helping foreign countries to combat corruption, none focus on the fourth gap of corruption related to passport issuance and immigration agencies. Multiple federal efforts are aimed at improving information sharing about known and suspected terrorists. First, State/S/CT’s Terrorist Interdiction Program (TIP) enables immigration officials in countries at risk of terrorist activity to identify the attempted travel of known or suspected terrorists through the provision of a computerized system called the Personal Identification, Secure Comparison, and Evaluation System (PISCES). TIP provides participating countries with the PISCES software, hardware, and equipment to operate the software; any required maintenance and expansion of the system; and training on how to use it. During fiscal year 2010, the PISCES system processed an estimated 150,000 travelers per day entering or exiting 17 participating countries through ports of entry with PISCES installations. In fiscal year 2010, State began to upgrade the PISCES software with biometric capabilities that further enhance host countries’ capacity to interdict terrorists attempting to travel under a false identity. 
Second, State’s Bureau of International Narcotics and Law Enforcement Affairs (INL) has funded at least two projects to provide different types of database systems to foreign law enforcement authorities to help them screen for potential terrorist or criminal travelers. These projects are implemented through the DOJ/Criminal Division’s International Criminal Investigative Training Assistance Program (ICITAP), a broad law enforcement development program that caters its program offerings to fit the host country’s needs. First, in Bosnia and Herzegovina, ICITAP has provided the State Police Information Network to Bosnian border officials to allow them to link to INTERPOL databases to identify criminals who could then be denied entry to the country. Second, ICITAP has provided a separate system, the Total Information Management System, to Albania to enhance the country’s capacity to screen for known terrorists. According to State, the governments of Kosovo and Albania are discussing adapting certain elements of the Total Information Management System for use in Kosovo as well. Third, INL and State’s Bureau of International Security and Nonproliferation have provided funding to DHS’ U.S. Customs and Border Protection (CBP) to arrange trips for foreign officials to come to the United States to learn about how CBP uses and analyzes terrorist screening information. These trips are organized through the International Visitors Program, through which CBP arranges briefings and visits to CBP operations in the United States by foreign high-level customs and other law enforcement officials who perform or manage functions similar to those encompassed within CBP’s area of responsibility and expertise. In fiscal year 2010, CBP organized 22 visits by foreign officials for this purpose. Fourth, the United States enhances other countries’ ability to prevent terrorist travel abroad by sharing terrorist screening information with other countries. 
Under Homeland Security Presidential Directive 6 (HSPD-6), the Terrorist Screening Center within the DOJ’s Federal Bureau of Investigation (FBI) and the Terrorism Information Sharing Office within State/S/CT negotiate agreements with foreign countries to systematically share terrorist screening information, thereby enhancing both countries’ abilities to prevent terrorist travel abroad through immediate and systematic access to information on known and suspected terrorists. Once the United States has signed an HSPD-6 agreement with a foreign country, the Terrorist Screening Center then shares the information agreed to with the foreign partners. As of May 2011, the Terrorist Screening Center shared terrorist screening information with 23 foreign countries. In addition to the systematic information sharing on known and suspected terrorists that occurs through HSPD-6 agreements, the Terrorist Screening Center also has had approximately six one-time arrangements for sharing terrorist screening information with countries hosting special events. Fifth, DHS leads an interagency negotiating team, on which State/S/CT and State’s Bureau of European and Eurasian Affairs also serve, that is involved in renegotiating a 2007 agreement between the United States and the European Union on the exchange of Passenger Name Records data. Once a country has the capacity to analyze this type of information provided by airlines on its passengers, the country is able to prescreen airline passengers against terrorist screening information, thereby helping it to prevent terrorists from traveling abroad. The European Union is now considering developing such a system, and CBP has hosted officials from the European Union for briefings on how the United States analyzes Passenger Name Records data. According to State and DOJ officials, capacity-building efforts related to information sharing about known and suspected terrorists face some challenges.
Some countries have expressed concerns about the privacy and protections related to the sharing of sensitive terrorist screening information. For example, European countries that have negotiated HSPD-6 agreements with the United States have been concerned about data protection, redress, and privacy policies and procedures in both utilizing terrorist screening information from the United States and sharing terrorist screening information with the United States because of differences between U.S. and European laws. According to officials from the Terrorist Screening Center, such differences can include the countries’ statutes of limitations that delineate how long they can keep derogatory information. According to State officials, another related challenge is that providing information to foreign countries involves a loss of control over the information and creates the possibility that the information could be compromised through internal corruption. To address both challenges, the United States and the foreign governments negotiate on specific information-sharing mechanisms and protections that are feasible and acceptable to both sides. Seven different U.S. government entities across three federal agencies are involved in providing fraudulent travel document training to foreign government officials, as shown in figure 2. In delivering the training, agencies have similar objectives and often provide the training to the same populations (e.g., immigration officials and law enforcement officials) to develop their skills in recognizing the characteristics of altered, counterfeit, or other fraudulent travel documents. U.S. law enforcement officials working overseas from DHS/ICE and State’s Bureau of Diplomatic Security (DS) provide the bulk of training in the recognition of fraudulent travel documents to foreign immigration and law enforcement officials. 
Specifically, attachés from DHS/ICE and in-country representatives from State/DS provide such training under the dual objectives of preventing terrorist travel and protecting U.S. interests. For example, in fiscal year 2010, ICE attachés provided 360 training courses, briefings, and outreach sessions on fraudulent travel document recognition, and State/DS staff posted overseas provided 458 related training courses. In addition, State/S/CT and State/DS implement the Anti-Terrorism Assistance (ATA) program, which focuses on building foreign law enforcement officers’ counterterrorism capabilities. ATA provides fraudulent travel document recognition training as part of achieving program goals related to preventing terrorist travel abroad. In fiscal year 2010, 12 of the more than 350 courses provided by ATA were fraudulent travel document recognition courses. These courses were provided to law enforcement officials from 17 of the approximately 60 countries that received ATA training in fiscal year 2010. Other U.S. foreign capacity-building programs have implemented fraudulent travel document recognition courses, although their missions are not directly related to preventing terrorist travel abroad. State/INL provides funding for U.S. law enforcement agencies, including ICE, CBP, and the FBI, to implement the International Law Enforcement Academies (ILEA), which provide a general law enforcement training program that also includes some specialized training on how to combat certain criminal activities, including fraudulent travel documents. In fiscal year 2010, the ILEAs provided two courses specifically on fraudulent travel document recognition to law enforcement officials from 13 countries; in addition, ICE provided training on this topic five times that fiscal year as part of the general law enforcement training offered at the ILEA in San Salvador.
In addition, State/INL has provided funding to multiple entities to provide training in fraudulent travel document recognition. First, State/INL provides funding to CBP for related training, such as for fraudulent travel document training provided to Moroccan officials in fiscal year 2010 and for CBP’s International Visitors Program, which, in fiscal year 2010, arranged six trips to the United States for foreign officials to learn how to recognize fraudulent travel documents. Also, State/INL has provided funding to the Organization of American States to deliver training in fraudulent document recognition throughout the Western Hemisphere and to the United Nations Office on Drugs and Crime to develop a manual on how to examine travel documents to determine their authenticity. The Transportation Security Administration (TSA) within DHS funds Aviation Security Sustainable International Standards Teams, which build select countries’ aviation security through related training, technical assistance, and overall security assessments, in cases when these countries are having difficulty meeting International Civil Aviation Organization (ICAO) aviation security standards. In fiscal year 2010, as part of this effort, TSA funded one fraudulent travel document training course in Liberia, which was taught by ICE and CBP, as part of fulfilling that country’s needs to meet ICAO standards related to detecting fraudulent travel documents. CBP’s Office of International Affairs has funded some fraudulent travel document recognition training related to its mission to enhance international border security. In fiscal year 2010, CBP funded one course in fraudulent document recognition for Mexican law enforcement officials. In addition to training provided by ICE attachés, ICE’s Office of International Affairs funds some additional fraudulent travel document recognition training courses, which involve ICE officials traveling from Washington, D.C., to instruct the courses. 
In fiscal year 2010, ICE funded four such training sessions for representatives from at least nine countries. Finally, the FBI has at times been involved in the provision of fraudulent travel document recognition training to foreign law enforcement officials, although it did not fund or implement any such training in fiscal year 2010. In March 2011, the FBI organized a training session for Indonesian officials in that country’s police, state intelligence, public corruption commission, customs, immigration, military, and prosecutor’s offices, a portion of which involved fraudulent travel document training that was provided by ICE and State/DS. Our past work on issues that cut across multiple agencies shows that without a coordinated approach, programs can waste scarce funds and limit the overall effectiveness of the U.S. government’s efforts. GAO has found that, while collaboration among federal agencies can take different forms, practices that generally enhance collaboration include agreeing upon agency roles and responsibilities and identifying and addressing needs by leveraging resources. GAO has further suggested that program officials require sufficiently detailed information to enable them to carry out their duties and responsibilities effectively, while collaborating when necessary to increase their efficiency. State/S/CT officials told us they were unaware of how many agencies and subagencies are involved in providing fraudulent travel document training to foreign officials, and they had not developed any mechanism to encourage coordination among all the parties involved. At the country level, we found that agency officials at two of the posts we visited did not always collaborate on the delivery of fraudulent travel document recognition training. As a result, some planned training was duplicative and did not make an effective use of limited resources. 
For example, during our March 2011 visit to Pakistan, we identified two agencies planning to provide fraudulent travel document recognition training courses in April 2011 to Pakistani officials from the same agency without coordinating with one another. The ICE attaché planned one course that had a full roster of students but lacked funding, while ATA was simultaneously planning to hold two fully-funded fraudulent travel document courses in the same month although they had no students signed up for either course. Meanwhile, the ICE attaché had been certified through a train-the-trainer course provided by ICE’s Forensic Document Laboratory to be an instructor for fraudulent travel document recognition courses. Since ATA program officials were unaware of the existence of this local resource, the ATA program was still attempting to find two instructors from ICE to travel to Pakistan to teach the courses they were planning. In addition to potentially adding to program costs by not using the locally available instructor, this lack of coordination also could have unnecessarily increased demand on the Forensic Document Laboratory’s resources. The Forensic Document Laboratory is one of the primary sources of instructors for ATA courses in fraudulent travel documents. Officials from the Forensic Document Laboratory in Washington, D.C., told us they provide train-the-trainer courses to make up for their lack of sufficient staff to fulfill all the training requests from overseas programs like ATA. In Kenya, we found that representatives from two U.S. agencies, State and DHS, deliver fraudulent travel document training but do not collaborate. The ATA program, which is run by a contractor hired by State/DS in Kenya, provided approximately one course per year from fiscal year 2007 to 2010 in fraudulent travel documents to police and security officers, customs and immigration officers, forensic specialists, and training officers. 
A representative of State/DS posted overseas also provides many training courses in fraudulent travel documents for immigration officials. The CBP attaché, who represents DHS at the post, has provided many training courses on this topic to airport and border officials, as well as speaking on the topics of fraudulent travel documents, imposter recognition, and human trafficking to students in the Kenyan Immigration Service’s basic training. Despite these three representatives providing this similar training, a representative from one of the agencies stated that although he coordinated with other countries providing similar training in Kenya, he did not do so with other U.S. agencies. State’s Bureau of Consular Affairs attempts to build foreign partners’ capacity to address the issue of fraudulent travel documents by encouraging countries to report lost and stolen passports to INTERPOL and to access INTERPOL’s database to check against travelers arriving at ports of entry to identify and interdict people misusing passports. According to INTERPOL, as of June 2011, the total number of countries contributing lost and stolen passport information was 158; and some of these have connected border checkpoints to INTERPOL’s system for automated checking against its database. To facilitate the interdiction of people misusing lost and stolen passports, Consular Affairs also assisted in the drafting of a set of global standards for national management of lost and stolen passport data, which was provided to ICAO for adoption as a part of the global travel document standards. DHS’ Office of Policy has also played a role in enhancing other countries’ capacity to report information about lost and stolen passports. First, they have participated in ongoing efforts to revise INTERPOL’s procedures for the reporting of lost and stolen passport information to enhance the capabilities and compliance of such reporting by INTERPOL members. 
Similarly, to improve foreign partners’ ability to detect fraudulent travel documents, DHS’ Office of Policy has provided technical assistance towards the development of a pilot program to enhance the sharing of information related to fraudulent document alert data between members of the Group of Eight and INTERPOL. Two agencies, State and USAID, have undertaken foreign capacity-building activities to improve other countries’ passport issuance security. State’s Bureau of Consular Affairs, with its mission of issuing secure U.S. passports to traveling Americans, is involved in some efforts to enhance foreign countries’ passport issuance security. Consular Affairs has contributed to diplomatic efforts through ICAO to promote other countries’ use of machine-readable passports and passports with biometric features. For example, it was involved in the development and promotion of ICAO’s standards for machine-readable passports published in September 2006. These standards are related to a requirement that countries use machine-readable passports by April 2010, and also provided specifications for biometric enhancements that could be made to electronic passports. Consular Affairs has also, since 2009, provided briefings to representatives from over 50 passport issuance authorities on the elements of secure passports. For example, in 2010, Consular Affairs organized the training of a delegation from Turkey’s passport office in Washington, D.C., which included briefings and organized tours of the Washington Passport Agency and the U.S. Government Printing Office. State/INL is funding Consular Affairs to provide passport antifraud training to officials from foreign passport issuance agencies, which will first be piloted in fall 2011.
This training is designed to improve the integrity of other countries’ passports and passport issuance by helping them institute organizations, processes, and procedures for detecting fraudulent passport applications as part of their adjudication and issuance processes. In addition, USAID provided technical assistance to the Paraguayan Ministry of Interior and National Police to reform Paraguay’s identification system, including its national identity cards and passports. According to USAID, the prior identification system in Paraguay was not in compliance with international security standards and was vulnerable to corruption. Implementation of the new integrated national identity card and passport system involved providing information technology improvements, as well as training on how to collect citizens’ biometric data and on how to manage the new system. Entries in the new national database now include biometric identifiers, including fingerprints, photographs, and signatures, all of which are automatically verified upon entry into the database for their compliance with international standards. In addition, passports were redesigned and upgraded to ICAO requirements, resulting in more secure documents that are less susceptible to fraud. While the U.S. government, through USAID and Millennium Challenge Corporation (MCC) anticorruption foreign capacity-building programs and State-led diplomatic efforts, has many efforts aimed at helping foreign countries to combat corruption, no U.S. government effort focuses directly on combating corruption in countries’ passport issuance and immigration agencies. USAID has developed a wide range of programs for fighting corruption, often fit to the needs and opportunities of the recipient country. Some USAID anticorruption programs focus on a few specific sectors, including tax collection, customs collection, and the financial sector. 
In addition, USAID also has programs that have a broader effect on combating corruption, such as civil society programs to increase public awareness, promote citizen involvement and participation, and encourage civil society oversight of government; programs to decentralize powers to local governments; rule of law programs to improve the justice sector and thereby the ability to prosecute corruption cases; and programs to build anticorruption agencies within foreign governments. While not specifically targeting passport and immigration agencies, these broad anticorruption programs may have a beneficial, indirect effect on these countries’ abilities to combat corruption in passport issuance and immigration agencies, thereby indirectly helping to prevent terrorist travel abroad. Similarly, MCC has multiple anticorruption efforts across the 38 countries to which the MCC provides assistance. These anticorruption efforts include encouraging countries to: pass stronger anticorruption laws, strengthen oversight institutions, open up the public policy-making process to greater scrutiny, and increase corruption-related investigations and prosecutions. Such efforts, although not directly focused on passport issuance and immigration agencies, also may have a beneficial, indirect effect on these countries’ abilities to combat corruption in these agencies, thereby indirectly helping to prevent terrorist travel abroad. In addition, State has been involved in diplomatic efforts to discourage corruption in foreign countries. Multilaterally, State has advocated for the implementation of the UN Convention against Corruption, which came into force in December 2005 and provides a comprehensive set of standards, measures, and rules that all countries can apply in order to strengthen their legal and regulatory regimes to fight corruption. 
State has encouraged and provided financial support for the development and launch this year of a peer review process through which countries will show how they are complying with their commitments under the UN Convention. In many countries, as well as through regional workshops in Africa, State engages in efforts to encourage or support countries in combating corruption, such as through encouraging the investigation and prosecution of corruption cases. While none of these efforts focus directly on passport issuance or immigration agencies, their goal is to strengthen the overall laws, institutions, and capacity to prevent and prosecute corruption, which, according to State, should improve the integrity and effectiveness of all government functions and agencies. The U.S. government lacks performance measures to assess governmentwide progress in closing the key gaps in foreign partners’ capacity to prevent terrorist travel overseas. Performance measurement enables decision makers to make informed policy and budget decisions. At the national level, U.S. counterterrorism strategies lack performance measures related to capacity building to prevent terrorist travel. Similarly, none of State, DOD, DHS, DOJ, or USAID has established such measures to accompany its agencywide strategy. Components of some agencies have relevant performance measures at the program level, but they cover only one of the four key gaps. Without comprehensive measures that encompass all U.S. government agency efforts, the U.S. government cannot determine governmentwide progress in building foreign partners’ capacity to prevent terrorist travel. As we have previously reported, performance information is essential to enable decision makers to make informed decisions. Specifying performance metrics is one tool used in evaluating the effectiveness of government efforts. Agencies can also use performance information to make various types of management decisions to improve programs and results.
In addition, as we have also reported, many federal efforts transcend more than one agency. Closing the gaps in foreign partners’ capacity to prevent terrorist travel is an example of such an issue, since it involves efforts funded and implemented by several agencies. In such situations, we have reported that it is important to have full information on how cross-cutting goals will be achieved. The Intelligence Reform and Terrorism Prevention Act of 2004 highlighted the importance of constraining terrorist travel and directed NCTC to submit a strategy that combined terrorist travel intelligence, operations, and law enforcement into a cohesive effort to intercept terrorists, find terrorist travel facilitators, and constrain terrorist mobility domestically and internationally. The resulting NCTC 2006 National Strategy to Combat Terrorist Travel lists some U.S. government activities related to helping partner nations build capacity to limit terrorist travel but contains no performance measures to assess governmentwide progress. Similarly, the National Security Council, which coordinates national security and foreign policy among various U.S. government agencies, issued the National Strategy for Combating Terrorism in September 2006, which established the goal of disrupting terrorist travel internationally through various means, including building international capacity to secure travel and combat terrorist travel. In June 2011, the President issued the National Strategy for Counterterrorism, which again highlighted the importance of enhancing the capacity of foreign partners to prevent terrorist travel across national borders. However, these unclassified strategies lack performance measures related to foreign capacity building to prevent terrorist travel. 
We examined individual agency strategies for the agencies funding and/or implementing foreign capacity-building programs and activities related to preventing terrorist travel, including for State, DHS, DOD, DOJ, and USAID. We found that each agency’s strategy acknowledged the important role the agency plays in combating international terrorism. However, none of the agencies’ strategies contained performance indicators to measure progress related to helping countries close the key gaps in their ability to prevent terrorist travel. Some agency components have made efforts to track the performance of their specific program efforts aimed at improving information sharing about known and suspected terrorists—one of the four key gaps. None of the agencies have performance measures related to the other three key gaps in foreign partners’ capacity to prevent terrorist travel. Related to information sharing, State’s S/CT and Director of U.S. Foreign Assistance have performance indicators for TIP that address sharing information on known and suspected terrorists. In fiscal year 2009, S/CT created the performance indicator—the percentage of the highest priority countries capable of screening for terrorists through TIP/PISCES that receive biometric capabilities. The target for that performance indicator for fiscal year 2010 was that 50 percent of the 17 countries currently supported by TIP would have biometric capability. No fiscal year 2010 results have yet been publicly reported for this measure. The Director of U.S. Foreign Assistance’s performance measure for TIP is the number of ports of entry supported by TIP. Figure 3 shows the increase in the number of ports of entry supported by TIP, and the annual targets, from 2006 to the present. State’s country-level plans also sometimes contain performance measures for U.S. counterterrorism efforts in that country. For example, State has performance measures in its 2012 mission strategic plans for Kenya and Thailand. 
For Kenya, the performance measure is—the government of Kenya should demonstrate capacity and resolve to prevent and respond to threats of terrorism by, among other things, expanding TIP/PISCES coverage to additional border crossings. For Thailand, the performance measure is—Thailand should develop effective export control and border security systems that meet international standards by installing new software for TIP/PISCES at targeted airport locations and expanding the program to new ports of entry. Finally, DOJ/FBI also has two performance measures related to the information sharing gap that assess the Terrorist Screening Center’s efforts to share terrorist screening information with foreign partners. The FBI has not set targets for either of these measures. Overall, these relatively narrow agency-specific measures that exist do not provide a comprehensive basis for assessing governmentwide progress in building foreign partners’ capacity for two reasons. First, they necessarily focus on specific program efforts, not governmentwide progress. Second, they cover only one of the four key gaps in the capacity of foreign countries to prevent terrorist travel overseas. Inhibiting the movement of terrorists across international borders is a key part of the U.S. strategy for protecting the United States and its interests abroad. Although agencies have implemented significant new domestic programs to prevent terrorists from entering the United States, events of the past few years illustrate that the international travel system is only as secure as its weakest link. As a result, the United States seeks to enhance the capacity of its foreign partners to prevent terrorist travel overseas, with agencies implementing a variety of programs and activities to close key gaps in our foreign partners’ capacity. 
However, some of these efforts—such as improving foreign partners’ capacity to prevent the use of fraudulent travel documents—are not always well coordinated and create the risk of duplication and overlap. In light of the limited resources available to address these important issues, it is critically important to ensure that such resources are used efficiently. Further, while more than 5 years have passed since the National Strategy to Combat Terrorist Travel linked our foreign partners’ capacity to constrain terrorist travel to our own national security, the U.S. government still lacks an effective system for measuring and reporting progress toward the goal of enhancing our foreign partners’ capacity. As agencies implement the new National Strategy for Counterterrorism, it is important to focus on measuring, tracking, and reporting on governmentwide progress toward the goal of enhancing foreign partners’ capacity to prevent terrorist travel. Without such information, the U.S. government cannot efficiently assess the effectiveness of its efforts and planners and decision makers may lack information vital to addressing foreign policy needs and leveraging U.S. resources. In order to institute a coordinated approach for delivering fraudulent travel document recognition training overseas to ensure that U.S. agencies prevent overlap and duplication; and given State’s role in working with all appropriate elements of the U.S. government to ensure integrated and effective international counterterrorism efforts, we recommend that: State develop a mechanism for agencies involved in funding and implementing fraudulent travel document recognition training at overseas posts to coordinate the delivery of such training to foreign partners. To allow the U.S. 
government to determine the extent to which it is building foreign partners’ ability to prevent terrorist travel abroad and to make adjustments to improve its programs accordingly, we recommend that: The National Security Council, in collaboration with relevant agencies, develop a mechanism to measure, track, and report on U.S. progress across the government toward its goal of enhancing foreign partners’ capacity to prevent terrorist travel. We provided a draft of this report to State, DHS, DOD, DOJ, the Department of Transportation, USAID, NCTC, and the National Security Staff of the National Security Council. DHS and State provided written comments, which are reprinted in appendixes III and IV, respectively. State, DHS, DOJ, and NCTC provided technical comments, which we incorporated where appropriate. DOD, the Department of Transportation, USAID, and the National Security Staff did not provide any comments on the draft. In commenting on a draft of this report, State agreed with our recommendation that it should develop a mechanism to enhance coordination among the agencies involved in funding and implementing fraudulent travel document training overseas. State noted that efforts to enhance such coordination have begun at the country level, and that coordination in this area is also needed in terms of strategic, budget, and program planning at the agencywide and interagency levels. In addition, DHS, in its letter commenting on our report, indicated its commitment to working with other relevant agencies to stop terrorists from traveling across international borders, including through contributing to coordinated efforts to prevent any overlap and duplication. 
Regarding our recommendation to the National Security Council to work with relevant agencies to develop a mechanism to measure, track, and report on governmentwide progress toward its goal of enhancing foreign partners’ capacity to prevent terrorist travel, the National Security Staff did not provide any comment. However, in previous meetings with us, the National Security Staff acknowledged the need for such a mechanism. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of the report to the Secretaries of Defense, Homeland Security, Justice, State, and Transportation; the Administrator of the U.S. Agency for International Development; the Director of the National Counterterrorism Center; the National Security Staff of the National Security Council; and other interested parties or interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7331 or at JohnsonCM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. In this report, we (1) identified the key gaps the U.S. government has assessed in foreign countries’ capacity to prevent terrorist travel overseas, (2) evaluated how U.S. foreign capacity-building efforts address those gaps, and (3) assessed the extent to which the U.S. government is measuring progress in its efforts to close those gaps. Our work focused on the efforts of the Departments of State (State), Homeland Security (DHS), Defense (DOD), and Justice (DOJ) to build foreign partners’ capacity to prevent terrorist travel overseas.
Within these agencies, we met with officials from several relevant components that are contributing to the U.S. government goal of enhancing foreign partners’ ability to prevent terrorist travel, including: State’s Office of the Coordinator for Counterterrorism (S/CT), Bureau of Diplomatic Security (DS), and Bureau of International Narcotics and Law Enforcement Affairs (INL); DHS’s U.S. Immigration and Customs Enforcement, U.S. Customs and Border Protection, Transportation Security Administration (TSA), and Office of International Affairs; and DOJ’s Federal Bureau of Investigation and Criminal Division. We focused on these agencies and components as a result of our assessment of agency efforts noted in the National Strategy to Combat Terrorist Travel, our review of information in previous and ongoing GAO work in counterterrorism and aviation security, and discussions with U.S. agency officials regarding the agencies with which they collaborate. To obtain examples of U.S. efforts and a more in-depth understanding of specific countries’ participation in U.S. capacity-building programs designed to prevent terrorist travel overseas, we selected four countries in which to conduct field work. We selected Kenya, Pakistan, the Philippines, and Thailand, based on criteria that included: designation as a terrorist safe haven, presence of key U.S. agency personnel at post, and coverage of regions key to counterterrorism. In each location, we met with U.S. government personnel involved in capacity building to prevent terrorist travel abroad to learn about the key gaps in those countries’ abilities to prevent terrorist travel overseas, the types of capacity-building activities they undertake related to preventing terrorist travel, and how they measure progress and report results.
We also met with foreign government officials in three of the four countries to learn about the challenges they face in improving their ability to prevent terrorist travel abroad and their perspectives on the effectiveness of U.S. efforts. To identify what the U.S. government has assessed to be the key gaps in foreign partners’ capacity to prevent terrorist travel overseas, we reviewed the NCTC and Human Smuggling and Trafficking Center’s (HSTC) terrorist travel vulnerability assessments from 2005, 2008, and 2009. Based on interviews with the HSTC, we learned that these are the only comprehensive U.S. government assessments of vulnerabilities within the foreign travel system. We reviewed all three documents to identify the key gaps because, according to HSTC officials, each assessment is not comprehensive. Rather, they are additive, so the assessments taken together represent a full picture of the vulnerabilities. We performed our review of these assessments by noting instances when certain gaps, threats, vulnerabilities, or areas for improvement to the international travel system generally or related to specific foreign countries were discussed. For the purposes of this review, we considered gaps to be threats, vulnerabilities, and areas for improvement mentioned in the assessments. The parts of the assessments that identify vulnerabilities limited to the U.S. travel system were not included within our analysis since they did not relate to the scope of our review. To distinguish between the key gaps identified in these reports and other identified vulnerabilities that were not key gaps, we reviewed the frequency with which each gap/vulnerability was mentioned in the reports. The HSTC confirmed our summary of the key gaps and other vulnerabilities. We also consulted with agency officials at headquarters, the missions in our example countries, and the intelligence community to identify examples of the key gaps in each country and corroborate our findings.
To evaluate how U.S. foreign capacity-building programs address those gaps, we examined relevant documents including program descriptions, and agency- and program-level strategic documents, including the 2012 Mission Strategic and Resource Plans. We conducted interviews with agency officials from State, DHS, DOJ, DOD, the Department of Transportation, and the U.S. Agency for International Development (USAID), in Washington, D.C., and in our example countries where officials were involved in relevant capacity-building programs. We also interviewed officials from the NCTC and National Security Staff. To show the level of different agencies’ involvement in the delivery of fraudulent travel document recognition training to foreign officials, we requested data from all relevant agencies on the number of such courses that they funded and implemented in fiscal year 2010. We determined that these data were sufficiently reliable for our purposes. To assess the extent to which the U.S. government is measuring progress in its efforts to enhance foreign partners’ ability to constrain terrorist travel overseas, we analyzed relevant U.S. planning and evaluation documents including the 2006 National Strategy to Combat Terrorist Travel, the 2006 National Strategy for Combating Terrorism, the 2008 National Implementation Plan for the War on Terror, and the 2011 National Strategy for Counterterrorism. We also reviewed the relevant agency strategic documents for State, DHS, DOD, DOJ, and USAID. The State documents included the fiscal year 2012 strategic and resource plans of the bureaus of S/CT, DS, INL and Consular Affairs as well as the fiscal year 2012 Mission Strategic and Resource Plans of our example countries. We determined that State’s data on performance indicators for the Terrorist Interdiction Program were sufficiently reliable for our purposes. 
To identify what have been the reported results of these efforts, we reviewed relevant agency reports including: State’s Annual Report on Assistance Related to International Terrorism from fiscal year 2009, strategic and resource plans of the bureaus of S/CT, DS, INL, and Consular Affairs as well as the Mission Strategic and Resource Plans of our example countries, DOJ performance reports, and the DHS Annual Performance Report for fiscal years 2008–2010. We also discussed progress with officials at headquarters and at the missions of our example countries. Multiple agencies are involved in many programs and activities to build the capacity of foreign countries to address vulnerabilities in their aviation and border security, as shown in table 3. Since countries can have both land and water borders, we include both land border and maritime security programs under border security. For both aviation and border security programs, we include only programs that include elements relating to preventing illicit passenger travel. We have not included other aviation or border security programs that focus only on preventing illicit cargo shipments. Key contributors to this report include Jason Bair, Assistant Director; Nina Pfeiffer; Heather Latta; Julia Jebo; Eileen Larence; Eric Erdman; Kevin Copping; Amber Keyser; Martin De Alteriis; Mary Moutsos; and Lynn Cothern. Additional support was provided by Thomas Lombardi, Jan Montgomery, and Justin Schnare.
Eliminating the threat of terrorist attacks continues to be a primary U.S. national security focus. According to the 9/11 Commission, constraining the mobility of terrorists is one of the most effective weapons in fighting terrorism. This report (1) describes key gaps the U.S. government has identified in foreign countries' capacity to prevent terrorist travel overseas, (2) evaluates how U.S. capacity-building efforts address those gaps, and (3) assesses the extent to which the U.S. government is measuring progress in its efforts to close those gaps. To identify the key gaps, GAO reviewed governmentwide assessments of vulnerabilities in the international travel system. GAO reviewed the strategies and documentation of U.S. agencies funding and/or implementing foreign capacity-building efforts to prevent terrorist travel overseas, including those of the Departments of State (State)--which coordinates U.S. efforts overseas--Defense (DOD), Homeland Security (DHS), Justice (DOJ), and the U.S. Agency for International Development (USAID). GAO also interviewed officials from the National Security Staff of the National Security Council (NSC), which oversees counterterrorism policy. GAO met with these agencies and conducted field work in Kenya, Pakistan, the Philippines, and Thailand. The U.S. government has identified four key gaps in foreign countries' capacity to prevent terrorist travel overseas. U.S. government foreign capacity-building programs and activities address these gaps to varying degrees. For instance, as one of the U.S. efforts to enhance foreign partners' sharing of information about known and suspected terrorists, State's Terrorist Interdiction Program provides participating countries with hardware and software to develop, maintain, and use terrorist screening information. In fiscal year 2010, nearly 150 ports of entry overseas were using this program.
With regard to addressing the use of fraudulent travel documents, GAO found the potential for overlap and duplication since seven components of three federal agencies are involved in providing training on fraudulent travel document recognition to foreign government officials, with no mechanism to coordinate such training. In two countries GAO visited, there was a lack of collaboration among agencies funding and implementing training on this topic. For example, in Pakistan, State and DHS were both planning to hold fraudulent travel document training for the same Pakistani agency during the same month without knowing of the other's plans. Regarding helping countries improve the security of their passport issuance, State and USAID have multiple efforts, including State's Bureau of Consular Affairs bringing delegations from foreign passport offices to the United States for briefings at passport-related agencies. Finally, the U.S. government has many efforts aimed at combating corruption overseas, such as encouraging countries to pass anticorruption laws. While these efforts are not aimed specifically at countries' passport and immigration agencies, they are intended to improve the effectiveness of all government functions. The U.S. government lacks performance measures to assess governmentwide progress in closing the key gaps in foreign partners' capacity to prevent terrorist travel overseas. None of the governmentwide or individual agency strategic documents GAO reviewed contained such measures. While components of State and DOJ have some performance measures related to information sharing, these measures do not provide decision makers with comprehensive information on governmentwide progress in enhancing foreign partners' capacity. 
GAO recommends that (1) State develop a mechanism to improve coordination of various agencies' efforts to provide fraudulent travel document training to foreign partners, and (2) NSC develop a mechanism to measure, track, and report on overall progress toward the goal of enhancing foreign partners' capacity to prevent terrorist travel overseas. State concurred with the first recommendation. NSC did not comment on the draft report.
Free over-the-air television broadcasts have been available to Americans for more than 50 years. According to the National Association of Broadcasters, the average television market includes over-the-air signals from at least seven local broadcast stations. Commercial stations may get their programming content through an affiliation with one of the top seven television networks (ABC, CBS, Fox, NBC, PAX, UPN, and WB) or they may be an independent broadcaster. Some commercial television stations are owned by large media companies or other corporations; others are owned by individuals or small companies. The United States also has 380 public television stations that receive funding from a variety of sources, including federal funding, state funding, commercial grants and donations, and private donations from individuals. Public stations tend to show more educational and arts programming, and many of these stations are affiliated with the Public Broadcasting Service (PBS). In addition to over-the-air availability, most broadcasters are carried on local cable television systems along with numerous cable programming channels. Also, satellite television providers now offer subscribers the signals of local broadcasters in approximately 40 television markets. In fact, according to FCC, more than 86 percent of television households nationwide now subscribe to some type of multichannel video programming service, such as a cable or satellite provider, rather than relying solely on over-the-air broadcast television. Since its inception, the broadcast television industry has relied on “analog” technologies to transmit over-the-air television signals. During the last few decades, however, media of all types have been transitioning to “digital” technologies. 
Because digital technologies can provide greater versatility and higher quality pictures and audio than traditional analog technologies, the broadcast television industry supported, and Congress and FCC mandated, a transition of broadcast television stations from analog to digital technology. This decision was based on the notion that a transition to digital television would bring the broadcast television industry into the 21st century with current and competitive technology, and would help to preserve for consumers the benefits of a healthy free over-the-air television service in the future. Traditional television broadcasting uses the radiofrequency spectrum to transmit analog signals—that is, signals in which motion pictures and sounds have been converted into a “wave form” electrical signal. Traditional analog signals fade with distance, so consumers living farther from a broadcast tower will experience pictures that are distorted or full of “snow.” With digital technology, the analog wave form is converted into a stream of digits consisting of zeros and ones. Although digital signals also fade over distance, because each bit of information is either a zero or a one, the digital television set or receiver can adjust for minor weaknesses in the signal to recreate the zeros and ones originally transmitted. Pictures and sound thus remain perfect unless significant fading of the signal occurs, at which point the transmission cannot be corrected and there is no picture at all. Digital technology also makes it easier to offer high definition television (HDTV). With HDTV, roughly twice as many lines of resolution are transmitted, creating a television picture that is much sharper than traditional analog television pictures. Another advantage of digital television is that “digital compression” technologies allow for more efficient use of the radiofrequency spectrum than analog technologies. 
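The threshold-and-recover behavior described above can be sketched in a few lines of Python. This is an illustrative toy, not actual receiver code: real DTV receivers use modulation and forward error correction rather than raw thresholding, and the noise model here is simply a bounded random offset. The function and parameter names are our own.

```python
# Illustrative sketch of why digital reception stays "perfect" until it
# fails entirely (the so-called cliff effect). Each bit is sent as 0.0
# or 1.0; the receiver thresholds at 0.5, so any noise smaller than
# that margin is corrected away completely.
import random

def transmit(bits, noise_level, seed=42):
    """Add bounded noise to each bit, then threshold to recover it."""
    rng = random.Random(seed)
    received = []
    for b in bits:
        sample = b + rng.uniform(-noise_level, noise_level)
        received.append(1 if sample >= 0.5 else 0)
    return received

bits = [1, 0, 1, 1, 0, 0, 1, 0]

# Mild fading: every bit is recovered exactly -- the picture is perfect.
assert transmit(bits, noise_level=0.4) == bits

# Severe fading: bits begin to flip, and once the error-correction
# margin is exceeded the receiver shows nothing at all rather than the
# degraded "snowy" picture of analog television.
print(transmit(bits, noise_level=0.9))
```

The key contrast with analog transmission is that noise below the decision threshold leaves no trace in the output at all, whereas an analog picture degrades gradually with every increment of noise.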
Using digital compression, broadcasters will have the opportunity to use the 6 megahertz of spectrum required to broadcast one analog television show to transmit four or five different digital “standard definition” television shows simultaneously. This process of using the digital spectrum to show multiple programs at once is known as multicasting. To enjoy HDTV broadcasts or to be able to see multicasts of digital signals, consumers must own a television monitor that is capable of displaying these features and a digital tuner that is capable of receiving the broadcasts. The DTV transition involves a substantial overhaul and replacement of the stations’ transmitting and studio equipment as well as the eventual replacement of consumers’ analog television sets or the attachment of “digital converter boxes” to those analog sets. Thus, building DTV stations involves a large outlay of capital and effort by the broadcast television industry. Sometimes a new broadcast tower or significant modifications to an existing tower are required for the digital antennas. Broadcasters must purchase digital transmission equipment, obtain digital programming, and acquire equipment for converting analog programming to digital. One station representative with whom we spoke noted that broadcasters must then incur the costs of running two stations simultaneously during the transition period, even though viewership and advertising revenues are likely to remain roughly the same. To facilitate the transition, Congress and FCC temporarily provided each full-power television station (both commercial and public) with another 6 megahertz of radiofrequency spectrum so that they could begin broadcasting a digital signal. A transition period was established during which broadcasters would build their DTV stations and simultaneously transmit both analog and digital signals. In 1997, FCC established a timeline for this transition period. 
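The four-or-five-programs figure above reflects simple bit-rate arithmetic. The sketch below uses the standard payload rate of a 6 megahertz digital broadcast channel under the ATSC standard (about 19.39 megabits per second); the per-program rates are illustrative assumptions, since actual rates vary with compression settings and program content.

```python
# Back-of-the-envelope multicasting arithmetic for one 6 MHz channel.
CHANNEL_PAYLOAD_MBPS = 19.39   # ATSC payload rate for a 6 MHz channel

SD_PROGRAM_MBPS = 4.0          # assumed standard-definition bit rate
HD_PROGRAM_MBPS = 15.0         # assumed high-definition bit rate

sd_programs = int(CHANNEL_PAYLOAD_MBPS // SD_PROGRAM_MBPS)
hd_programs = int(CHANNEL_PAYLOAD_MBPS // HD_PROGRAM_MBPS)

print(f"{sd_programs} standard definition programs fit in one channel")
print(f"{hd_programs} high definition program fits in one channel")
```

Under these assumed rates, a station can multicast four standard definition programs, or fit roughly one high definition program, in the same 6 megahertz that carries a single analog channel.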
By May 1, 1999, the affiliate stations of the 4 largest networks (ABC, CBS, Fox, and NBC) in the top 10 television markets in the country were to have a digital signal on the air. By November 1, 1999, the affiliates of the 4 largest networks in the top 11 to 30 television markets were to have a digital signal on the air. By May 1, 2002, all full-power commercial television stations across America are to have a DTV signal on the air. By May 1, 2003, all public stations are to be broadcasting a DTV signal as well. The few stations that missed the earlier 1999 deadlines were granted extensions by FCC. In March 2002, FCC closed an application period for stations that have May 2002 deadlines to file for extensions. FCC said it will not issue any type of blanket waiver of the deadline, but it would allow extensions on a case-by-case basis. According to FCC, it also has the authority to sanction stations that do not meet their deadlines. FCC said it is currently considering what those sanctions might be and under what circumstances the sanctions might be imposed. The goal is for the transition period to end in December 2006. By that time, the analog signals presumably are to be shut off, and Americans are to be watching DTV broadcasts on either a DTV set or on an analog set with some form of a digital converter box. The government is supposed to “repack” the digital stations within channels 2 to 51. The federal government has reallocated some of the spectrum in channels 52 to 69 for public safety needs and some for commercial uses (such as mobile phone services). The public safety spectrum is currently being licensed, and the spectrum set aside for commercial uses will be auctioned later this year. However, Congress created exceptions to the 2006 date. 
In the Balanced Budget Act of 1997, Congress stated that no analog television station license was to be extended beyond December 31, 2006, except in cases where (1) one or more of the top four network affiliates in a market is not yet on the air in digital, (2) digital converter technology is not generally available in the market, or (3) 15 percent or more of the households in the market cannot receive DTV signals. The last exception—often referred to as “the 85 percent rule”—has the potential to significantly delay the cutoff of analog signals and the turnback of some spectrum beyond 2006. This is because the 85 percent rule might rely on consumer adoption of DTV equipment, which is currently market-driven (i.e., based on consumer demand, rather than on a government mandate) and does not appear at the present time to be progressing at a rate that will reach 85 percent in the next 4 years. In this section, we address several issues related to the progress of broadcasters to date in getting digital signals on the air. In particular, we discuss (1) the status of broadcasters in building the DTV stations; (2) the amount of digital, high definition, or multicast programming that stations are showing or planning to show; and (3) the stations’ perceptions of consumer interest in DTV and how broadcasters are promoting or planning to promote DTV to consumers. As of April 12, 2002, 24 percent of commercial television stations (298 of 1,240) had completed construction of DTV stations and were broadcasting a digital signal. Most Americans now have available to them an over-the-air signal from at least 1 DTV station, and many Americans living in larger television markets have several DTV signals available to them. Only 119 of the 298 current DTV stations were mandated to be broadcasting a digital signal before May 2002. Thus, some stations have elected to build their DTV stations and begin broadcasting a digital signal before they were required to do so. 
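The logic of the 85 percent rule can be expressed as a simple threshold check. The function and household figures below are illustrative, not drawn from the statute itself.

```python
# Minimal sketch of the "85 percent rule" exception in the Balanced
# Budget Act of 1997: an analog license may be extended past December
# 31, 2006, if 15 percent or more of a market's households cannot
# receive DTV signals -- i.e., the cutoff can proceed only once at
# least 85 percent can.
def analog_cutoff_permitted(households_with_dtv, total_households):
    """True when at least 85% of the market's households can receive DTV."""
    return households_with_dtv / total_households >= 0.85

print(analog_cutoff_permitted(86, 100))  # True: cutoff may proceed
print(analog_cutoff_permitted(70, 100))  # False: extension available
```

Because this test is applied market by market, slow consumer adoption in even one market can keep that market's analog signals, and the associated spectrum, in use beyond 2006.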
As for the progress of transitioning stations, we conducted interviews with representatives from a few transitioning stations and found them to be in various stages of building their DTV facilities. For example, one station had just begun planning for the construction of its DTV station and was currently analyzing its tower and equipment requirements, while another station was almost ready to start broadcasting a digital signal, several months before the deadline. Once on the air with a digital signal, broadcasters have been given some flexibility in determining how to structure their DTV services. A station can simply duplicate the programming shown on its analog channel by “converting” it to digital, or it can provide programming actually filmed in digital, which can include HDTV. A station also can choose to multicast or to take advantage of the ability of DTV to transmit text or data, such as stock quotes or electronic newspapers. There are concerns that if broadcast stations use their digital channel largely to duplicate the programming from their analog channel, consumers would have little incentive to purchase digital television sets and cable systems would have little incentive to carry broadcasters’ DTV channels. This lack of DTV adoption could delay the goal of having broadcasters vacate the spectrum in channels 52 to 69 by the end of 2006 to make the spectrum fully available for public safety and other uses. We asked current DTV stations what services they were offering over their digital spectrum, and they responded as follows: Seventy-four percent of current DTV stations are providing some amount of HDTV content on their digital broadcast channel. These stations reported an average of 23 hours of HDTV content per week. Affiliates of CBS—one of the biggest supporters of HDTV—reported providing more HDTV programming than affiliates of other networks. CBS affiliates of current DTV stations reported an average of 33 hours per week of HDTV programming.
Affiliates of the smaller television networks (PAX, UPN, and WB) reported broadcasting no HDTV. Twenty-eight percent of current DTV stations said they are producing some of their own content in digital format, either in standard definition digital or high definition. In addition to offering high definition content, 22 percent of current DTV stations said that some of their programming included the multicasting of two or more programs simultaneously over their digital channel. We contacted a number of these stations to learn precisely what they were doing with their multicasting. We were told by several stations that they are providing a second 24-hour local weather channel or showing a local weather radar picture. Another station told us that they broadcast live feeds from several traffic cameras throughout the state to give current traffic and weather information. Our survey showed that within the first year of broadcasting in digital, commercial transitioning stations plan to do the following: Forty-three percent plan to show nothing more than content that has been converted from analog to digital. Thirty-four percent plan to provide some HDTV content. Eight percent plan to do some multicasting. Our finding that transitioning stations expect to show less digital content than current DTV stations are showing is likely because many of the current DTV stations are affiliates of the top four networks and are in major television markets. By contrast, transitioning DTV stations are more likely than current DTV stations to be unaffiliated with the top four networks or to be in smaller television markets. Unaffiliated stations have less access to the increasing amount of HDTV or other digital content that is provided by the major networks. Smaller stations sometimes rely more on syndicated shows—such as game shows, talk shows, or reruns of popular network programming—that are less likely to have been filmed in digital or high definition. 
Smaller stations also have fewer resources to buy the equipment necessary to film and produce their own digital content. In fact, only 10 percent of transitioning stations with annual revenues less than $2 million said that they expect to produce any digital content of their own within their first year of digital broadcasting. We asked current DTV stations to describe the overall interest level in digital broadcasts by the consumers in their markets. According to these stations, few consumers have a high interest in DTV. Seven percent of the stations said that consumers in their markets had no interest in DTV, and another 56 percent of the stations described overall consumer interest in their digital broadcasts as “low.” Stations that reported providing more high definition content did not report higher consumer interest than current DTV stations as a whole. Despite broadcasters’ perceptions of low consumer demand for digital and high definition television, only some of the current DTV stations reported undertaking promotion activities that have significant cost in order to promote or market their digital broadcasts. The two most prominent ways stations chose to promote their DTV channel—methods that do not involve great expense—were through a digital or high definition identifier running at the beginning of the program (52 percent) and by making information about digital programming available on the stations’ Web sites (50 percent). In addition to these methods, current DTV stations reported the following: Thirteen percent said they use advertising spots or promotions for specific shows available in high definition, and 22 percent said they advertise their DTV channel. Twelve percent said that their DTV channel is mentioned in the local television listings. Twenty-three percent said they do not promote their DTV channel. Compared with current DTV stations, only 6 percent of transitioning stations reported that they do not plan to promote their DTV channel. 
Thirty-five percent of transitioning stations said they plan to use advertising spots or promotions regarding their digital channel, 16 percent plan to advertise their high definition programming, and 33 percent said they plan to have their DTV channel mentioned in the local television listings. Transitioning stations answered our survey based on future plans and may or may not promote their DTV channels to the level they indicated on our survey. In this section, we address several issues related to the experiences of broadcasters to date in building their DTV stations. In particular, we discuss (1) the financial costs associated with building the DTV stations, (2) the problems stations reported experiencing (or the problems they expect to experience) in building the DTV stations, and (3) the various government reviews that are involved in building the DTV stations. Broadcasters must make large capital investments to begin broadcasting in digital, and many of the stations we surveyed reported problems in raising the necessary capital. We asked stations to report or estimate the approximate total cost they incurred or expect to incur in complying with the initial requirements for digital transmission—including expenses for a new tower or construction on an existing tower, transmission line, antenna, digital transmitters and encoders, consultants, licensing, and other capital expenditures. Our comparisons of reported costs by station types showed that the average reported costs per station among different types of stations were not dramatically different. Figure 1 shows that current DTV stations and larger stations on average reported somewhat higher costs than transitioning stations and smaller stations. For example, current DTV stations reported an average cost of $3.1 million per station to comply with the initial requirements for digital transmission, while transitioning stations reported an average of $2.3 million per station.
Some of the lower cost reported by transitioning stations may be due to recent rule changes by FCC that were designed to reduce the amount stations must spend to meet the initial requirements for digital transmission. For example, FCC’s new rules allow stations to build less than maximum broadcast facilities. FCC staff said they believe that the costs of meeting the requirements for building DTV stations are significantly lower than the costs reported in our survey. The amounts reported to us may be due in part to stations’ reporting their actual costs of construction, even where construction exceeded FCC’s minimum requirements. At a recent meeting of the National Association of Broadcasters, one broadcaster said he was able to go on the air for approximately $125,000 and that most of the equipment he was using could be upgraded to higher power. However, another speaker stated that for some stations, FCC’s minimum requirements are not a long-term solution, particularly for stations that plan to show HDTV, and that upgrading the equipment at a later date could be problematic. Thus, some stations prefer to spend the money initially to build their DTV stations to exceed FCC’s minimum requirements for broadcasting a digital signal. Although there were no dramatic differences in the overall costs of building DTV facilities among various types of stations, our analysis of the reported overall expenditures as a percentage of station annual revenue did show considerable differences among various types of stations. For example, among current DTV stations the overall expenditures averaged 11 percent of annual revenues, while for transitioning stations the overall expenditures averaged 63 percent of annual revenues. For stations with annual revenues below $2 million (based on all stations), the overall expenditures averaged 242 percent of annual revenues. Thus, the overall cost of building the DTV stations appears to be more burdensome for some broadcasters than for others. 
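The burden comparison above comes down to simple arithmetic: the same absolute build-out cost represents a far larger share of a small station's annual revenue. A minimal sketch, using the survey's average transitioning-station cost against two hypothetical revenue levels (the revenue figures are assumptions for illustration, not survey data):

```python
# Construction cost expressed as a percentage of a station's annual
# revenue, illustrating why identical build-out costs burden small
# stations far more than large ones.
def cost_burden(build_cost, annual_revenue):
    """Return build cost as a percentage of annual revenue."""
    return 100.0 * build_cost / annual_revenue

BUILD_COST = 2_300_000  # average transitioning-station cost from the survey

# Two hypothetical stations facing the same build-out cost:
print(f"large station: {cost_burden(BUILD_COST, 20_000_000):.0f}% of revenue")
print(f"small station: {cost_burden(BUILD_COST, 1_500_000):.0f}% of revenue")
```

For the hypothetical small station, the build-out exceeds an entire year's revenue, which is consistent with the survey finding that stations with revenues below $2 million reported expenditures averaging 242 percent of annual revenues.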
Given the significant costs reported for getting a DTV signal on the air, it is not surprising that survey respondents cited funding as one of the most common problems they had experienced or expected to experience. Although the overall costs of building the DTV stations reported by broadcasters were fairly similar, the annual revenues of stations and the funding sources available to stations differed. Thus, the problem of obtaining funding did not appear to affect all stations equally. While 14 percent of the current DTV stations said they had experienced problems in the area of funding, 55 percent of transitioning stations reported funding as a problem. This difference raises concerns about the ability of some transitioning stations to pay for construction of their DTV stations and meet the May 1, 2002, deadline for broadcasting in digital. We asked stations what sources of funding they used or expected to use to pay for the building of their DTV stations. Almost half of all commercial stations that responded reported multiple sources of funding. As shown in figure 2, the most commonly cited source of funding for both current DTV stations and transitioning stations was funding from the station owner or parent company. Over 79 percent of current DTV stations had relied, in whole or in part, on their owner or parent company to provide money for their DTV construction. For transitioning stations, 62 percent reported obtaining some amount of funding from the station owner or parent company. This difference may be explained, in part, because stations with earlier DTV deadlines were more likely to have a large corporate parent, whereas transitioning stations are somewhat less likely to be owned by a large parent company. Regarding funding sources, survey respondents also reported the following: Transitioning stations were more than twice as likely to rely on debt financing than current DTV stations. 
Forty-three percent of transitioning stations reported that they had borrowed or planned to borrow money to fund the construction of their DTV stations, while 16 percent of current DTV stations said they had relied on debt capital. Six percent of the transitioning stations said they were considering sale of the station as a way to fund the DTV transition. We met with a representative of one TV station who said that sale of the station to a larger ownership group might be the only way for the station to fund its transition to DTV. Concerns about a reduction in the number of small, independent broadcasters serving local communities could arise should such sales actually take place. Seventeen percent of transitioning stations reported that they did not know how they would completely fund the construction of the DTV station. This uncertainty raises concerns about whether these stations will be able to broadcast a digital signal on time, given that our survey was conducted only 7 months before the May 2002 deadline. We also asked stations to select the primary funding source from the funding sources that they reported using. Both current DTV stations and transitioning stations most often named funding from the station owner or parent company as their primary funding source. However, 72 percent of current DTV stations said funding from the station owner or parent was their primary source, while 45 percent of transitioning stations listed a station owner or parent as the primary source. Eleven percent of transitioning stations reported that they did not know what would be their primary funding source. Some industry executives noted that there are ways to mitigate the initial costs of the DTV transition. We asked about some of these methods in our survey, and the responses showed the following: Some stations reported sharing a broadcast tower with other stations, which can be less expensive than having an individual, private tower. 
However, we found no relationship between reported shared tower use and reported lower average overall cost for construction of the DTV station. We were told by industry executives that a temporary or “side-mount” antenna can be less expensive to mount on the broadcast tower and can be used to delay the construction of a new tower. Twenty-seven percent of current DTV stations reported having an antenna in a temporary location; 26 percent of transitioning stations said that they plan to temporarily install an antenna. FCC currently allows a broadcaster to transmit a DTV signal at less than full power. For the broadcaster, this can save money on equipment purchases and energy bills. However, broadcasting at less than full power can reduce the effective market coverage and mean that fewer consumers can receive the over-the-air digital signal. Forty-five percent of current DTV stations reported that they are operating at less than full power and full market coverage, and 50 percent of transitioning stations told us they plan to operate at less than full power when they begin broadcasting in digital. One of the key physical facilities that broadcasters must have in place is the broadcast tower, which supports the digital antenna. We were told by industry executives that some broadcasters can mount the digital antenna on their current analog tower. However, other broadcasters need to increase the height of or reinforce their current tower, while still others must construct an entirely new broadcast tower on which to install their digital antenna. We asked stations what changes they required or expected to require from their existing analog towers. There was great variance among the stations in the need for tower work. 
While 18 percent of current DTV stations and 20 percent of transitioning stations reported being able to use their current tower without modification, 21 percent of current DTV stations and 25 percent of transitioning stations reported that they needed to build an entirely new tower. Once a station determines its tower needs, it can run into various problems related to constructing the broadcast towers and other facilities needed for DTV transmission. One of the most commonly cited problems among all stations, for example, was weather. Frozen ground, wind, and snow can cause complications in tower work, particularly in the northern states during the winter months, and can lead to delays in DTV construction schedules. Of the stations answering our survey, 41 percent of current DTV stations and 57 percent of transitioning stations cited the weather as a problem that had arisen or that was expected to arise. We spoke with representatives of three tower crew companies who told us that certain types of weather require tower work to be delayed. The tower crew company representatives noted that wind is a particular problem in tower work because the wind patterns above 1,000 feet can be significantly stronger than at ground level, making the work too difficult and dangerous to attempt. We also asked one of the tower crew representatives if the May deadline—following winter—created more problems with regard to weather. The tower crew representative told us that a fall or winter deadline may have been better because May through October were the best months for tower work and tower construction. Another concern noted by many broadcasters—again related to tower work—was “manpower availability.” The digital transition has caused many stations to require tower work within a short period of time. 
Broadcasters said that there are a limited number of tower crews in the United States that are qualified to do the type of work involved in constructing or reinforcing broadcast television towers and mounting broadcast antennas. According to our survey, 30 percent of current DTV stations and 56 percent of transitioning stations cited manpower availability as a problem area or expected problem area. Despite these views by broadcasters, we were told by representatives of the three tower crew companies that, although they are currently busy and have a significant amount of tower work scheduled for the next few months, they do not feel overwhelmed by work related to the installation of digital antennas and are generally able to provide the services requested by broadcast stations. Broadcasters reported various other problems with building DTV stations, as shown in figure 3.

We also examined whether any governmental reviews were necessary during the DTV transition process and, if so, whether such reviews had been the cause of any delays for the stations. Generally, these issues fell under the licensing or review authority of various government entities. Specifically, we asked stations if issues had arisen or were expected to arise regarding the following: (1) review, permitting, or processing by FCC; (2) review or permitting by local authorities; (3) environmental review by state or local authorities; (4) review by the Federal Aviation Administration (FAA); (5) review by the Bureau of Land Management; (6) review by the National Park Service; and (7) coordination with Canadian or Mexican governments. We also asked stations whether the review took longer or was taking longer than they anticipated and whether lengthy reviews or permit processing was considered a problem area. In general, the stations responded as follows: Some stations reported needing multiple reviews by various governmental agencies. 
For example, 15 percent of current DTV stations and 30 percent of transitioning stations told us they required reviews by three or more government entities. Stations located near a border with another country may require a coordination review by the Canadian or Mexican government. Of the stations that reported they required such a review, 50 percent of current DTV stations and 73 percent of transitioning stations said the process of getting necessary approvals from Canadian or Mexican authorities had taken longer than they expected. Of the transitioning stations needing Canadian or Mexican review, 65 percent reported they had yet to resolve the international coordination issues. Review by FAA—which often must approve changes to the height of an existing tower or the construction of a new tower, in coordination with FCC—was noted by 19 percent of current DTV stations. Of those, 32 percent said the issue took longer than expected. For transitioning stations, 25 percent reported having or expecting to have an FAA review.

In this section, we address several issues related to the progress of transitioning stations in meeting the May 2002 deadline to be on the air with a digital signal. In particular, we discuss (1) the number of transitioning stations that reported they have had problems or expect problems that might keep them from meeting the deadline; (2) the lengths of extensions to the deadline that stations reported would be realistic for their situations; and (3) the dates when stations reported they would have built DTV stations if the transition were based on market forces rather than government mandate. Seventy-four percent of transitioning stations told us that they had problems or expected problems that might keep them from meeting the May 1, 2002, deadline for having a digital signal on the air. 
In particular: Eighty-five percent of transitioning stations with annual revenues of less than $2 million reported that they had problems that might keep them from meeting the May 2002 deadline. Eighty-four percent of transitioning stations outside of the largest 100 television markets reported that they had problems that might keep them from meeting the May 2002 deadline. Stations that said they might not meet the deadline reported higher incidences of all types of problems. Funding was the most common problem, cited by 66 percent of stations that might not meet the deadline (funding problems were noted by 26 percent of stations that do not expect problems with meeting the deadline). In addition, of the stations that may not meet the deadline, 64 percent reported problems with manpower availability, 55 percent reported problems with equipment availability, 59 percent reported weather-related problems, and 45 percent reported lengthy permit or review problems. In contrast, of the transitioning stations that do not expect problems with meeting the deadline, 34 percent reported problems with manpower availability, 24 percent reported problems with equipment availability, 49 percent reported weather-related problems, and 26 percent reported lengthy permit or review problems. Station network affiliation and size of the station owner (based on how many broadcast stations the owner held) had little relationship to whether a station expected problems with meeting the deadline.

In March 2002, FCC closed an application period for stations with a May 2002 deadline to apply for extensions of time to construct their digital stations. FCC is handling the stations’ applications on a case-by-case basis. FCC allowed stations that applied for an extension to note technical, legal, financial, or other reasons (e.g., natural disaster) for the extension request. Applicants had to show support for the reasons given and mention steps taken to solve or mitigate the problems. 
In our survey of broadcasters, we asked whether stations should be required to show a “good faith effort” in meeting the deadline before being granted an extension. Seventy percent of current DTV stations and 52 percent of transitioning stations thought that a station should be required to show a good faith effort.

As of April 3, 2002, FCC had received applications for extension from 810 commercial stations and had granted 476 of these stations a 6-month extension. FCC granted extensions for more than 200 of these stations on the basis of technical problems alone (e.g., equipment delays). Over 180 stations were given extensions that were based on some combination of technical, legal, financial, or other reasons. No stations were granted extensions that were based solely on financial reasons. The 334 stations not initially granted an extension were sent Letters of Inquiry by FCC in order to obtain more specific information. FCC staff said that many of the letters sought more financial and technical information with respect to finalizing DTV construction plans and that most of the letters gave stations 15 days to respond.

In our survey of broadcasters, we asked transitioning stations to estimate a “realistic extension” if FCC were to extend its deadline for them to be on the air with a digital signal. In general, smaller stations were most likely to believe that an extension of more than 2 years was realistic for them. Of the transitioning stations that expected problems with meeting the May 2002 deadline, only 19 percent considered an extension of 6 months or less to be sufficient, while 54 percent said that an extension of 2 years or more was realistic for their situation. Under commission standards, FCC staff may grant up to two extensions, each not to exceed 6 months. Further requests by a station for an extension of its DTV deadline would have to be granted at the commission level. 
From the responses to our survey, it appears that the 6-month extensions that FCC has granted so far may be insufficient for many transitioning stations and that additional rounds of applications for extension appear likely.

We asked broadcasters to estimate when they would likely have begun broadcasting a digital signal—assuming they had been given the spectrum but were not under any government deadline to transition to digital—on the basis of market forces such as competition, technology, and viewer demand. While many current DTV stations said they would have broadcast digitally by the end of 2002, most transitioning stations reported they would have begun broadcasting digitally much later, as shown in figure 4. A small percentage of stations reported that without a government mandate, they never would have chosen to transition to digital technologies.

The digital television transition timeline established by FCC included an ambitious construction schedule for DTV stations. The level of difficulty in readying digital broadcasting facilities that was reported to us by transitioning stations indicates that many stations will have problems meeting the timeline. But even after construction of all DTV stations, only part of the DTV transition will have been completed. Because of the 85 percent rule (i.e., the requirement that 85 percent of households in a market be able to receive a digital signal before the analog signals are discontinued), much of the spectrum is likely to remain encumbered by analog broadcast stations until consumers adopt the necessary digital technologies. According to our survey, however, broadcasters currently perceive little consumer demand for digital and HDTV programming. Nonetheless, it appears that the broadcasters will move forward—some more slowly than others—with building the DTV stations. 
Other market participants—cable and satellite companies, content providers, consumer electronics manufacturers, and others—also play important roles in influencing the speed of the DTV transition. FCC recently addressed the role of these other market participants as well as that of the broadcasters in a letter from Chairman Michael K. Powell to Senator Ernest F. Hollings and Representative W.J. “Billy” Tauzin. In the letter, Chairman Powell proposed voluntary industry actions to speed the DTV transition by calling for the provision of more HDTV or other value-added DTV programming, more cable carriage of DTV channels, the provision of cable set-top boxes that allow for the display of HDTV programming, and the inclusion of over-the-air DTV tuners into almost all new television receivers by the end of 2006. If embraced by the industry, these actions could help to keep the DTV transition on track since their combined effect would be to encourage consumers to adopt DTV technologies. We will examine these critical issues in our next report on the digital television transition, which we expect to issue in November 2002.

We provided a draft of this report to FCC staff for their review and comment. FCC staff believes that its Memorandum Opinion and Order on Reconsideration, which was adopted on November 8, 2001, may have a substantial impact on certain survey responses made before that date. In the order, the commission modified its rules to permit stations to adopt a more graduated approach to providing DTV service, initially operating with lower powered—and therefore less expensive—DTV facilities, while retaining the right to expand their coverage area as the transition continues to progress. We added information related to FCC’s order in this report. FCC staff also provided technical comments that were incorporated throughout this report as appropriate. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to interested congressional committees; the chairman, FCC; and other interested parties. We will also make copies available to others upon request. If you have any questions about this report, please contact me at 202-512-2834 or guerrerop@gao.gov. Key contacts and major contributors to this report are listed in appendix VIII.

To provide information on the progress in building digital television (DTV) stations and on broadcaster problems and concerns, we mailed surveys to all full-power, on-the-air commercial and public broadcasting stations. To develop our survey questions, we interviewed officials at the Federal Communications Commission (FCC) as well as officials of organizations representing industries affected by the transition to DTV. We also reviewed relevant documents, such as FCC orders and proposed rulemakings. We then conducted pretests with several individual broadcast stations to help further refine our questions, identify any unclear portions of the survey, and identify any potentially biased questions. These pretests were conducted in-person and by telephone with stations covering a number of different metropolitan areas. Two versions of our survey were developed, one for stations that had begun broadcasting a digital signal (“current DTV stations”) and one for stations that had not begun broadcasting a digital signal (“transitioning stations”). The survey questions and detailed survey results for commercial stations are contained in appendixes IV and V, and the questions and results for public stations are contained in appendixes VI and VII. To provide the population information, we acquired the MEDIA Access Pro database of BIA Financial Network, Inc., which is a private firm that specializes in broadcast industry data. 
This database provided us with the names, addresses, and other contact information of broadcast stations as well as information on such things as station size and ownership, station revenues, market size, and station operating status. We also used this database to determine which stations were commercial and which were public as well as which were broadcasting in digital and which were not. The digital broadcasting status listed in BIA’s database was combined with information from the National Association of Broadcasters’ Web site to determine which version of the survey to mail to each station. We mailed surveys to all full-power commercial and public television stations on the air at the end of September 2001. We sent a different survey to any station that indicated that we had misclassified its digital broadcasting status. As such, some of the stations that filled out the survey for digital stations began that service after September 2001. Our survey was not sent to low-power commercial broadcast stations because these stations have not been required by FCC to transition to digital technologies. In addition, the survey was not mailed to the eight stations in New York City whose broadcast towers atop the World Trade Center were destroyed in the September 11 terrorist attacks. Instead, we spoke directly to a representative of each of these stations to gather information about how the events of September 11 affected their operations generally and affected their DTV plans in particular. A discussion of the situations of the stations in New York City is provided in appendix II. We also adjusted the survey population to exclude the few stations that (1) had recently gone off the air, (2) indicated that they were not assigned a digital channel, or (3) were broadcasting outside of the United States. 
The resulting population to which we sent surveys consisted of 1,182 full-power commercial television stations and 372 full-power public television stations on the air at the end of September 2001.

We first mailed our survey in early October 2001. However, on October 15, 2001, all incoming mail to our headquarters was halted due to the receipt of letters containing anthrax by several federal agencies in the Washington, D.C., area. We received no U.S. mail for more than 2 months. On December 27, 2001, we conducted a second mailing of the survey to all stations from which we had not received a survey response before October 15, 2001. This time, surveys were mailed from and returned to our Boston field office. The second mailing went only to commercial stations due to time constraints on the research phase created by the mail stoppage. For commercial stations that may have completed and returned the survey twice, only the original survey from the October mailing was analyzed. We made a third and last attempt to contact the commercial station nonrespondents in telephone reminder calls during the first 2 weeks of March. These telephone contacts resulted in an additional 237 questionnaires from late respondents (approximately 27 percent of all commercial responses). Of the population of broadcast stations, we received 1,036 usable questionnaires from the 1,554 stations surveyed, for an overall response rate of 67 percent. We received 135 of 168 surveys from commercial current DTV stations (80 percent response rate) and 15 of 37 surveys from public current DTV stations (41 percent response rate). We received 727 of 1,014 surveys from commercial transitioning stations (72 percent response rate) and 159 of 335 surveys from public transitioning stations (47 percent response rate). We conducted two types of analyses of commercial stations to evaluate the possibility that the respondents might differ from nonrespondents. 
Although there is some evidence of differences, these are neither sufficiently consistent nor large enough to provide a basis for adjusting our survey responses. The first type of analysis directly compared two measures of the size of the responding and nonresponding stations. The nonrespondents tended to be from larger markets and from larger ownership groups. For the transitioning stations, for example, 40 percent of the nonrespondents and 31 percent of the respondents were from the top 50 markets. The second analysis compared the early responses (sent between October and February) with the later responses (sent in March). The evidence was mixed as to whether the earlier or later responding stations might have more difficulties in meeting the DTV station deadline. On the one hand, transitioning early respondents were less likely than transitioning late respondents to give the direct assessment that they might have problems in meeting the deadline (73 percent and 80 percent, respectively). On the other hand, early respondents were more likely than late respondents to report experiencing specific types of digital transition problems. For example, a funding problem was reported by 19 percent of digital early respondents versus 5 percent of digital late respondents, and 62 percent of transitioning early respondents versus 43 percent of transitioning late respondents. No definite pattern emerges from these findings, and it is unclear whether differences are due to actual differences in the station characteristics of early and later respondents or to the differing proximity to the deadline for broadcasting a digital signal. Questionnaires were mailed to station managers but were completed by station managers, station engineers, or officials of the station owner or parent company. All returned questionnaires were reviewed, and we called respondents to obtain information where clarification was needed. 
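As a cross-check, the per-group and overall response rates reported in this appendix follow from simple arithmetic on the counts given above. The sketch below is illustrative only (the group labels are our shorthand, not survey categories) and is not part of the survey methodology:

```python
# Usable responses and surveys sent, as reported in this appendix
groups = {
    "commercial current DTV": (135, 168),
    "public current DTV": (15, 37),
    "commercial transitioning": (727, 1014),
    "public transitioning": (159, 335),
}

# Per-group response rates, rounded to whole percents
rates = {name: round(100 * returned / sent)
         for name, (returned, sent) in groups.items()}

# Overall totals: 1,036 usable questionnaires out of 1,554 surveyed
total_returned = sum(r for r, _ in groups.values())
total_sent = sum(s for _, s in groups.values())
overall = round(100 * total_returned / total_sent)  # 67 percent
```

Running this reproduces the figures in the text: 80, 41, 72, and 47 percent for the four groups, and a 67 percent overall response rate.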
All data were double keyed and verified during entry, and computer analyses were performed to identify any inconsistencies or other indications of errors. Because the questionnaires were mailed to all broadcast stations in the appropriate population, percentage estimates do not have sampling errors. Other potential sources of errors associated with the questionnaires, such as question misinterpretation and question nonresponse, may be present.

This appendix focuses on the special circumstances of the broadcast television stations in New York City following the terrorist attacks of September 11, 2001. We did not mail our survey to the New York City stations, but instead conducted telephone interviews with the stations in December 2001 and January 2002. Since the attacks, the eight local broadcasters whose antennas and other equipment were located atop the twin towers of the World Trade Center have struggled to restore over-the-air service to their viewers. Of these eight stations, six said they had completed building their DTV stations by September 11 and were broadcasting digital signals. All six digital antennas were also lost in the collapse of the World Trade Center. All of the broadcasters with whom we spoke stated that restoration of their full-power analog signals was their highest priority. The destruction of the World Trade Center buildings on September 11, 2001, also destroyed the antennas, transmitters, and associated equipment of eight broadcast television stations. Lacking immediate backup transmission equipment or other immediate contingency plans, the stations ceased over-the-air broadcasts entirely on September 11. Several of the stations had direct fiber links to some or all of the cable systems on which their analog signal was carried; thus these stations were able to continue providing a signal to some portion of their cable viewers on September 11. 
In addition, several of the broadcasters were carried on the direct broadcast satellite (DBS) systems of DirecTV and EchoStar and, although in some cases the broadcast signals were momentarily disrupted, local broadcast channel service continued for satellite subscribers on September 11. Within 10 days of September 11, most of the stations said they had resumed over-the-air broadcasting from a temporary tower in Alpine, N.J. However, the broadcasters considered the move to Alpine a short-term solution. Station executives with whom we spoke said that, although they were pleased that the site was immediately available, they were disappointed to discover that signal weakness from the site meant that only about 70 percent of their viewers could be reached. Some stations attempted to increase coverage by arranging for additional cable providers to carry their signals via fiber links. However, we were told that large numbers of viewers—particularly in Brooklyn and Queens—do not subscribe to cable service. Station executives told us that fully restoring their over-the-air analog broadcasts was of the highest priority. It is the analog signal—not the digital signal—that the stations count on for their revenue stream. One executive estimated that his station’s signal is still lost to over 3 million viewers since September 11. We were told that it will take up to 3 years to achieve full analog signal restoration because each station must repair transmission lines, install new antennas, acquire backup generators, and negotiate for temporary and permanent space on rooftops and towers. An immediate need for the broadcasters was to negotiate the terms of placing antennas, transmitters, and other equipment atop the Empire State Building, which was the favored temporary location due to its more than 100-story height. By mid-October, nearly every local TV station had begun to broadcast from the roof of the Empire State Building. 
Although the Empire State Building seemed initially to be an effective substitute for the World Trade Center location, unanticipated constraints arose, including limited physical space, an aging infrastructure, and the lower height (as compared with the World Trade Center). First, we were told that space on the Empire State Building’s rooftop is severely limited. Many of New York City’s radio stations have broadcast from the rooftop for decades, and, as the rooftop is currently configured, there is little room for the TV stations to install new broadcasting equipment. Second, the station representatives said that television broadcasting is limited by engineering constraints related to the Empire State Building’s aging infrastructure. Unlike the World Trade Center buildings, the Empire State Building’s infrastructure is more than 70 years old. While considered a safe building for workers, it is nonetheless a fragile physical plant on which to place the amount of broadcasting equipment required by eight television stations. The aging infrastructure also creates wiring and powering issues. One station representative said that it simply may not be possible to wire the Empire State Building to power the necessary antennas, transmitters, and associated equipment. Third, the broadcasters used antennas perched on a 343-foot tower rising from a 110-story base when they were using the World Trade Center. The Empire State Building offers either operation from a 200-foot tower atop the somewhat lower roofline or operation from the 81st floor. In either case, nearby Manhattan buildings—even the Empire State Building itself—cause interference with the stations’ signals and prevent reception for some viewers. One executive we spoke with believed that the Empire State Building would serve more effectively as a backup transmission location. 
Another executive noted the importance of this backup role, since the events of September 11 so dramatically demonstrated the need for “transmission redundancy.” Broadcasting at less than their accustomed levels of power and sharing limited space at and near the Empire State Building’s roof, stations have continued to experience difficulty in reaching their entire audience. The Alpine site also reduces stations’ market coverage. Reaching Brooklyn and Queens has been particularly problematic because viewers in these boroughs must contend with signals that are weakened or blocked by a variety of Manhattan obstructions. One station executive told us that he is currently reaching only about 80 percent of his viewers citywide. Concomitant with securing temporary tower space at the Empire State Building is the stations’ need to find a new permanent location for their antennas. Currently, industry stakeholders are negotiating to select a location that is acceptable to all parties. Station executives with whom we spoke said that their preference is Governor’s Island, which is currently owned by the federal government and located in New York Harbor near lower Manhattan. The station executives consider the island to be a nearly optimal location because it is unused, virtually vacant, and lacks private residents who might object to the construction of broadcast towers. In addition, use of the island would allow all stations to be located together, thus obviating the need for each station to secure its own space in Manhattan proper. While station executives attempt to secure permanent broadcasting space, they must grapple with a range of budget and finance matters. 
We were told that, in particular, stations are dealing with (1) ensuring redundancy in equipment placement, which requires negotiating twice for building rent, consulting services, and other key purchases; (2) seeking reimbursement from insurers for losses directly attributable to the events of September 11; and (3) retaining high-quality programming so that affected viewers will return when the analog signal is fully restored. Station executives with whom we spoke emphasized the need for “redundancy” of their broadcast signal as a precaution against future terrorist attacks, natural disasters, or other calamities. This redundancy requires stations to invest at least twice the amount that would be required simply to replace the equipment that was destroyed on September 11, contract twice for the services of design and engineering consultants, and seek permits and negotiate rent at two distinct locations. In addition, stations are currently negotiating with their insurance companies to determine precisely what is reimbursable in the wake of the September 11 events. One station executive told us that his “complex claim” could surpass $30 million. Although some of this amount represents lost hardware, he said, some of it represents a request for reimbursement of lost revenue. The amount of lost advertising revenue is difficult to estimate and insurers have argued that the events of September 11 are not the sole cause of lower advertising revenues. Six of the eight stations had completed building their DTV stations before September 11, 2001, and all six lost their digital antennas. The other two stations were in the process of building their DTV infrastructure. Station executives with whom we spoke said that restoring full traditional analog service was their immediate priority. However, one executive noted the importance of regaining digital broadcasting capabilities in the long term. 
Ultimately, he said, viewers will want high definition content and other digital services. Another station executive mentioned that reacquiring digital capability was essential to recoup earlier financial investments in digital technologies. The eight stations’ reported costs to date on the digital transition ranged from $250,000 to $27 million. The station executives reiterated their commitment to high definition content, although they acknowledged that the viewers of New York City had yet to express widespread interest in HDTV. However, the executives anticipated that this would change as equipment prices decline and as HDTV is more aggressively promoted in coming years. Before September 11, 2001, local stations were broadcasting sports (such as the games of the New York Mets), cultural programs (such as Live from the Met), children’s shows, nature shows, and other special programming in high definition format. One station had plans for a high definition broadcast of the Tournament of Roses Parade on January 1, 2002. Station executives with whom we spoke were unable to estimate precisely when their stations might have digital signals back on the air, although one station was hoping to be broadcasting a limited digital signal by May 2002. We were told by the station executives that, in the wake of the World Trade Center attacks, FCC was cooperative, supportive, and accommodating—a full partner in helping to restore broadcast television service to the New York metropolitan area. Specifically, according to the station executives, FCC offered temporary licenses, facilitated stations’ moves to Alpine and the Empire State Building, and issued necessary waivers. One station executive said that FCC acknowledged and approved requests “in minutes, rather than days or weeks,” while another expressed appreciation that FCC had granted it temporary permission to file requests electronically.
This executive expressed satisfaction with the proactive nature of FCC’s involvement, noting that an FCC official called him within a day of the World Trade Center attacks to ask how the agency might facilitate the rebuilding process. The executives felt that other federal, state, and local government agencies have been similarly cooperative, including the U.S. Department of Commerce, the Federal Emergency Management Agency, the Federal Aviation Administration, and the Port Authority of New York & New Jersey. The federal mandate that all full-power broadcast television stations must transition to digital technologies also applies to the nation’s 380 public television stations. FCC has ordered that these stations have a digital signal on the air by May 1, 2003. We mailed surveys to all full-power, on- the-air public stations. Public stations were sent the same survey as commercial stations. Just as with the commercial stations, they were sent one version of the survey if they were already broadcasting a digital signal (“current DTV stations”) and another version of the survey if they had not begun broadcasting a digital signal (“transitioning stations”). We did one mailing to the public stations in early October 2001. For more information on our response rates, see appendix I. Our survey responses from public stations were often similar to the responses of commercial stations. In this appendix, we report mostly on areas where survey results differed from those of the commercial stations. See appendixes VI and VII for complete results from the public stations. As of April 12, 2002, according to FCC, there were 60 public stations on the air with a digital signal. Costs reported for the digital transition were $3.0 million for public current DTV stations and $2.6 million expected for public transitioning stations. 
Again, the costs are not dramatically different (the costs for commercial stations having been $3.1 million for current DTV stations and $2.3 million for transitioning stations). One of the biggest differences between public and commercial stations was the reported funding sources for building the DTV station. Commercial stations often relied on funding from the corporate parent or owner. For public stations, the most reported funding sources were state government funding, station cash reserves, federal funding or grants from the National Telecommunications and Information Administration, and fund-raising or private grants. Both public current DTV stations and transitioning stations reported that they relied heavily on state government funding sources. Current DTV stations also reported relying heavily on station cash reserves. The public stations that had already gone on the air with a digital signal— all of which chose to do so ahead of the schedule set by FCC—reported that they were often providing their viewers with high definition content. Eighty percent of public current DTV stations said they were offering some HDTV programming. Many had their digital signal on the air constantly; the stations averaged 50 hours per week of HDTV content and 66 hours per week of multicasting. Two-thirds of current DTV stations said they were producing some of their own content in digital. Of the transitioning stations, most had various plans for their digital channel, including 84 percent that said they planned some amount of HDTV and 73 percent that said they planned some amount of multicasting. Fifty-three percent plan to produce their own content in digital. As for problems that the public stations were experiencing or expecting, funding ranked as the most reported problem. Similar to the commercial stations, funding was said to be a problem by 76 percent of transitioning stations. 
Weather was reported as a problem by 57 percent of transitioning stations, and lengthy permit reviews were reported by 30 percent. Another difference from the commercial stations was the number of public transitioning stations that said they might not make the deadline for broadcasting a digital signal: 74 percent of commercial stations said this, compared with 45 percent of public stations. It is likely that the additional year given to public stations in FCC’s schedule partly explains the lower number of public stations that think they will fail to meet their deadline. There were also differences in the extensions that public stations felt they might need from FCC. Thirty-eight percent of public stations said they would need an extension of 2 years or more, compared with 54 percent of commercial stations. Lastly, public stations were more optimistic about when they would have had a DTV signal on the air had they not been given a timeline. Fifty-two percent of public transitioning stations said they would have been on the air in digital by 2006 (compared with 46 percent of commercial transitioning stations). In addition to those named above, Naba Barkakati, Jason Bromberg, Aaron Casey, Michael Clements, Michele Fejfar, James M. Fields, Rebecca Medina, Christopher Miller, Emma Quach, Kevin Tarmann, Thomas Taydus, Madhav Panwar, Mindi Weisenbloom, and Alwynne Wilbur made key contributions to this report.
U.S. broadcast television stations are now switching from analog to digital television (DTV). The transition to digital technologies was sought by many broadcasters and was mandated by Congress and the Federal Communications Commission (FCC). FCC established 2006 as the target date for ending analog transmissions—a deadline later codified by Congress. At least 24 percent of all commercial television stations are now broadcasting a digital signal. However, these stations report little viewer interest in DTV. Transitioning stations reported that funding was one of the most prevalent problems. Seventy-four percent of transitioning stations indicated that the problems they are facing are so significant that they may not be able to begin broadcasting a DTV signal by May 2002, as required. Sixty-eight percent of transitioning stations said that a realistic extension for them would be 1 year or more. Thirty-one percent of the transitioning stations that said they might miss their May 2002 deadline reported that, if the transition were driven by market forces such as competition, technology, and consumer demand, they likely would not be on the air with a digital signal until after 2010. Another 4 percent of these stations reported that without a government mandate, they likely would never transition to digital.
This section provides information on the Corps’ organizational structure, its project operations and water control manuals, and the process for formulating its operations and maintenance budget. Located within the Department of Defense, the Corps has both military and civilian responsibilities. The Corps’ civil works program is organized into three tiers: a national headquarters in Washington, D.C.; eight regional divisions that were established generally according to watershed boundaries; and 38 districts nationwide (see fig. 1). Corps headquarters primarily develops policies and provides oversight. The Assistant Secretary of the Army for Civil Works, appointed by the President, establishes the policy direction for the civil works program. The Chief of Engineers, a military officer, oversees the Corps’ civil works operations and reports on civil works matters to the Assistant Secretary of the Army for Civil Works. The eight divisions, commanded by military officers, coordinate civil works projects in the districts within their respective divisions. Corps districts, also commanded by military officers, are responsible for planning, engineering, constructing, and managing water-resources infrastructure projects in their districts. Districts are also responsible for coordinating with projects’ nonfederal sponsors, which may be state, tribal, county, or local governments or agencies. In 1969, the Corps formed the Institute for Water Resources—which is a field-operating activity outside of the headquarters, division, and district structure—to provide forward-looking analysis and research in developing planning methodologies to aid the civil works program. Specifically, the institute fulfills its mission, in part, by providing an analysis of emerging water resources trends and issues and state-of-the-art planning and hydrologic-engineering methods, models, and training. 
In 2009, the Corps established the Responses to Climate Change program under the lead of the Institute for Water Resources to develop and implement practical, nationally consistent, and cost-effective approaches and policies to reduce potential vulnerabilities to the nation’s water infrastructure resulting from climate change and variability. The Corps is responsible for operations at 707 dams that it owns at 557 projects across the country, as well as flood control operations at 134 dams constructed or operated by other federal, nonfederal, or private agencies. Each of these projects may have a single authorized purpose or serve multiple purposes, such as those identified in the original project authorization, revisions within the discretionary authority of the Chief of Engineers, or project modifications permitted under laws enacted subsequent to the original authorization. For example, the Blackwater Dam in New Hampshire has the single purpose of flood control, whereas the Libby Dam in Montana has multiple purposes, including hydropower, flood control, and recreation. These 841 dams and their reservoirs are operated according to water control manuals and their associated water control plans, which Corps regulations require to be developed. A water control manual may outline operations for a single project or a system of projects. For example, the Missouri River Mainstem Reservoir System Master Water Control Manual outlines the operations at six dams and their associated reservoirs, and the Folsom Dam Water Control Manual applies to one dam and its reservoir. Water control manuals include a variety of information the Corps uses in operating the dams, including protocols for coordinating with and collecting data from federal agencies, such as NOAA’s National Weather Service and USGS, as well as water control plans.
The water control plans, sometimes referred to as chapter 7 of the water control manuals, outline how each reservoir is to be operated and include relevant criteria, guidelines, and rule curves that define the seasonal and monthly limits of storage and guide water storage and releases at a project. According to the Corps’ engineer regulations, the Corps develops water control plans to ensure that project operations conform to objectives and specific provisions of authorizing legislation. Water control plans also generally describe how a reservoir will be managed, including how water is to be allocated between a flood control storage pool and a conservation storage pool, which is used to meet project purposes during normal and drought conditions. The bottom of a conservation storage pool is considered inactive and is designed for collecting sediment (see fig. 2). Water levels in the pools are defined based on a statistical analysis of historical rain events. For those projects that have multiple authorized purposes, water control plans attempt to balance water storage for all purposes. Corps engineer regulations require that all water control manuals—except manuals for dry reservoirs that do not fill with water unless floodwaters must be contained—have an associated drought contingency plan to provide guidance for water management decisions and responses to a water shortage due to climatological drought. These plans, which can cover more than one project, (1) outline the process for identifying and monitoring drought at a project, (2) inform decisions taken to mitigate drought effects, and (3) define the coordination needed with stakeholders and local interests to help manage water resources so they are used in a manner consistent with the needs that develop, among other things.
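The rule-curve and storage-pool concepts described above can be illustrated with a minimal sketch. The month-by-month target elevations, pool boundaries, and release rules below are invented for illustration only and are not drawn from any actual Corps water control plan:

```python
# Hypothetical sketch of a static rule curve: monthly target pool
# elevations (feet above an arbitrary datum) within the conservation
# pool. All numbers are invented for illustration only.
RULE_CURVE = {
    1: 420, 2: 420, 3: 425, 4: 435, 5: 445, 6: 450,
    7: 450, 8: 448, 9: 440, 10: 430, 11: 422, 12: 420,
}
FLOOD_POOL_TOP = 465     # top of the flood control storage pool
INACTIVE_POOL_TOP = 400  # below this, storage is reserved for sediment

def target_elevation(month: int) -> float:
    """Return the rule-curve target elevation for a given month."""
    return RULE_CURVE[month]

def release_guidance(month: int, current_elevation: float) -> str:
    """Classify the operating condition implied by the rule curve."""
    target = target_elevation(month)
    if current_elevation > FLOOD_POOL_TOP:
        return "surcharge: release per flood emergency procedures"
    if current_elevation > target:
        return "above rule curve: evacuate flood storage toward target"
    if current_elevation < INACTIVE_POOL_TOP:
        return "inactive pool: releases limited to minimum flows"
    return "within conservation pool: operate for authorized purposes"

# In October the hypothetical curve calls for drawdown ahead of the
# flood season, so a pool above the 430 ft target triggers releases:
print(release_guidance(10, 442.0))
```

An actual water control plan specifies far more detail (release schedules, downstream constraints, balancing among authorized purposes), but the basic mechanism is this comparison of observed pool elevation against a seasonally varying target.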
According to a 2014 Corps engineer regulation, water control manuals may be revised for reasons such as land use development in the project area and downstream from it, improvements in technology used to operate projects, reallocation of the water supply, new regional priorities, or changes in environmental conditions. The Corps’ engineer regulation also directs districts to include in water control manuals a provision allowing temporary deviations from a project’s approved water control plan to alleviate critical situations, such as a flood or drought, or to realize additional efficiencies without significantly affecting the project’s authorized purposes. Districts are to perform a risk and uncertainty analysis to determine the potential consequences of such a deviation. Division commanders are responsible for reviewing and approving any proposed deviations. According to the engineer regulation, deviations are meant to be temporary and, if a deviation lasts longer than 3 years, the water control manual must be revised. Our prior work has found that the Corps’ headquarters, divisions, and districts are all involved in developing the President’s budget request for the Corps. The development process spans 2 years; for example, development of the fiscal year 2018 budget began in fiscal year 2016. After receiving budget guidance from the Office of Management and Budget as well as the Assistant Secretary of the Army for Civil Works, district staff compile a list of operations and maintenance (O&M) projects necessary in their districts and submit their needs to the relevant division. O&M projects may include, among other things, water control manual revisions, dredging, replacement of dam parts, dam safety measures, or adding capacity at hydropower projects. Division staff then rank the O&M projects from all districts in the division and submit those rankings to Corps headquarters staff for review. 
Headquarters staff review the rankings to help ensure they are consistent with Corps-wide guidance and result in decisions that emphasize agency-wide priorities. Headquarters staff consolidate the O&M requests across business lines and divisions into a highest-priority grouping. Once the Corps completes its internal review of the budget request, the Assistant Secretary of the Army for Civil Works approves and submits its budget to the Office of Management and Budget for review. The Office of Management and Budget recommends to the President whether to support or change the Corps’ budget request, and the President’s budget request is transmitted to Congress. According to agency officials, the Corps conducts ongoing, informal reviews of selected water control manuals and has revised some of them, but the extent of the reviews and revisions is unclear because they were not documented or tracked. More specifically, district officials said that the Corps reviews the manuals as part of daily operations but does not document the reviews, and there is no guidance on what constitutes a review or how to document it. Further, the Corps does not track consistent information across divisions on the status of manuals to indicate revisions that were made or are needed. It is unclear to what extent the Corps has reviewed its water control manuals because district officials did not document these reviews, which, according to district officials, are informal and conducted on an ongoing basis through daily operations. A 2014 Corps engineer regulation states that water control manuals should be reviewed no less than every 10 years, so that they can be revised as necessary. Most district officials we interviewed said that they informally review the water control plan because this portion of the manual describes how projects are to be operated under different conditions to meet their authorized purposes. 
However, officials we interviewed from all 15 districts said they do not document these informal reviews because they consider such reviews to be part of the daily routine of operating projects. Because these informal reviews are not documented, knowledge of these reviews and their results may be limited to personnel directly involved with them. Officials we interviewed from four districts said that the loss of institutional knowledge posed a challenge to conducting efficient reviews of manuals. For example, officials from one district said that no Corps officials currently employed at the district had worked on developing the manual for a project, and the district had no supporting documentation of the process, so the officials did not know why prior Corps officials wrote the manual in a particular way. As a result, the officials said it took them longer to review the manual. One Corps district we reviewed had previously documented informal reviews of water control manuals. Specifically, officials we interviewed in this district said that they documented reviews of some water control manuals in 2005 as part of a district-wide effort to ensure these manuals were adequate to meet the projects’ authorized purposes since they had not been revised in a long time. According to these officials, as part of this effort, if they determined that all of the operating conditions in a manual were still current, they submitted a memorandum to their division that revalidated the manual’s water control plan. Officials from that district said they have not documented reviews of water control manuals since 2005 because they chose to focus only on those manuals they knew needed revision. However, the Corps does not have guidance on what activities constitute a review or how officials should document the results of their reviews.
Under federal standards for internal control, internal control and all transactions and other significant events are to be clearly documented in a manner that allows the documentation to be readily available for examination, such as in management directives, administrative policies, or operating manuals. Without developing guidance on what activities constitute a review of a water control manual and how to document that review, the Corps does not have reasonable assurance that its districts will consistently conduct reviews and document them to provide a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to personnel directly involved with these reviews. The Corps has revised some water control manuals; however, divisions and districts do not track consistent information about revisions to manuals, and the extent to which they have been revised—or need revision—is unclear. Corps engineer regulations state that manuals are to be revised as needed, in accordance with the regulations. Districts have revised some water control manuals for a variety of reasons, such as in response to infrastructure modifications and weather events, according to the Corps’ documents and its headquarters, division, and district officials we interviewed. For example, officials we interviewed in one district said they revised a water control manual after a flood highlighted a need to change the seasonal and monthly limits of reservoir storage when water recedes. Officials we interviewed from other districts said they revised a manual based on vulnerabilities identified through the periodic inspections they conduct of projects through the Corps’ dam safety program. District officials we interviewed said that the time and resources needed to revise manuals vary greatly, depending on the nature of the revisions and the complexity of the project, among other things.
For instance, according to a 2012 Corps engineer regulation, all revisions to a water control manual are to undergo a quality control review of the science and engineering work by district leadership. Depending on the revisions made, manuals may also undergo a technical review by division leadership and an independent external peer review by a panel of experts. For example, according to a Corps engineer regulation and division and district officials we interviewed, if the districts make substantial revisions to a manual’s water control plan, they are to complete environmental analyses required by the National Environmental Policy Act of 1969, which they said involves considerable time and coordination with other federal agencies and opportunity for public comment. District officials told us that making such substantive revisions to a manual takes more time and resources than making an administrative revision because of the additional requirements for review. Moreover, some district officials noted that the longer they defer making revisions to a manual, the more extensive and complex the changes may become, which may add time and cost to revising the manual. Officials in one district said that it cost about $100,000 to revise one section of a manual’s water control plan, which did not significantly affect other aspects of the plan. In contrast, officials in another district said that it cost over $10 million and took over 25 years to revise a manual that included a water control plan for several projects, primarily because of litigation over the revisions. Our review of division documents indicates that all eight divisions we reviewed tracked the date a manual was last revised, but officials told us that the length of time since the last revision is not necessarily indicative of whether manuals need to be revised.
According to headquarters and district officials we interviewed, water control manuals are designed to provide flexibility for a broad variety of runoff and climatic conditions. For example, headquarters officials said the rule curve in one water control manual provided guidelines for how much water operators should take out of the reservoir during October and November to meet its flood risk management target, while at the same time holding enough water to, among other things, meet its authorized purposes of hydropower and water flow for an endangered fish species. However, two knowledgeable stakeholders we interviewed said that many of the Corps’ rule curves assume that an extreme event is equally likely in any given year, which may not reflect actual conditions. These stakeholders said that the Corps should consider revising water control manuals with dynamic rule curves to account for potential changes to climate conditions, but a Corps official said that the science behind dynamic rule curves is still being developed. In addition, Corps officials said that the provisions in water control manuals that allow temporary deviations from water control plans, if necessary, provide districts with flexibility in operating projects. For example, in response to drought conditions, the Corps approved a deviation from the water control plan in December 2014 at a project in California, a deviation that allowed the Corps to temporarily retain water captured behind the dam following a rainstorm. According to officials in that district, this temporary deviation allowed them to respond to the immediate stakeholder interests in conserving water during the drought, so they did not need to revise the water control manual. Given the flexibilities provided by rule curves and temporary deviations, not all manuals need to be revised, according to Corps officials we interviewed at headquarters, divisions, and districts.
However, the extent to which water control manuals have been substantively revised, if at all, remains unknown because the divisions and districts we reviewed did not track consistent information about revisions to water control manuals to help ensure that manuals are revised in accordance with engineer regulations. For example, based on our review of Corps documents, one of eight divisions tracked whether the water control plans in its water control manuals reflected actual operations of the project, but the remaining seven divisions did not. In addition, another division tracked information about when the water control manuals in five out of six of its districts had been revised. Officials whom we interviewed from this division said they were not sure if any of the manuals in the sixth district had been reviewed because information had not been submitted by the district. Corps headquarters officials said that the Corps does not track the status of water control manual revisions agency-wide because two people in headquarters oversee all of the Corps’ water resources operational issues, among other duties, and, therefore, divisions and districts were given responsibility for tracking revisions. However, these officials said the agency is compiling information to create a central repository of water control manuals, among other things, to respond to activities set forth in an action plan for the President’s Memorandum on drought resilience. They said the repository could be used to track the status of revisions or needed revisions of manuals, but they do not currently plan to do so. Furthermore, district officials we interviewed told us they have identified certain manuals needing revision, but they have not received the O&M funds they requested to revise these manuals and documentation shows that they do not track consistent information on these manuals. 
A Corps engineer manual states that there may be reasons—such as new hydrologic data or a reevaluation of water control requirements—to revise water control manuals to reflect current operating conditions. Divisions are responsible for prioritizing the O&M funding requests they receive from all of their districts. Corps budget documents describe factors to consider for agency-wide prioritization—such as whether an item is required to meet legal mandates or would help ensure project safety (e.g., by paving a project access road)—but headquarters officials said each division may add other factors for consideration. According to our document review, one of the eight divisions tracked the priority that districts assigned to revising water control manuals when requesting O&M funds during the budgeting process, and four divisions tracked the fiscal year they proposed revising certain manuals, pending available funding. However, most district officials we interviewed said revisions to water control manuals are often a lower priority than other O&M activities, such as equipment repairs, sediment removal, or levee repairs. As a result, districts may not get funding to revise water control manuals. Moreover, Corps headquarters officials said that each division and district varies in the resources and staff it has available to conduct water control manual reviews and make revisions. For example, officials we interviewed from two districts in the same division said they do not have staff available to review water control manuals, and they have not received the funding they requested to revise their water control manuals. Corps headquarters officials said they do not track which manuals the districts have requested funds to revise—and therefore cannot prioritize these requests—because they have limited staff to accomplish water resources management activities. 
However, internal control standards in the federal government call for agencies to clearly and promptly document transactions and other significant events from authorization to completion. Without tracking which manuals need revision, it is difficult for the Corps to know the universe of projects that may not be operating in a way that reflects current conditions as called for in the Corps’ engineer manual and to prioritize revisions as needed. District officials whom we interviewed said that not revising water control manuals regularly could lead projects to operate inefficiently under changing conditions. For example, farmers downstream from one project wanted the Corps to consider changing operations so that their fields would not flood when it rained. However, officials in that district said they requested but did not receive the funds to revise the manual and could not fully address the farmers’ concerns. Officials in another district said they have requested funds to revise several manuals that they described as outdated, but because they have not received funds, they noted they were operating those projects in a way that differed from some aspects of the approved water control plans and they did not request deviations. Instead, they said they referred to handwritten notes and institutional knowledge to operate those projects. For example, officials said that due to sediment buildup in the reservoir of one project, they are operating that project 22 feet higher than the approved plan. According to a Corps engineer regulation, the Corps develops water control plans to ensure that project operations conform to objectives and specific provisions of authorizing legislation.
However, because some manuals that need revision have not been revised and, as some district officials noted, operations for certain projects differ from aspects of the approved water control plans in those manuals, the Corps lacks assurance that project operations are conforming to the objectives and specific provisions of authorizing legislation. The Corps has efforts under way to improve its ability to respond to extreme weather events. These efforts include developing a strategy to revise its drought contingency plans and studying the use of forecasts to make decisions on project operations. The Corps is also conducting research on how to better prepare operations for extreme weather. To better respond to drought, the Corps is developing a strategy to analyze drought contingency plans in its manuals and devise methods for those plans to account for a changing climate. According to a 2015 Corps report on drought contingency planning, the Corps is developing the strategy because climate change has been and is anticipated to continue to affect the frequency and duration of drought in the United States. The Corps last systematically prepared drought contingency plans in the 1980s through the early 1990s, before climate change information was widely available. These plans assumed that historic patterns of temperature, precipitation, and drought provided a reasonably accurate model of future conditions. According to the Corps’ 2015 report, the agency subsequently identified and reviewed all of its drought contingency plans. The Corps’ review found (1) that none of the plans contained information on drought projections under future climate change and (2) that it was unlikely that the plans provided an adequate guide for preparing for future droughts. As of May 2016, the Corps was conducting pilot updates of drought contingency plans at five high-priority projects to help test methods and tools for those plans to account for a changing climate.
According to the Corps’ 2015 report, these pilot projects will help the agency develop a framework for a systematic update of drought contingency plans. Corps officials said these pilots are to be largely completed by the end of calendar year 2016. The Corps has created an internal website available to all Corps officials to disseminate the results of the drought contingency plan analysis, pilot project results, and other drought-related information. In addition to completing the pilot projects, Corps officials said the agency plans to compile a list of drought contingency plan priorities by the middle of fiscal year 2017 for inclusion in the fiscal year 2018 budget. In addition to its efforts related to drought contingency plans, the Corps is studying the use of forecasting tools to determine whether water control manuals can be adjusted to improve water-supply and flood-control operations at two projects in California—Folsom Dam and Lake Mendocino. The Corps has historically used forecasts to some degree in its operations, largely by using models that create a single forecast based on the existing hydrologic data. According to Corps officials, the Folsom Dam and Lake Mendocino projects are evaluating the potential to incorporate forecasts into their operational rules, by using statistical techniques to simulate multiple, slightly different initial conditions and identify a range of potential outcomes and their probability. The use of forecasts at these projects will depend on whether the skill of the forecasts is improved to the point where they are viable in informing reservoir operations. Corps officials told us that the forecasts must be accurate in terms of space and time to allow the reservoirs to retain some water for future supply as long as the retained water can be safely released, if necessary, prior to the next storm. 
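The ensemble technique described above, in which slightly different initial conditions are simulated to identify a range of potential outcomes and their probability, can be sketched in a few lines of code. This is a hypothetical illustration only: the toy forecast model, perturbation size, growth factor, and flood threshold are invented for the example and are not taken from the Corps' actual forecasting tools.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def ensemble_inflow_forecast(base_inflow_cfs, n_members=1000,
                             perturbation_sd=0.15, growth=1.2):
    """Illustrative ensemble forecast: perturb the initial inflow
    estimate, propagate each member through a toy forecast model,
    and return the ensemble of forecast inflows (cubic feet/second)."""
    members = []
    for _ in range(n_members):
        # Slightly different initial condition for each ensemble member
        perturbed = base_inflow_cfs * (1 + random.gauss(0, perturbation_sd))
        # Toy "model": inflow grows by a fixed factor over the lead time
        members.append(perturbed * growth)
    return members

def exceedance_probability(members, threshold_cfs):
    """Fraction of ensemble members exceeding a flood threshold."""
    return sum(m > threshold_cfs for m in members) / len(members)

forecast = ensemble_inflow_forecast(base_inflow_cfs=10_000)
p_flood = exceedance_probability(forecast, threshold_cfs=13_000)
# Operators could pre-release water when p_flood crosses a chosen risk level
```

In practice, operational ensembles come from physics-based hydrologic and meteorological models rather than a toy growth factor, but the decision logic is similar: retain water when the estimated exceedance probability is low, and release early when it crosses an acceptable risk level.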
At the first project, Folsom Dam, the Corps and the Department of the Interior’s Bureau of Reclamation are constructing an auxiliary spillway project to improve the safety of the dam and reduce the flood risk for the Sacramento area. Officials said the water control manual must be updated to reflect the physical changes to the project, and the Corps is also considering incorporating forecasting into its operating rules so that, prior to storm events, water can be released earlier than would be possible without forecasting capabilities. Corps officials said the revisions to the Folsom Dam water control manual, outlining the forecast-based operations, are estimated to be completed in April 2017. For the second project, Lake Mendocino, an interagency steering committee was formed to explore methods for better balancing water supply needs and flood control by using modern forecasting observation and prediction technology. Corps officials told us the interagency committee expects to complete a preliminary viability study on the project by the end of calendar year 2017. Corps headquarters officials said that once they determine how forecasting can be incorporated into these projects, the agency may consider using forecast-based operations at other projects. Four of the five knowledgeable stakeholders we interviewed said that it would be important for the Corps to consider using such operations to help ensure efficiency and to be able to respond to changing patterns of precipitation. These views are consistent with our 2014 report on the Missouri River flood and drought of 2011 to 2013, in which we recommended that the Corps evaluate forecasting techniques that could improve its ability to anticipate weather developments for certain projects. However, Corps officials and knowledgeable stakeholders also said that the Corps faces two key challenges in implementing forecast-based operations at its reservoirs.
First, four of the five knowledgeable stakeholders we interviewed said that the Corps’ primary mission of flood control makes it difficult for the agency to accept the uncertainty that is involved with forecasting. Second, forecasting may be more complex in certain regions of the country because, according to one knowledgeable stakeholder and Corps officials, much of the rain in California is a result of atmospheric rivers, which produce rainfall that is more predictable than the convective rains experienced in the Midwest. The Corps’ Responses to Climate Change program is conducting research on adaptation measures through vulnerability assessments for inland projects and sedimentation surveys. In 2012, the Corps began an initial vulnerability assessment that focused on how hydrologic changes due to climate change may impact freshwater runoff in some watersheds. This assessment identified the top 20 percent of watersheds most vulnerable to climate change for each of the Corps’ business lines. According to Corps officials, this assessment was conducted for watersheds because actionable science was not available to conduct such an assessment at the project level. However, the Corps is working with an expert consortium of federal and academic organizations—including NOAA, the Bureau of Reclamation, USGS, the University of Washington, and the University of Alaska—to develop future projected climatology and hydrology at finer scales. This project is intended to provide the Corps and its partners and stakeholders with a consistent, 50-state strategy to further assess vulnerabilities, a strategy that will also support planning and evaluation of different adaptation measures to increase resilience to specific climate threats. According to the Corps, this consortium holds monthly meetings to review progress made by the various members.
According to Corps officials, the consortium plans to release reports in 2016 and 2017 that will enable the Corps to improve tools, methods, and guidance for finer-resolution analyses using climate-impacted hydrology. The Corps has also begun to evaluate reservoir vulnerabilities to altered sedimentation rates resulting from extreme weather and land use changes. In 2012, the Corps began conducting 15 pilot studies at various districts to test different methods and serve as a framework for adapting to climate change. Two of these pilots predicted changes in the amount of sediment in a reservoir because of changes in hydrologic variables as a result of climate change. Additionally, according to the Corps’ website, reservoirs in areas with drought conditions have experienced lower-than-normal levels of water in their conservation storage pools. These lower levels have revealed additional and unexpected sedimentation in reservoirs that could reduce the space available to store water. In 2013, the Corps developed a program to deploy airborne laser scanning systems to measure and collect data on the reservoirs in drought-affected areas. In 2015, this system was tested in California to refine the process to collect sedimentation data and modify the system for specific aircraft. According to a Corps official we interviewed in the Responses to Climate Change program, the agency plans to further refine the data collected and evaluate how these data change over time. This effort, the official told us, is also expected to provide indicators to support the analysis of future sedimentation rates based on climate changes for use in the Corps’ climate vulnerability analysis. The official said a baseline report on the Corps’ reservoir sedimentation status is expected by the end of fiscal year 2016.
This effort was highlighted in the action plan for the President’s Memorandum on Building National Capabilities for Long-Term Drought Resilience, which lays out a series of activities to fulfill the President’s drought-resilience goals. The Corps has revised some of the water control manuals used to operate its water resources projects, which serve important public purposes such as flood control, irrigation, and water supply. But district officials told us there are manuals that do not reflect the changing conditions in the areas surrounding the projects. A Corps engineer regulation states that the water control manuals should be reviewed no less than every 10 years and revised as needed. However, there is no Corps guidance on what activities constitute a review, and while district officials said they informally reviewed selected water control manuals through daily operations, they also said they do not document these reviews. Without developing guidance on what activities constitute a review of a water control manual and how to document that review, the Corps does not have reasonable assurance that its districts will consistently conduct reviews and document them to provide a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to personnel directly involved with these reviews. In addition, while the Corps has revised certain water control manuals in accordance with its engineer regulation, it does not track consistent information on revisions to its manuals. Furthermore, district officials said that they have requested funds to revise additional water control manuals as needed to reflect changing conditions, but they have not received those funds, and have not tracked consistent information about manuals needing revisions. However, internal control standards in the federal government call for agencies to clearly and promptly document transactions and other significant events from authorization to completion. 
Without tracking which manuals need revision, it is difficult for the Corps to know the universe of projects that may not be operating in a way that reflects current conditions as called for in the Corps’ engineer manual and to prioritize revisions as needed. Because some manuals that need revision have not been revised and some district officials noted that operations for certain projects differ from aspects of the approved water control plans in those manuals, the Corps lacks assurance that project operations are conforming to the objectives of authorizing legislation. To help improve the efficiency of Corps operations at reservoir projects and to assist the Corps in meeting the requirement of the Water Resources Reform and Development Act of 2014 to update the Corps’ 1992 reservoir report, we recommend that the Secretary of Defense direct the Secretary of the Army to direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to take the following two actions: develop guidance on what activities constitute a review of a water control manual and how to document that review; and track consistent information on the status of water control manuals, including whether they need revisions, and prioritize revisions as needed. We provided a draft of this report for review and comment to the Department of Defense. In its written comments, reprinted in appendix I, the department concurred with our recommendations and noted that it will take steps to address these recommendations as it updates its guidance. In its comments, the department also stated that, as of May 2016, it had updated its Engineer Regulation 1110-2-240, Engineering and Design: Water Control Management. We incorporated this information into the report. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the individual named above, key contributors to this report included Vondalee R. Hunt (Assistant Director), Cindy Gilbert, Richard Johnson, Cynthia Norris, Dan Royer, Holly Sasso, Jeanette Soares, and Michelle R. Wong.
The Corps owns and operates water resource projects, including more than 700 dams and their associated reservoirs across the country, for such purposes as flood control, hydropower, and water supply. To manage and operate each project, the Corps' districts use water control manuals to guide project operations. These manuals include water control plans that describe the policies and procedures for deciding how much water to release from reservoirs. However, many of the Corps' projects were built more than 50 years ago, and stakeholders have raised concerns that these manuals have not been revised to account for changing conditions. The Water Resources Reform and Development Act of 2014 included a provision for GAO to study the Corps' reviews of project operations, including whether practices could better prepare the agency for extreme weather. This report (1) examines the extent to which the Corps has reviewed or revised selected water control manuals and (2) describes the Corps' efforts to improve its ability to respond to extreme weather. GAO reviewed the Corps' guidance on project operations; examined agency practices; and interviewed Corps officials from headquarters, all 8 divisions, and 15 districts—selected based, in part, on regional differences in weather conditions. According to U.S. Army Corps of Engineers (Corps) officials, the agency conducts ongoing, informal reviews of selected water control manuals and has revised some of them, but the extent of the reviews and revisions is unclear because they are not documented or tracked, respectively. The Corps' engineer regulations state that water control manuals should be reviewed no less than every 10 years so that they can be revised as necessary. However, officials from all 15 districts GAO interviewed said they do not document informal reviews of water control manuals because they consider such reviews part of the daily routine of operating projects.
The Corps does not have guidance, consistent with federal standards for internal control, on what activities constitute a review or how to document the results of reviews. Without such guidance, the Corps does not have reasonable assurance that it will consistently conduct reviews and document them to provide a means to retain organizational knowledge. The Corps' engineer regulations also state that water control manuals shall be revised as needed, but the extent to which manuals have been revised or need revision remains unknown because the Corps' divisions do not track consistent information about manuals. For example, based on GAO's review of the Corps' documents, one of the eight divisions tracked whether the water control plans in its water control manuals reflected actual operations of a project, but the remaining seven did not. While the Corps has revised certain water control manuals as called for by its regulations, district officials GAO interviewed said additional manuals need revision. However, the Corps does not track consistent information on manuals needing revision, in accordance with federal internal control standards. Without tracking which manuals need revision, it is difficult for the Corps to know the universe of projects that may not be operating in a way that reflects current conditions as called for in the Corps' engineer regulations. The Corps has efforts under way to improve its ability to respond to extreme weather, including developing a strategy to revise drought contingency plans and studying the use of forecasting to make decisions on project operations. To better respond to drought, the Corps is developing a strategy to analyze drought contingency plans in its water control manuals to account for a changing climate. As of May 2016, the Corps was conducting, as a pilot, updates of five projects' drought contingency plans to help test methods and tools for future use in other plans. 
The Corps is also studying the use of forecasting tools to improve water supply and flood control operations at two projects in California by evaluating if they can retain storm water for future supply as long as the retained water can safely be released, if necessary, prior to the next storm. Knowledgeable stakeholders GAO interviewed said it is important for the Corps to consider forecast-based operations at its projects to help ensure efficient operations and to be able to respond to changing patterns of precipitation. Corps officials said the agency may consider doing so once the two California projects are completed in 2017. GAO recommends that the Corps develop guidance on what constitutes a water control manual's review and how to document it and track which manuals need revision. The agency concurred with the recommendations.
The radio frequency spectrum is the part of the natural spectrum of electromagnetic radiation lying between the frequency limits of 3 kilohertz (kHz) and 300 gigahertz (GHz). Federal agencies use spectrum to help meet a variety of missions, such as national defense, law enforcement, weather services, and aviation communication. Nonfederal entities (which include commercial companies and state and local governments) also use spectrum to provide a variety of services. For example, state and local police departments, fire departments, and other emergency services agencies use spectrum to transmit and receive critical voice and data communications, while commercial entities use spectrum to provide wireless services, including mobile voice and data, paging, broadcast radio and television, and satellite services. See figure 1 for examples of how spectrum is used. In the United States, responsibility for spectrum management is divided between NTIA and FCC. NTIA and FCC jointly determine the amount of spectrum allocated for federal, nonfederal, and shared use. After this allocation occurs, in order to use spectrum, nonfederal users must obtain a license from FCC to use specific spectrum frequencies, and federal users must obtain a similar authorization from NTIA—usually referred to as a frequency assignment. In addition to its spectrum allocation and authorization duties, NTIA serves as the President’s principal advisor on telecommunications and information policy and manages federally assigned spectrum, including preparing for, participating in, and implementing the results of international radio conferences, as well as conducting extensive research and technical studies through its research and engineering laboratory, the Institute for Telecommunication Sciences. NTIA has authority to issue rules and regulations as may be necessary to ensure the effective, efficient, and equitable use of spectrum both nationally and internationally.
It also has authority to develop long-range spectrum plans to meet future government spectrum requirements, including those of public safety. In addition to NTIA and FCC, there are other entities involved in spectrum management: The Office of Management and Budget (OMB) is involved in managing agency spectrum use through the budget process. OMB’s Circular A-11, Section 33.4, directs agencies to consider the economic value of spectrum when requesting funding to procure a spectrum-dependent system. The circular states that spectrum should generally not be considered a free resource, but rather should be considered to have value and be included, to the extent practical, in economic analyses of alternative systems. IRAC—an interagency advisory committee—was established in 1922 to coordinate federal use of spectrum and provide policy advice on spectrum issues. It is composed of representatives from 19 federal agencies that use spectrum. IRAC’s mission and placement have evolved over its 80-year history. IRAC was originally organized by federal agencies that were seeking a way to resolve issues related to federal spectrum use in a cooperative manner; its initial mission was to assist in the assignment of radio frequencies to federal users and to coordinate federal government spectrum use. In 1952, its mission was expanded to include formulating and recommending policies, plans, and actions for federal government spectrum use. Currently, IRAC is primarily involved in the frequency assignment and system certification processes and is chaired by NTIA, whose role as chair is to call IRAC meetings, establish IRAC agendas, and manage other tasks associated with the administrative operations of IRAC. The Commerce Spectrum Management Advisory Committee (CSMAC)—a federal advisory committee—provides advice and recommendations to NTIA.
This advisory committee is organized through NTIA’s Office of Policy Analysis and Development and was created following a recommendation made in President Bush’s 21st Century Spectrum Policy Initiative. CSMAC consists of approximately 25 spectrum policy experts from the private sector and it offers expertise and perspective on long-range spectrum planning, as well as other issues, and makes recommendations to NTIA to facilitate this planning. CSMAC was organized in 2006, and operates under the provisions of the Federal Advisory Committee Act. Currently, there are three ongoing spectrum-related initiatives aimed at identifying spectrum that can be made available to meet the nation’s demand for commercial wireless broadband services. These initiatives include (1) a recommendation in the National Broadband Plan, (2) a June 28, 2010, presidential memorandum, and (3) the NTIA Fast Track Evaluation. The National Broadband Plan recommends that a total of 500 MHz of federally and nonfederally allocated spectrum be made available for mobile, fixed, and unlicensed broadband use over the next 10 years. This spectrum can come from several different frequency ranges and would be made available for a variety of licensed and unlicensed flexible commercial uses, as well as to meet the broadband needs of specialized users such as public safety, energy, educational, and other users. The plan states that for spectrum from the 225 MHz to 3.7 GHz range, a total of 300 MHz should be made available for mobile flexible use within 5 years. On June 28, 2010, the President issued a memorandum directing NTIA to begin identifying federally allocated spectrum that can be made available for wireless broadband. This memorandum, in line with the National Broadband Plan, directs NTIA to collaborate with FCC to develop a plan and timetable to make the 500 MHz of federally and nonfederally allocated spectrum available for wireless broadband use in the next 10 years.
A joint request from OMB, the National Economic Council, and the White House’s Office of Science and Technology Policy asked NTIA to identify and make available federally allocated spectrum for broadband use in the next 5 years. In response to this request, NTIA analyzed federally assigned spectrum to determine the feasibility of making certain federally allocated spectrum bands available for broadband use, referred to as the Fast Track Evaluation. In addition, legislation has also been introduced in the House and Senate that would help identify or relocate spectrum for commercial uses, including (1) the Spectrum Inventory and Auction Act of 2011 and (2) the Reforming Airwaves by Developing Incentives and Opportunistic Sharing Act, which would require an inventory of existing users on prime radio frequencies; (3) the Spectrum Optimization Act, which would provide FCC with authority to conduct incentive auctions; and (4) the Spectrum Relocation Improvement Act of 2011, which would clarify the rights and responsibilities of federal users in the spectrum relocation process. As the federal agency authorized to develop national spectrum policy, NTIA has been directed to conduct several projects focused on reforming governmentwide federal spectrum management and promoting efficiency among federal users of spectrum; however, its efforts in this area have resulted in limited progress toward improved spectrum management. NTIA has authority to, among other things, establish policies concerning assigning spectrum to federal agencies, coordinate spectrum use across federal agencies, and promote efficient use of spectrum resources by federal agencies in a manner which encourages the most beneficial public use. As such, NTIA has a role in ensuring that federally allocated spectrum is used efficiently.
According to NTIA’s Redbook and agency officials, efficient use includes ensuring that federal agencies’ decisions to use spectrum to support government missions have been adequately justified and that all viable tradeoffs and options have been explored before the decision is made to use spectrum-dependent technology. It also includes ensuring that these tradeoffs are continuously reviewed to determine whether the need for spectrum has changed over time. NTIA’s primary guidance to federal agencies is technical guidance concerning how to manage assigned spectrum provided through NTIA’s Redbook. In May 2003, the Bush Administration directed NTIA to develop two strategic plans, yet it has completed only one. At that time, the Bush Administration launched the Spectrum Policy Initiative for the 21st Century, which recognized the rapidly increasing role for wireless services and demands on the use of the radio frequency spectrum. In response to this initiative, NTIA stated it would produce two plans. First, NTIA would produce a federal strategic spectrum plan to address governmentwide spectrum needs. Specifically, the Bush Administration directed federal agencies to develop individual strategic spectrum plans, which would then be compiled by NTIA, along with input from other stakeholders such as FCC and state and local governments, to form a governmentwide strategic spectrum plan. Second, NTIA was to use the federal strategic spectrum plan to assist in developing a national spectrum plan to address comprehensive federal and nonfederal spectrum needs. NTIA responded to this directive by stating it would produce a national spectrum plan and encourage state, regional, and local government agencies to synthesize long-range planning processes into a nonfederal government strategic spectrum plan, which would also provide input into the national strategic spectrum plan.
Additionally, NTIA stated that it would invite FCC to provide information regarding the future requirements of nonfederal government spectrum to be included in the national strategic spectrum plan. In March 2008, NTIA issued its report on federal spectrum use entitled The Federal Strategic Spectrum Plan. Neither NTIA nor FCC has issued the national spectrum plan that was initially scheduled for completion in December 2007. While the intent of the Federal Strategic Spectrum Plan was to identify the current and projected spectrum requirements and long-range planning processes for the federal government, we found the final plan is limited in these areas. For example, the plan does not identify or include quantitative governmentwide data on federal spectrum needs. Instead, NTIA’s plan primarily consists of a compilation of the plans submitted by 15 of the more than 60 agencies that use federal spectrum. Additionally, because the agency plans contained limited information regarding future requirements and technology needs, NTIA concluded that its “long-range assumptions are necessarily also limited.” Furthermore, NTIA’s plan did not contain key elements and best practices of strategic planning, which the Government Performance and Results Act, OMB, and we have identified as including the following elements: (1) identification of long-term goals and objectives; (2) approaches or strategies to achieve these goals and objectives; and (3) an ongoing process for revising the plan approximately every 3 years. For example, NTIA’s plan does not include a discussion of long-term goals and objectives for governmentwide spectrum management, or approaches and next steps for achieving these goals. Also, whereas strategic planning is intended to be continuous, not a static or occasional event, we found that NTIA’s strategic planning activities are not ongoing.
For example, while agencies were required to update their strategic plans every 2 years, they have not submitted plans to NTIA since November 2007, when 14 agencies submitted plans. We found that NTIA does not appear to be meeting its responsibilities as directed by President Bush’s 2004 memorandum. As shown in appendix II, NTIA discontinued many of the governmentwide projects initiated by the Spectrum Policy Initiative for the 21st Century, demonstrating a lack of continuity in its spectrum management operations. For example, NTIA was directed to issue annual progress reports on the status of the initiatives. While NTIA issued four annual progress reports from fiscal years 2005 through 2008, these reports focused on detailing the individual activities agencies have undertaken to improve their spectrum management and provided limited information on actions NTIA is taking to improve governmentwide use of spectrum. Furthermore, NTIA has not issued a progress report since fiscal year 2008. We asked NTIA officials why the agency was not implementing many of the presidential initiatives, and they said that, because of limited resources, the agency has put its strategic planning activities on hold and instead turned its focus to recent initiatives directed by the Obama Administration. Based on our conversations with NTIA officials, it is unclear when or if NTIA will resume its forward-looking strategic planning activities. See appendix II for a full list of NTIA activities focused on reforming governmentwide spectrum management and the status of the activities as of February 2011. NTIA’s primary spectrum management operations include authorizing federal frequency assignments and certifying spectrum-dependent equipment for federal users; however, these processes are primarily focused on interference mitigation as determined by IRAC and do not focus on ensuring the best use of spectrum across the federal government.
IRAC, an interagency committee of the federal government’s primary spectrum users, includes six subcommittees and several ad hoc working groups. Two IRAC subcommittees play significant roles in two of NTIA’s key processes—frequency assignment and system certification. These subcommittees, the Frequency Assignment Subcommittee (FAS), which includes representatives from the 19 IRAC agencies and FCC, and the Spectrum Planning Subcommittee (SPS), which includes representatives from 17 of the IRAC agencies, review all requests for new spectrum assignments by federal agencies and make recommendations to NTIA on the outcomes. As shown in table 1, final decisions regarding approval and use of federally allocated spectrum are made based on IRAC review and committee consensus. Currently, the process established by federal regulations for review and approval of frequency assignments and system certifications is technical in nature, focusing on ensuring that the new frequency or system that an agency wants to use will not interfere with another agency’s operations. According to NTIA officials, this focus on day-to-day spectrum activities, such as interference mitigation, is due to the agency’s limited resources. This focus, while important, allows only limited consideration of the overall best use of federally allocated spectrum. Therefore, NTIA’s current processes provide limited assurance that federal spectrum use is evaluated from a governmentwide perspective to ensure that decisions will meet the current and future needs of the agencies, as well as the federal government as a whole. Additionally, throughout these processes, there is heavy reliance on agencies to self-evaluate and report their current and future spectrum needs.
For example, in the frequency assignment process, all analysis to determine whether spectrum-dependent technology should be used is made by the agencies prior to a request for authorization; therefore, agencies are expected to have adequate expertise and resources to make these determinations. Finally, NTIA has limited ability to monitor federal spectrum use. NTIA has four programs in place to oversee agency use of spectrum, yet according to NTIA officials, only one program is actively implemented, one is conducted on an as-needed basis, and two programs have been discontinued due to lack of resources, as shown in table 2. Without ongoing programs to monitor whether agencies are using their assigned spectrum in accordance with federal regulations, NTIA is limited in its ability to track how federally allocated spectrum is being used or to detect Redbook violations. NTIA’s data management system is antiquated and lacks transparency and internal controls. NTIA collects all federal spectrum data in the Government Master File (GMF), which according to NTIA officials is an outdated legacy system that was developed primarily to store descriptive data. This system does not meet the current analytical needs of NTIA or other federal users. NTIA does not generate any data, but maintains agency-reported spectrum data in the GMF, which are collected during the frequency assignment and review processes, as shown in figure 2. NTIA’s processes for collecting and verifying GMF data lack key internal controls, including those focused on data accuracy, integrity, and completeness. We have defined internal control activities as the policies, procedures, techniques, and mechanisms that help ensure that agencies mitigate risk. Control activities such as data verification and reconciliation are essential for ensuring accountability for government resources and for achieving effective and efficient program results.
Additionally, the standards for internal controls recommend that agency systems have controls in place to ensure data accuracy, including processes for ensuring that:

- the agency’s data entry design features contribute to data accuracy;
- data validation and editing are performed to identify erroneous data;
- erroneous data are captured, reported, investigated, and promptly corrected; and
- output reports are reviewed to help maintain data accuracy and validity.

We found that NTIA’s data collection processes lack accuracy controls and do not provide assurance that data are being accurately reported by agencies. For example, the data are generally only subject to compliance reviews that ensure all reported data meet technical and database parameters (i.e., that they have the proper number of characters per field, or that the frequency requested is allocated for desired use). Throughout this process, NTIA expects federal agencies to supply accurate and up-to-date data submissions. For example, during the frequency assignment process, a federal agency must justify that the assignment will fulfill an established mission need and that other means of communication, such as commercial services, are not appropriate or available. However, NTIA does not provide agencies with specific requirements on how to justify these needs. NTIA officials told us that they rely on federal agencies to conduct any necessary analysis, such as engineering and technical studies, to support the use and need of the assignment, but agencies are not required to submit documentation verifying that the agency had completed the analysis necessary to justify the agency’s spectrum need. Moreover, NTIA does not require federal spectrum managers to validate or verify that the data or information that program offices do submit are accurate. According to NTIA officials, if NTIA or other agencies identify errors, NTIA requires the correction of these data.
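The accuracy controls described above, such as field-format checks and allocation checks with erroneous records captured and reported rather than silently accepted, can be illustrated with a small, hypothetical sketch. The field names, the 9-character serial format, and the band table below are invented for illustration and do not reflect NTIA’s actual GMF schema or the federal allocation tables.

```python
# Hypothetical sketch of automated accuracy controls of the kind the
# internal-control standards describe. Field names, the 9-character
# serial format, and the band table are invented for illustration.

# Frequency runs (in MHz) assumed allocated for each stated use
# (illustrative values only, not the actual federal allocation table).
ALLOCATED_BANDS = {
    "land mobile": [(162.0, 174.0), (406.1, 420.0)],
    "fixed microwave": [(7125.0, 8500.0)],
}

def validate_assignment(record):
    """Return a list of error strings; an empty list means the record passes."""
    errors = []
    # Data-entry design check: each field must have the proper format,
    # e.g., a serial number of exactly 9 characters.
    if len(record.get("serial", "")) != 9:
        errors.append("serial must be exactly 9 characters")
    # Validation/editing check: the frequency must be a positive number.
    freq = record.get("frequency_mhz")
    if not isinstance(freq, (int, float)) or freq <= 0:
        errors.append("frequency_mhz must be a positive number")
        return errors  # cannot check allocation without a usable frequency
    # Allocation check: the requested frequency must fall within a band
    # allocated for the stated use.
    use = record.get("use", "")
    if not any(lo <= freq <= hi for lo, hi in ALLOCATED_BANDS.get(use, [])):
        errors.append(f"{freq} MHz is not allocated for '{use}'")
    return errors

# Erroneous data are captured and reported rather than silently accepted.
errors = validate_assignment(
    {"serial": "ABC12", "frequency_mhz": 500.0, "use": "land mobile"}
)
```

In this sketch, a record with a malformed serial number and a frequency outside the bands allocated for its stated use is flagged with two errors instead of entering the database unchecked.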
However, since agencies submitting data do not have to attest to their accuracy or demonstrate the extent to which they are actually using the spectrum they hold, NTIA has limited assurance that information used to make spectrum management decisions is accurate and reliable. NTIA is developing a new data management system—the Federal Spectrum Management System (FSMS)—to replace the GMF. According to NTIA officials, the new system will modernize and improve spectrum management processes by applying modern information technology to provide more rapid access to spectrum and make the spectrum management process more effective and efficient. The GMF is only a descriptive database used to store information; it does not have analytical capabilities that agencies can use when conducting the technical studies required by the frequency assignment and certification processes. FSMS is intended to provide these analytical capabilities and will allow federal agencies to conduct more consistent and accurate analysis when developing frequency assignment proposals. Ultimately, this will facilitate more efficient use of spectrum because frequency assignments can be located closer together. Currently, the limited data available on frequency assignments results in users overestimating their needs to avoid interference; the additional data that will be made available will allow users to make more accurate judgments when determining interference. As part of the development of FSMS, the existing GMF data will be replaced with a new data structure, but development is still at an early stage and final implementation is not expected until fiscal year 2014. FSMS will increase the amount of data agencies are required to submit to NTIA, but the data submission process will remain similar to its current structure. NTIA projects FSMS will improve existing GMF data quality, but not until 2018.
According to NTIA’s FSMS transition plan, at that time data accuracy will improve by over 50 percent. However, in the meantime it is unclear whether important decisions regarding current and future spectrum needs are based on reliable data. Federal agencies and departments combined have over 240,000 frequency assignments, which are used for a variety of purposes, including emergency communications, national defense, land management, and law enforcement. Over 60 federal agencies and departments currently have federal spectrum assignments. Agencies and departments within DOD have the most assignments, followed in order by FAA, the Department of Justice, the Department of Homeland Security, U.S. Coast Guard, the Department of the Interior, the Department of Agriculture, the Department of Energy, and the Department of Commerce. These federal agencies and departments hold 93 percent of all federally assigned spectrum (see figure 3). As illustrated in figure 4, less than one-third of all frequency assignments held by federal agencies are located in the high-value range of 300.1 MHz–3 GHz (generally considered the spectrum bands located above 300 MHz and below 3 GHz). In contrast, over 48 percent of federal agencies’ frequency assignments are located in the 30–300 MHz range. The 18 IRAC agencies responding to our survey reported holding some spectrum assignments in the high-value range. Through our survey and interviews with federal agency officials, we found that federal agencies use spectrum, including high-value spectrum, for a wide array of purposes. As illustrated in figure 5, IRAC agencies reported using federally assigned spectrum for emergency communications, managing and protecting federal property or personnel, law enforcement, research, and safety.
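The frequency ranges discussed above can be expressed as a simple classifier. This is an illustrative sketch of the report’s band boundaries, not a tool NTIA uses.

```python
def classify_band(freq_mhz):
    """Bin a frequency assignment (in MHz) into the ranges used in the report."""
    if freq_mhz < 30:
        return "below 30 MHz"
    if freq_mhz <= 300:
        return "30-300 MHz"
    if freq_mhz <= 3000:
        return "300.1 MHz-3 GHz (high-value range)"
    return "above 3 GHz"

# A land mobile assignment near 410 MHz falls in the high-value range,
# while a 150 MHz assignment falls in the 30-300 MHz range.
labels = [classify_band(f) for f in (150, 410, 3600)]
```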
As an example of use in the high-value range, the Department of the Air Force reported in response to our survey using spectrum for mission-critical military training and education, testing of new equipment, research and development, and disaster response, in concert with other agencies. Federal agencies also operate a variety of spectrum-dependent systems and equipment on assigned spectrum. Within the high-value range (300 MHz–3 GHz), IRAC agencies reported operating a wide variety of systems. The most frequently reported systems in that range included land mobile radio systems, fixed microwave systems, and fixed microwave point-to-point radio systems. These systems are typically used for voice and data communication, and while they can be operated in other frequency bands outside of the high-value range, this range includes the most commonly used frequencies for these systems. NTIA has not established specific requirements for agencies to justify their needs and to validate and verify data used to evaluate their current and future spectrum needs. Federal spectrum managers we contacted reported that when applying for an assignment, they generally request field program staff to provide a description of how the frequency will be used and the type of equipment needed for the assignment. One federal agency official told us that his office has to trust that assignment application information provided by program staff is accurate. Additionally, 6 out of the 10 federal spectrum managers we contacted told us that while they review an application before submitting it to NTIA, their review primarily serves to ensure that sufficient information has been provided to meet the requirements of the Redbook.
For example, a federal agency official told us that when examining a frequency assignment application, some of the factors that he reviews are availability of spectrum to be used with a specific technology, potential for interference with other users, and compliance of frequency use with NTIA rules and regulations. As part of NTIA’s Frequency Assignment Review Program, federal spectrum users are required to modify or delete frequency assignments as needed based on the results of the 5-year reviews. However, as with the assignment process, federal spectrum managers are not required to validate or verify that the information the program offices are submitting is accurate. Seven out of 10 federal spectrum managers we contacted reported that they do not have mechanisms in place to verify the accuracy of the information collected during these processes. Similarly, 5 out of 10 federal spectrum managers reported that their agency had not conducted site visits or sample surveys to verify information in their data systems. Further, federal agency officials expressed various concerns related to the process of obtaining information from field program staff when completing assignment reviews, including concerns about (1) the future availability of spectrum, (2) inaccurate data on existing systems, and (3) resource constraints and staff coordination. In our survey, 15 out of 18 IRAC agencies reported that they will face some or great difficulty in the future meeting their critical mission needs because of insufficient spectrum. Similarly, 4 out of the 10 federal spectrum managers we contacted told us that while their agency’s spectrum needs are increasing, requesting new assignments is becoming increasingly difficult due to the limited availability of additional spectrum. 
According to these spectrum managers, field program personnel are concerned that if they say they are no longer using an assignment, it will be deleted and the program office will not be able to obtain another assignment for their future spectrum needs. In one specific example, a federal spectrum manager we contacted told us that the agency’s border security duties have increased significantly over the last few years, resulting in the agency’s increased use of and dependence on spectrum for security purposes. However, while the agency’s spectrum needs have increased, the availability of spectrum has remained the same, raising concerns about the agency’s access to sufficient spectrum to complete operational mission requirements. Of the three agencies we contacted that had previously completed site visits or in-depth reviews of assignment data, federal agency officials from two of these agencies reported uncovering significant inaccuracies in their assignment records. For example, officials from one agency told us that in a recent review of a sample of spectrum assignments in the Detroit, Michigan, metropolitan area, they found that approximately half of the agency’s assignment records were inaccurate. In another example, a spectrum manager told us that the agency conducted a review of spectrum assignments and found that 25 percent of assignments in one department (20 assignments) were no longer being used. As a result of this review, the agency returned the assignments. Because the other federal agencies we interviewed did not indicate that they had completed site surveys or in-depth reviews of their assignment records, the extent to which there are data errors in other agencies’ assignment data is unknown. One agency we met with had difficulty ascertaining whether a program office was operating a system on an assignment. In this case, the agency relocated several systems off of the 1710–1755 MHz band in 2007 as a result of the Advanced Wireless Services auction.
Shortly after the relocation, the agency was contacted by a commercial wireless carrier that had acquired the frequency, informing the agency that it still had a system transmitting on the frequency, causing interference. The agency contacted its regional program office and discovered that a transmitter at the identified location had not been actively used by the agency for years but was emitting a carrier signal, which was the source of the interference. Once the transmitter was shut off, the interference on that frequency stopped. According to the agency’s spectrum manager, regional program officials never notified the agency about the system’s existence, and as a result, there was no record of the system in the agency’s inventory list. Agency officials acknowledged that had they not been contacted by the commercial wireless carrier, they would not have known that the transmitter was still operating and sending out a carrier signal. While OMB Circular No. A-11, §33.4 and NTIA require that federal agencies obtain an authorization to use a spectrum frequency assignment before they purchase spectrum-dependent systems, 5 out of 10 agency spectrum managers that we contacted reported that their agency does not have procedures in place to monitor the agency’s procurement of spectrum-dependent systems prior to obtaining an assignment. Seven out of 10 spectrum managers explained that, due to high staff turnover, identifying the appropriate contacts in the field to complete assignment reviews can be difficult. One federal spectrum manager explained that since field program staff are generally located in multiple offices across the country, it is challenging to keep track of all the appropriate contacts in each office every 5 years. Some spectrum managers also noted that resource constraints limit their ability to validate information obtained from program staff.
Specifically, through our interviews and IRAC survey, spectrum managers told us that competing mission priorities limit their ability to verify the accuracy of information obtained from program offices. One survey respondent stated that a key challenge to completing frequency assignment reviews is balancing available spectrum management resources with other competing priorities. Another spectrum manager stated that validating and verifying the information for each assignment record, which could entail conducting site visits or surveys, would require significant spectrum management resources that federal agencies do not currently have. Five out of 10 spectrum managers reported difficulties ensuring that program offices communicated with them before purchasing a spectrum-dependent system. Federal officials from one agency told us that approximately 30 percent of the time, program offices at the agency procure spectrum-dependent equipment without first notifying the agency spectrum managers, and in some cases, before the assignment has been granted. In another example, a spectrum manager reported that a program office purchased a spectrum-dependent system to operate on an assignment before receiving authorization to operate on the frequency. The frequency assignment application was eventually denied because the program office had purchased a system that could not be operated on federally assigned spectrum, and the agency had to place the equipment in storage, where it remained unused. In response to the recent initiatives to make a total of 500 MHz of spectrum available for wireless broadband, NTIA has (1) identified 115 MHz of federally allocated spectrum to be made available for wireless broadband use within the next 5 years, referred to as the Fast Track Evaluation, and (2) developed an initial plan and timetable for repurposing additional spectrum for broadband, referred to as the 10-Year Plan. Fast Track Evaluation.
NTIA and the Policy and Plans Steering Group (PPSG) identified and recommended portions of two frequency bands, totaling 115 MHz of spectrum within the ranges of 1695–1710 MHz and 3550–3650 MHz, to be made available for wireless broadband use. In November 2010, NTIA publicly released its results. In its final report, NTIA summarized its analysis of four frequency bands: 1675–1710 MHz, 1755–1780 MHz, 3500–3650 MHz, and 4200–4400 MHz. For these bands, NTIA reviewed the number of federal frequency assignments within the band, the types of federal operations and functions that the assignments support, and the geographic location of federal use. Additionally, NTIA applied the following criteria to identify the 115 MHz of spectrum:

- the band must be able to be made available within 5 years;
- the band must be between 225 MHz and 4400 MHz;
- the decision to recommend bands for repurposing could be made prior to October 1, 2010 (therefore, due to time constraints, decisions would not require relocation of federal users); and
- opportunities for geographic or other sharing within the bands must have already been successfully proven.

Since clearing these bands of federal users and relocating incumbent federal users to new bands was not an option in the given time frame, the bands that NTIA recommended be made available will be opened to geographic sharing by incumbent federal users and commercial broadband. 10-Year Plan. By a presidential memorandum, NTIA was directed to collaborate with FCC to make available 500 MHz of spectrum over the next 10 years, suitable for both mobile and fixed wireless broadband use, and to complete, by October 1, 2010, a specific plan and timetable for identifying and making available the 500 MHz for broadband use. NTIA publicly released this report in November 2010.
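The Fast Track screening criteria listed earlier amount to a simple filter over candidate bands, which the sketch below illustrates. The candidate entries and their flag values are invented for illustration and are not NTIA’s actual evaluation data.

```python
FLOOR_MHZ, CEILING_MHZ = 225, 4400  # the Fast Track frequency limits

def passes_fast_track(band):
    """A band qualifies only if every screening criterion holds."""
    return (band["available_within_5_years"]
            and FLOOR_MHZ <= band["low_mhz"]
            and band["high_mhz"] <= CEILING_MHZ
            and band["sharing_proven"])

# Illustrative candidates; the flag values here are assumptions for the
# sake of the example, not NTIA's actual findings for these bands.
candidates = [
    {"name": "1695-1710 MHz", "low_mhz": 1695, "high_mhz": 1710,
     "available_within_5_years": True, "sharing_proven": True},
    {"name": "4200-4400 MHz", "low_mhz": 4200, "high_mhz": 4400,
     "available_within_5_years": False, "sharing_proven": False},
]
selected = [b["name"] for b in candidates if passes_fast_track(b)]
```

Under these assumed flags, only the first candidate survives the screen, mirroring how the criteria narrow many analyzed bands down to a few recommendations.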
In total, NTIA and the National Broadband Plan identified 2,264 MHz of spectrum to analyze for possible repurposing, of which 639 MHz is exclusively used by the federal government and will be analyzed by NTIA. Additionally, NTIA will collaborate with FCC to analyze 835 MHz of spectrum that is currently located in bands that are shared by federal and nonfederal users. Furthermore, NTIA has stated that it plans to seek advice and assistance from CSMAC, its federal advisory committee composed of industry representatives and experts, as it conducts analyses under the 10-Year Plan. NTIA officials said that they will prioritize the bands identified for evaluation based on the factors in table 3, with the bands that best fulfill these criteria being evaluated for potential repurposing first. Following prioritization, NTIA, with the assistance of the federal agencies, will characterize each band to determine the extent of federal use in the band. After each band is characterized, further analysis will be conducted to evaluate the technical, operational, and cost effects that repurposing would have on the federal agencies. In January 2011, NTIA announced that it had selected the 1755–1850 MHz band as the first priority for detailed evaluation under the 10-Year Plan. According to NTIA, this band was given top priority for evaluation by NTIA and the federal agencies based on a variety of factors, including industry interest and the band’s potential for commercial use within 10 years. Agencies currently operating in this band have been notified of the pending evaluation, and NTIA and PPSG have identified comparable bands for agency operations. Affected agencies are now conducting analyses to determine which of these comparable bands best meets their needs and will provide NTIA with their input in spring 2011. According to NTIA officials, a decision on how to proceed with its analysis will be made in June 2011. This is not the first time NTIA has studied these bands.
These bands were previously evaluated for reallocation, and in 2001, we reported that adequate information was not available at the time to fully identify and address the uncertainties and risks of reallocation. Affected federal agencies reported difficulties in providing the impact analysis required for NTIA’s Fast Track Evaluation, raising concerns that larger-scale future analyses may be affected. The evaluation required Navy, NOAA, and FAA to analyze and submit a significant amount of detailed impact analyses that were not readily available, according to officials with those agencies. Further, Department of the Navy and U.S. Marine Corps officials said they were required to conduct analyses based on a number of different scenarios to determine what the impact on mission performance might be of making various spectrum bands available for wireless broadband. According to one Navy official, while DOD collects a large amount of data on its spectrum-dependent systems, NTIA’s request required DOD to conduct a time-consuming, in-depth analysis of the operational impact of repurposing certain spectrum bands. NTIA officials recognize that completing this analysis required significant agency resources, but they noted that agencies were the only ones with the requisite expertise to complete the analysis. In response to our survey, the Department of the Navy and the Department of the Air Force expressed concerns over data accuracy as a result of the short time frame given to them to collect the data. One official stated that the speed of identifying available spectrum appeared more important than the accuracy of the data.
According to a DOD official, these data requests were time-consuming because they required regional spectrum managers to identify and contact all field program offices using spectrum-dependent systems in the band being analyzed to determine their use of spectrum and how their mission performance would be affected if the band were no longer available for federal use. Four IRAC agencies that completed our survey—NOAA, Department of the Air Force, Department of the Navy, and Department of the Army—expressed further concerns about the resources required to collect spectrum data for the Fast Track Evaluation. In addition to the challenges that federal agencies reported in gathering data, making the 115 MHz of spectrum available for wireless broadband will have operational effects on agencies. For example, according to NTIA’s Fast Track Evaluation, as a result of the decision to make the 1695–1710 MHz band available for wireless broadband, NOAA will have to redesign its next generation of Geostationary Operational Environmental Satellite-R series (GOES-R) satellites. According to NOAA, this redesign will increase costs and delay implementation. Additionally, NTIA does not expect DOD to experience any immediate operational impacts due to the repurposing of the 3550–3650 MHz band; however, such a repurposing based on exclusion zones will limit DOD’s future flexibility to implement new systems or operate at new locations. As table 4 illustrates, NOAA and DOD will be the primary agencies affected by the decision to make this spectrum available. Further, data- and resource-related challenges could affect implementation of NTIA’s 10-Year Plan. As experienced in previous relocations, inaccurate and incomplete data submitted by agencies can impact the transition time from federal to commercial use once reallocated spectrum has been auctioned by FCC and purchased by commercial users.
During the relocation of federal users as a result of the Advanced Wireless Services spectrum auction in 2006, according to a winning bidder of the spectrum, some agencies submitted inaccurate inventory data to NTIA and OMB, causing delays in the transition from federal to commercial use. As previously discussed, federal agencies faced resource challenges in providing NTIA data on system inventory, operational use, and operational impacts. These challenges raise concerns because the Fast Track Evaluation focused on only 115 MHz of spectrum, while NTIA is now expecting to evaluate 1,474 MHz of spectrum, meaning these challenges could be magnified. Without adequate and timely funding for agencies to conduct research and planning, the goals of the 10-Year Plan and timetable may not be achieved. In previous auctions, as part of the Commercial Spectrum Enhancement Act (CSEA), agencies have been reimbursed for their relocation costs through the Spectrum Relocation Fund. CSEA does not provide agencies with up-front funding to conduct detailed analysis during the spectrum evaluation phase. The lack of funding may delay analysis and band characterization for repurposing, as agencies have limited staff and resources to dedicate to data collection and band analysis. This can be problematic because agencies have reported significant costs associated with collecting the data and conducting the analysis requested by NTIA. For example, a DOD official told us he committed 400 staff hours to collecting operational impact data for the Fast Track Evaluation for two affected DOD systems; under the 10-Year Plan, the official expects to have to collect and prepare operational impact data for 120 systems. To address this funding issue, NTIA stated in the Fast Track Evaluation analysis that changes to expand the CSEA would be needed to provide agencies with up-front funding for analysis and planning related to repurposing.
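The scale of the data-collection burden described above can be illustrated with a back-of-the-envelope extrapolation from the DOD official’s figures; the linear-scaling assumption is ours, not DOD’s or NTIA’s.

```python
# Reported Fast Track effort: 400 staff hours for 2 affected systems.
fast_track_hours = 400
fast_track_systems = 2

# Systems the official expects to cover under the 10-Year Plan.
ten_year_systems = 120

# Naive linear extrapolation (an assumption for illustration only).
hours_per_system = fast_track_hours / fast_track_systems
projected_hours = hours_per_system * ten_year_systems
print(f"~{projected_hours:,.0f} staff hours")  # roughly 24,000 hours
```

Even under this rough assumption, the implied effort is about 60 times the Fast Track workload for a single official’s portfolio, which helps explain the agencies’ funding concerns.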
According to NTIA officials, without this funding, agencies will not be able to conduct adequate analysis for the 10-Year Plan, and currently NTIA does not have a plan to address these challenges if this funding is not made available. Industry stakeholders, including wireless service providers, representatives of an industry association, and a think tank representative we contacted expressed concerns over the usefulness of the spectrum identified by NTIA in the Fast Track Evaluation, since most of the spectrum identified (100 of the 115 MHz) is outside the range considered to have the best propagation characteristics for mobile broadband. Overall, there has been limited interest in the bands above 3 GHz for mobile broadband use because, according to industry stakeholders, there have been minimal technological developments for mobile broadband in bands above 3 GHz and no foreseeable advances in this area at this time. According to industry representatives, the 1755–1780 MHz band that NTIA considered as part of the Fast Track Evaluation has the best characteristics for mobile broadband use, and it is internationally harmonized for this use. NTIA did not select this band to be made available in the 5-year time frame due to the large number of federal users currently operating there. Recently, however, NTIA has identified it as the first band to be analyzed under the 10-Year Plan to determine if it can be made available for commercial broadband use. An industry stakeholder has stated that the 1695–1710 MHz band identified by NTIA in the Fast Track Evaluation is the second-best alternative for wireless broadband if the 1755–1780 MHz band were not made available; however, the 1695–1710 MHz band is not currently used internationally for wireless broadband, which may reduce device manufacturers’ incentive for developing technology that can be used in these frequencies. 
Additionally, an industry stakeholder expressed concern over the exclusion zones established by NTIA in the 1695–1710 MHz band, which would make the band unavailable for wireless broadband in select major cities across the United States that account for over 12 percent of the U.S. population. Similarly, one industry stakeholder noted that the exclusion zones NTIA has established for the 3550–3650 MHz band would prevent wireless broadband access along the entire East and West coasts. Considering the geographic exclusion zones and the location of the spectrum above 3 GHz, an industry stakeholder we contacted said that they are not as immediately interested in this spectrum as they are in the 1755–1780 MHz band, which, according to one industry stakeholder, may affect future spectrum auction prices. On March 8, 2011, FCC released a Public Notice seeking comment on steps the Commission can take to best promote wireless broadband deployment in the 1695–1710 MHz and 3550–3650 MHz bands. Among other things, FCC sought comment on the extent to which these bands could be made available for broadband deployment; how the conditions placed on the bands, such as the exclusion zones, could affect their usefulness for broadband deployment; and whether broadband technologies are readily available to operate on these bands. While spectrum auctions can generate substantial funds for the U.S. Treasury—for example, the Advanced Wireless Services auction that took place in September 2006 raised over $13.7 billion, a portion of which went to the U.S. Treasury—if industry participants are not as interested in the spectrum being auctioned, lower bids would be expected. Agencies are currently reimbursed with funding from auction revenue for data collection, analysis, and planning-related costs, after costs for relocating federal users have been paid.
Lack of industry interest in spectrum above 3 GHz creates concerns as to whether large amounts of spectrum will be able to meet the minimum price at auction, which the CSEA has set at 110 percent of federal relocation costs. Since relocating federal users is likely as part of the 10-Year Plan, if the reserve is not met, agencies may not be reimbursed for their data collection, analysis, and planning costs. As previously stated, NTIA officials have raised concerns that without this funding, agencies will not be able to conduct adequate analysis for the 10- Year Plan. Currently NTIA does not have a plan to address these challenges if this funding is not made available. Radio frequency spectrum is a scarce national resource that enables wireless communications services vital to the U.S. economy and to a variety of government functions, yet NTIA has not developed a strategic, governmentwide vision for managing federal use of this valuable resource. NTIA’s spectrum management authority is broad in scope, but NTIA’s efforts do not align with its authorities. Its focus is on the technical aspects of spectrum management, such as ensuring new frequency assignments will not cause interference to spectrum-dependent devices already in use, rather than on whether new assignments should be approved based on a comprehensive evaluation of federal spectrum use from a governmentwide perspective. NTIA officials noted that due to limited resources, the agency has put its strategic planning activities on hold and has instead turned its focus to recent initiatives directed by the Obama Administration. However, lacking an overall strategic vision, NTIA cannot ensure that spectrum is being used efficiently by federal agencies. Agencies are supposed to review all their spectrum assignments every 5 years and delete any assignments not essential to their missions; however, we found that these reviews are often perfunctory. 
Furthermore, agencies have concerns about not having access to sufficient spectrum in the future to meet mission-critical needs and therefore might be reluctant to relinquish any assignments for fear they will be unable to obtain more spectrum later. The absence of requirements for agencies to submit justifications for their spectrum use, combined with NTIA's limited oversight of the agencies, has led to decreased accountability and transparency in how federal spectrum is actually being used and whether the spectrum-dependent systems the agencies have in place are necessary. Federal agency officials also face challenges—such as staff turnover and resource constraints—when coordinating with field program staff to obtain the information necessary for frequency assignment applications and reviews. Given that verifying the information for each frequency assignment record could require significant spectrum management resources that federal agencies might not currently have, it would be beneficial for NTIA to consider options for a different approach to obtaining critical assignment information from the agencies, such as requiring agencies to conduct site surveys of their spectrum-dependent systems or to attest to the accuracy of the data they provide to NTIA, or making changes to the structure of the 5-year review program. As part of its spectrum management processes, NTIA depends primarily on an antiquated data collection system and does not have a mechanism in place to validate and verify the accuracy of spectrum-related data submitted by the federal agencies. The data management system also lacks transparency and internal controls, which are essential for ensuring accountability for government resources and for achieving effective and efficient results. Although NTIA is developing its new FSMS, full implementation is still years away. 
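The kinds of internal-control checks at issue here, such as flagging assignments overdue for their 5-year review or lacking a documented mission justification, could in principle be automated. The sketch below is illustrative only; the record fields, agency names, and thresholds are our assumptions, not NTIA's Government Master File schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record layout; not the actual GMF schema.
@dataclass
class FrequencyAssignment:
    assignment_id: str
    agency: str
    last_reviewed: date
    mission_justification: str  # free-text justification; may be empty

# Agencies are supposed to review each assignment every 5 years.
REVIEW_INTERVAL = timedelta(days=5 * 365)

def overdue_for_review(a: FrequencyAssignment, today: date) -> bool:
    """Flag assignments past the 5-year review requirement."""
    return today - a.last_reviewed > REVIEW_INTERVAL

def missing_justification(a: FrequencyAssignment) -> bool:
    """A simple internal-control check: every assignment should
    carry a documented mission justification."""
    return not a.mission_justification.strip()
```

A review program could run checks like these over all agency-reported records and return exception lists for follow-up, rather than relying on each agency's self-certification alone.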
In the meantime, without meaningful data validation requirements, NTIA has limited assurance that the agency-reported data it collects are accurate and complete. As NTIA begins the arduous task of identifying 500 MHz of spectrum that can be repurposed for broadband services, incomplete or inaccurate data might adversely impact NTIA's ability to make sound decisions regarding the current and future spectrum needs of agencies.

To facilitate the effective governmentwide management of federal spectrum use, the Assistant Secretary of Commerce for Communications and Information should take the following actions:

To ensure NTIA's previous efforts to develop a federal strategic plan are not diminished, develop an updated plan that includes key elements of a strategic plan, as well as information on how spectrum is being used across the federal government, opportunities to increase efficient use of federally allocated spectrum and infrastructure, an assessment of future spectrum needs, and plans to incorporate these needs in the frequency assignment, equipment certification, and review processes.

To help ensure federal agencies are managing current and future spectrum assignments efficiently, in consultation with IRAC, examine the 5-year assignment review processes and consider best practices to determine if the current approach for collecting and validating data from federal agencies can be streamlined or improved.

To provide the assurance that accurate and reliable data on federal spectrum use are collected, take interim steps to establish internal controls for management oversight of the accuracy and completeness of currently reported agency data. In developing the new Federal Spectrum Management System, incorporate adequate internal controls for validating the accuracy of agency-reported information submitted during the assignment, certification, and frequency assignment review processes. 
We provided a draft of this report to the Department of Commerce for its review and comment. Commerce provided written comments, which are reprinted in appendix IV. In commenting on the draft report, Commerce noted that as the spectrum manager for federal users, NTIA has several spectrum management duties, such as fulfilling federal agency spectrum requirements, preventing interference among federal users, and undertaking other spectrum-related assignments or initiatives related to federally assigned spectrum. According to Commerce, given funding limitations and resource constraints, NTIA must determine how to prioritize its various spectrum-related responsibilities without impairing its primary mission of responding to agencies' spectrum assignment requests in a timely manner. With respect to our recommendations, Commerce concurred with one and partially concurred with the other two. Specifically, Commerce concurred with our recommendation to examine the 5-year assignment review processes and consider best practices to determine if the current approach can be improved. Commerce stated that NTIA, in consultation with IRAC, would review the current assignment process with agencies to determine what improvements could be implemented. Commerce partially concurred with our recommendation to develop an updated strategic plan, stating that NTIA will have to weigh updating strategic plans against other spectrum management needs and directives and determine priorities. Commerce agreed that key elements of strategic planning are central to NTIA's work, but stressed that given funding limitations, NTIA must consider our recommendation in light of its other spectrum-related obligations and fundamental spectrum mission. We recognize that NTIA has been tasked with responding to other spectrum management directives, but lacking an overall strategic vision, NTIA cannot ensure that its spectrum management decisions reflect the overall best use of federally allocated spectrum. 
Moreover, without an understanding of how spectrum is being used across the federal government, NTIA cannot ensure that spectrum is being used efficiently by federal agencies or that spectrum management decisions will meet the current and future needs of the agencies, as well as the federal government as a whole. We believe a strategic plan is a key element for NTIA to respond to recent directives from the President regarding repurposing spectrum assigned to federal agencies for commercial broadband. Commerce also partially concurred with our recommendation related to establishing internal controls for management oversight of currently reported agency data, noting its concurrence to the extent that such controls could be adopted with existing and anticipated resources. Commerce stated that NTIA would take steps to establish internal controls for federal spectrum use data and work with agencies to determine what new processes could be implemented that would lead to more accurate and reliable data, including the establishment of procedures for agency validation of submitted data. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Commerce. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix V. 
This report focuses on the federal use of spectrum and examines (1) the extent to which the National Telecommunications and Information Administration's (NTIA) spectrum management oversight and policy addresses governmentwide spectrum needs, (2) how federal agencies are using assigned spectrum and the extent to which they manage their spectrum use, and (3) what steps NTIA and the federal agencies have taken to meet the requirements and expectations of the National Broadband Plan and presidential memorandum to repurpose spectrum for commercial broadband, and what challenges these efforts face. To determine the extent to which NTIA's spectrum management oversight and policy addresses governmentwide spectrum needs, we examined documents, consulted relevant spectrum literature, and conducted interviews. Specifically, we reviewed NTIA's Manual of Regulations and Procedures for Federal Radio Frequency Management (commonly referred to as the Redbook) and other documentation of NTIA's current processes, policies, and procedures to determine (1) NTIA's legal authorities for managing federal users of spectrum, (2) how NTIA works with federal agencies to manage spectrum, (3) how NTIA collects data on federal agency spectrum assignments and usage, (4) limitations, if any, with NTIA's current procedures for collecting data on federal agency spectrum assignments and usage, and (5) NTIA's actions, if any, to address these limitations. We also reviewed NTIA's data collection procedures and policies to assess the reliability of the information contained in the Government Master File (GMF) database. In addition, we interviewed representatives from NTIA's Office of Spectrum Management to gather information about their spectrum management policies and procedures. 
We also interviewed or obtained written comments from a variety of experts and industry stakeholders, including academics, industry representatives, and think-tank organizations (as shown in table 5) to obtain their views on options available for increasing the efficiency of federal spectrum use and management and associated tradeoffs. We selected the experts and industry stakeholders to interview based on prior published literature, their recognition in and affiliation with the spectrum management industry, and recommendations from NTIA and other stakeholders. Finally, we conducted a literature review of spectrum studies. Our literature search covered studies published from 2005 onward and was largely drawn from major electronic databases in telecommunications, academic, economics, and other fields (e.g., SNL Kagan, EconLit, Academic OneFile, ProQuest, and other databases) and from our past work on spectrum-related issues. We used the studies obtained from this literature review to obtain background information on spectrum issues. To identify how federal agencies use assigned spectrum and the extent to which agencies manage their spectrum use, we conducted a Web-based survey of all 19 Interdepartment Radio Advisory Committee (IRAC) federal agency representatives. We surveyed federal agencies on the IRAC because these agencies collectively hold over 90 percent of all federally assigned spectrum. The survey was conducted from November 1, 2010, to January 21, 2011. The survey included questions on (1) how federal agencies use spectrum assignments; (2) federal agency interaction with NTIA; (3) federal agencies' spectrum management policies and procedures; (4) the extent to which federal agencies share spectrum with other users and use commercial services; and (5) federal agencies' views on the extent to which agencies have the resources and information they need to manage their spectrum. The results of our survey can be found in appendix III. 
We received completed responses from 18 of the 19 IRAC representatives, for a 95 percent response rate. We did not receive a completed survey from the Department of State IRAC representative despite our multiple attempts to obtain the information. Because we selected a nonprobability sample of federal agencies with assigned spectrum to survey, the information we obtained from the survey may not be generalized to all federal agencies with assigned spectrum. However, because the IRAC member agencies that we included in our survey sample hold the vast majority of all federally assigned spectrum, the information we gathered from these agencies provided us with a general understanding of federal agencies' spectrum management policies. In addition, we took steps in the development of the survey, the data collection, and the data analysis to minimize nonsampling errors. For instance, a survey specialist designed the survey, and the draft survey was pre-tested with IRAC representatives from three federal agencies. We conducted these pre-tests to ensure that (1) the questions and possible responses were clear and thorough, (2) terminology was used correctly, (3) questions did not place an undue burden on the respondents, (4) the information was feasible to obtain, and (5) the questionnaire was comprehensive and unbiased. On the basis of the feedback from the three pre-tests we conducted, we made changes to the content and format of the survey questions. To supplement data obtained from the survey and to gather in-depth information on the roles and responsibilities of federal agencies in managing their assigned spectrum, we obtained documents from and conducted interviews with a sample of federal agencies to provide detailed examples of how federal agencies are managing their spectrum. 
We prepared comprehensive profiles for each of these agencies, which included data from our IRAC survey, our review of federal agency planning documents (including federal agencies' spectrum management policies and procedures and strategic spectrum plans), other literature, and structured interviews with spectrum management officials at selected federal agencies. The agencies we met with included the Department of Defense, Department of Homeland Security, Department of Labor, Environmental Protection Agency, National Oceanic and Atmospheric Administration, the U.S. Coast Guard, Federal Aviation Administration, Health and Human Services, Housing and Urban Development, and the Department of the Treasury. We selected federal agencies for our comprehensive profiles to achieve a mix of the following characteristics: large spectrum holdings (more than 5,000 assignments) and small spectrum holdings (fewer than 1,000 assignments); IRAC and non-IRAC member agencies, to ensure that we had representative views from both groups; and assignments located in different spectrum bands and used for different mission needs. We also consulted internal stakeholders, experts, associations, and NTIA officials to assist us in identifying potential agencies to interview. Although using these criteria allowed us to obtain information from a diverse mix of federal agencies, the findings from our in-depth profiles cannot be generalized to all federal agencies because they were selected as part of a nonprobability sample. To determine what steps NTIA and federal agencies have taken to meet the requirements and expectations of the June 28, 2010, presidential memorandum and what challenges these efforts will face, we reviewed pertinent documents related to their efforts, such as NTIA's Assessment of Spectrum Bands That Could Possibly be Repurposed for Wireless Broadband (referred to as the Fast Track Evaluation) and 10-Year Plan. We also conducted interviews with NTIA and federal agency officials. 
Through our interviews, we collected up-to-date information on actions being taken to make spectrum available for wireless broadband, including information on what criteria NTIA is using to make these decisions, how NTIA and federal agencies are collaborating on identifying spectrum, and what potential challenges they may face in reallocating federal spectrum. We also contacted four wireless service providers to obtain their viewpoints and opinions on (1) NTIA's process and methodology for identifying additional spectrum to be made available for commercial broadband use, (2) the level of private sector demand for the spectrum identified by NTIA, and (3) the potential value of spectrum that NTIA has identified for analysis as part of its Fast Track Evaluation and 10-Year Plan. We conducted this performance audit from May 2010 to April 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Federal Spectrum Management System program, reactivated in January 2008, will utilize advanced information technology to develop a Web-based process for preparing and processing applications for system certification and frequency assignments. It will consolidate existing paper-based and multiple software systems, including Spectrum XXI and EL-CID. The questions we asked in our survey of IRAC agencies are shown below. Our survey comprised closed- and open-ended questions. In this appendix, we include all the survey questions and aggregate results of responses to the closed-ended questions; we do not provide information on responses provided to the open-ended questions. 
For a more detailed discussion of our survey methodology see appendix I.

1. Which Interdepartment Radio Advisory Committee (IRAC) agency do you represent?
2. What component agency or department do you work for?
3. What is your job title?
4. Please briefly describe your key responsibilities as they relate to spectrum management.
5. Do you have other responsibilities in addition to spectrum management? If so, please describe.
6. How long have you been working in federal spectrum management? (Please include in your estimate experience throughout your career, not just in your current position.)

Federal Agency Frequency Assignments:

7. Currently, how many frequency assignments in each of the following frequency band ranges does the IRAC agency you represent have?
8. For what general purpose does the IRAC agency you represent use spectrum assigned in the 300 MHz to 3 GHz range? If you do not have any spectrum assigned in the 300 MHz to 3 GHz range, please skip to Q9.
9. For the IRAC agency you represent, which of the following usage categories has your assigned spectrum been designated? (select one for each row)

The next series of questions asks about the types of technologies your agency operates within specific spectrum band ranges.

10. For the IRAC agency you represent, please indicate whether your agency operates fixed microwave systems within any of these frequency ranges. (select one for each row)
11. For the IRAC agency you represent, please indicate whether your agency operates fixed transportable systems within any of these frequency ranges. (select one for each row)
12. For the IRAC agency you represent, please indicate whether your agency operates land mobile radio systems within any of these frequency ranges. (select one for each row)
13. For the IRAC agency you represent, please indicate whether your agency operates maritime mobile radio systems within any of these frequency ranges. (select one for each row)
14. For the IRAC agency you represent, please indicate whether your agency operates fixed microwave point-to-point radio systems within any of these frequency ranges. (select one for each row)
15. For the IRAC agency you represent, please indicate whether your agency operates digital microwave systems within any of these frequency ranges. (select one for each row)
16. For the IRAC agency you represent, please indicate whether your agency operates satellite systems within any of these frequency ranges. (select one for each row)
17. For the IRAC agency you represent, what is/was the federal agency's
18. For the IRAC agency you represent, how much did you pay NTIA in administrative fees for each of the following years?
19. In general, how satisfied or dissatisfied is the IRAC agency you represent with the following resources available at your agency to manage spectrum? (select one for each row)
20. What additional resources, if any, would the IRAC agency you represent like to have to manage your spectrum?

NTIA Guidance and Coordination:

21. Excluding the guidance you received from NTIA's Manual of Regulations and Procedures for Federal Radio Frequency Management (the Redbook), how satisfied or dissatisfied are you with the current quality of the other NTIA guidance you receive to manage your federal spectrum? (select one for each row)
22. To what extent, if at all, does the IRAC agency you represent coordinate with NTIA on the following spectrum management issues? (select one for each row)
23. What comments or concerns, if any, do you have with NTIA's efforts to identify and make available 500 MHz of spectrum suitable for both mobile and fixed wireless broadband use?

Agency Management and Planning Processes and Procedures:

24. Does the IRAC agency you represent have internal policies, protocols, or procedures in place to complete the following spectrum management activities? (select one for each row)
25. Within the last 5 years, how many applications for frequency assignments has your agency submitted to the Frequency Assignment Subcommittee (FAS)?
26. Within the last 5 years, how many spectrum certification applications for major spectrum dependent systems has your agency submitted to the Spectrum Planning Subcommittee (SPS)?
27. What factors did the IRAC agency you represent consider when determining whether to classify a system as a "major spectrum dependent system" requiring a spectrum certification review?
28. To what extent does the IRAC agency you represent rely on unlicensed spectrum? (select one for each row)
29. When did the IRAC agency you represent last complete a review or analysis of your future spectrum needs? (select one for each row)
30. How much, if at all, do you see your agency's need for spectrum increasing in the next 2-3 years? (select one for each row)
31. In your opinion, will your agency have difficulty in the future meeting its critical mission needs because of insufficient spectrum? (select one for each row)
32. What factors does the IRAC agency you represent consider when making decisions about how much spectrum the agency will need in the future?
33. On average, how often is your agency able to meet the 5-year review requirement of your agency's spectrum frequency assignments reflected in the Government Master File Database?
34. Please describe below some of the general challenges, if any, that your agency faces in reviewing your spectrum frequency assignments reflected in the Government Master File Database by the 5-year deadline as required by NTIA:
35. During the last year, approximately how many modifications to an existing spectrum frequency assignment did the IRAC agency you represent make?
36. Please describe below the general reasons why your agency modified an existing spectrum frequency assignment during the last year:
37. During the last year, approximately how many deletions to an existing spectrum frequency assignment did the IRAC agency you represent make?
38. Please describe below the general reasons why your agency deleted an existing spectrum frequency assignment during the last year:
39. Does the IRAC agency you represent currently conduct measurements of any of the following types of usage? (select one for each row)
40. Which of the following reasons explains why your agency does not conduct spectrum usage measurements? (select one for each row)

Spectrum sharing and use of commercial services:

41. Does the IRAC agency you represent currently share spectrum with any of the following users? (select one for each row)
42. If your agency shares spectrum, please provide examples.
43. If your agency shares spectrum, how much of an influence, if any, were the following factors in the agency's decision to share spectrum? (select one for each row)
44. Does the IRAC agency you represent utilize any of the following spectrum sharing technologies? (select one for each row)
45. Please describe below the challenges, if any, that impact your agency's ability to use technologies that promote spectrum sharing (such as software defined radios, dynamic frequency selection devices, cognitive radios, or trunked radio systems):
46. If your agency does not share spectrum, how much of an influence, if any, were the following factors in the agency's decision to not share spectrum? (select one for each row)
47. Does the IRAC agency you represent currently rely on commercial network service providers to fulfill any of the following services for mission critical needs and/or administrative needs? (select one for each row)
48. How much of an influence, if any, were the following factors in the agency's decision to use commercial services to provide your spectrum related needs? (select one for each row)
49. How much of an influence did the following concerns have in the agency's decision not to use commercial network services for mission critical needs? (select one for each row)
50. If you have any additional comments or views regarding federal spectrum management issues that you'd like to share with us, please do so below.

In addition to the contact named above, Sally Moino, Assistant Director; Amy Abramowitz; Tida Barakat; Richard Brown; Colin Fallon; Nick Jepson; Maria Mercado; Josh Ormond; Kelly Rubin; Andrew Stavisky; Hai Tran; and Mindi Weisenbloom made key contributions to this report.
Radio frequency spectrum enables vital wireless communications services used by the federal government, businesses, and consumers. Spectrum capacity is necessary for wireless broadband (high-speed Internet access) and broadband deployment will boost the nation's capabilities in many important areas. As the demand for spectrum continues to increase, there is concern about adequate access to meet future needs. This requested report examines (1) how the National Telecommunications and Information Administration (NTIA) is managing spectrum needs of federal agencies, (2) how federal agencies are using and managing assigned spectrum, and (3) what steps NTIA has taken to meet recent initiatives aimed at making spectrum available for broadband. GAO reviewed NTIA's spectrum management documents; surveyed the 19 federal agencies comprising the Interdepartment Radio Advisory Committee; and interviewed NTIA officials and industry and academic experts. NTIA is responsible for governmentwide federal spectrum management, but its efforts in this area have been limited. In 2003, the President directed NTIA to develop plans identifying federal and national (both federal and nonfederal) spectrum needs, and in 2008, NTIA issued the federal plan. GAO found this plan has several limitations, does not identify governmentwide spectrum needs, and does not contain key elements and best practices of strategic planning. NTIA has yet to issue the national plan. Furthermore, NTIA's primary spectrum management operations do not focus on governmentwide needs. Instead NTIA depends on agency self-evaluation of spectrum needs and focuses on interference mitigation, with limited emphasis on holistic spectrum management. Lacking a strategic vision, NTIA cannot ensure that spectrum is being used efficiently by federal agencies. 
Additionally, NTIA's data management system is antiquated and lacks internal controls to ensure the accuracy of agency-reported data, making it unclear if decisions about federal spectrum use are based on reliable data. NTIA is developing a new data management system, but full implementation of the system is years away. Federal agencies use spectrum for many purposes such as emergency communications and national defense, and NTIA requires the agencies to periodically evaluate their current and future spectrum needs. Agencies are supposed to ensure spectrum assignments fulfill established mission needs; however, NTIA does not have specific requirements for agencies to justify their spectrum assignments or validate data used for these evaluations. Consequently, NTIA has limited assurance that the data used to make spectrum management decisions are accurate. Federal agencies rely heavily on their program offices to obtain data for the required evaluations and often face challenges, such as resource constraints and staff turnover, when coordinating with field program staff. Given that validating spectrum assignments could require significant agency resources, it would be beneficial for NTIA to consider options for a different approach to obtain and validate critical spectrum assignment information from the agencies, such as requiring agencies to conduct site surveys or attest to the accuracy of data they submit. In response to recent initiatives, NTIA has taken steps to identify spectrum that could be made available for broadband use. First, NTIA evaluated various spectrum bands and identified 115 megahertz of spectrum that could be made available for broadband within the next 5 years based on criteria it developed. Second, NTIA developed an initial plan and timetable for evaluating and repurposing additional spectrum for broadband use in 10 years. 
Affected federal agencies--that is, those agencies operating devices in the spectrum bands being evaluated--encountered difficulties providing NTIA with the necessary data and analyses during the most recent evaluation. For example, according to the affected agencies, they were required to analyze and submit a significant amount of detailed impact analyses that were not readily available. Agencies will likely continue to face challenges providing such analyses to NTIA in the future as NTIA begins evaluating a larger number of spectrum bands for possible broadband use in the next 10 years. NTIA should develop an updated strategic plan, examine its assignment review processes to determine if the current approach can be improved, and establish internal controls to ensure the accuracy of agency-reported data. The Department of Commerce concurred with GAO's recommendation to examine the review processes and, citing competing priorities, partially concurred with the remaining two.
With the passage of ATSA in November 2001, TSA assumed from FAA the majority of the responsibility for securing the commercial aviation system. Under ATSA, TSA is responsible for ensuring that all baggage is properly screened for explosives at airports in the United States where screening is required, and for the procurement, installation, and maintenance of explosive detection systems used to screen checked baggage for explosives. ATSA required that TSA screen 100 percent of checked baggage using explosive detection systems by December 31, 2002. As it became apparent that certain airports would not meet the December 2002 deadline to screen 100 percent of checked baggage for explosives, the Homeland Security Act of 2002 in effect extended the deadline to December 31, 2003, for noncompliant airports. Prior to the passage of ATSA in November 2001, only limited screening of checked baggage for explosives occurred. When this screening took place, air carriers had operational responsibility for conducting the screening, while FAA maintained oversight responsibility. With the passage of ATSA, TSA assumed operational responsibility from air carriers for screening checked baggage for explosives. Airport operators and air carriers continued to be responsible for processing and transporting passenger checked baggage from the check-in counter to the airplane. Explosive detection systems include EDS and ETD machines (see figs. 1 and 2). EDS machines use computed tomography X-rays adapted from the medical field to automatically recognize the characteristic signatures of threat explosives. By taking the equivalent of hundreds of X-ray pictures of a bag from different angles, the EDS machine examines the objects inside of the baggage to identify characteristic signatures of threat explosives. TSA has certified, procured, and deployed EDS machines manufactured by two companies. ETD machines work by detecting vapors and residues of explosives. 
Human operators collect samples by rubbing bags with swabs, which are chemically analyzed to identify any traces of explosive materials. ETD is used both for primary, or initial, screening of checked baggage and for secondary screening, which resolves alarms from EDS machines that indicate the possible presence of explosives inside a bag. TSA has certified, procured, and deployed ETD machines from three manufacturers. The operational processes for conducting screening of checked baggage for explosives using ETD and EDS machines differ. Specifically, the ETD screening process requires the screener to manually screen checked baggage by (1) swabbing an area of, or item in, the checked bag and (2) placing the swab in the ETD machine. The ETD machine then evaluates the sample on the swab to detect trace amounts of explosive residue. If these steps are not conducted correctly, the test may fail to detect explosives that are present. Since the first steps of this process require screeners to collect explosive particles, they are vulnerable to human error. In contrast, when using EDS machines as the primary means of detection, the screening is automated and, without screener involvement, the machine either alarms, indicating the possible presence of explosives, or does not. As we reported in February 2004, to initially deploy EDS and ETD equipment to screen 100 percent of checked baggage for explosives, TSA implemented interim airport lobby solutions and in-line EDS baggage screening systems. The interim lobby solutions involved placing stand-alone EDS and ETD machines in the nation’s airports, most often in airport lobbies or baggage makeup areas where baggage is sorted for loading onto aircraft. 
For EDS in a stand-alone mode (not integrated with an airport’s or air carrier’s baggage conveyor system) and ETD, TSA screeners are responsible for obtaining the passengers’ checked baggage from either the passenger or the air carrier, lifting the bags onto and off of EDS machines or ETD tables, using TSA protocols to appropriately screen the bags, and returning the cleared bags to the air carriers to be loaded onto departing aircraft. In addition to installing stand-alone EDS and ETD machines in airport lobbies and baggage makeup areas, TSA collaborated with some airport operators and air carriers to install integrated in-line EDS baggage screening systems within their baggage conveyor systems. While each in-line baggage screening system is unique, these systems generally operate in a similar manner, as shown in figure 3. Typically, in-line systems involve checked baggage undergoing automated screening while on a conveyor belt that sorts and transports baggage to the proper location for its ultimate loading onto an aircraft. During this automated process, all checked baggage on the conveyor belt passes through EDS machines where the bags are screened for explosives. If no explosives are detected during this primary screening, the bag continues forward on the main conveyor belt to be loaded onto the aircraft. If an EDS machine alarms, indicating the possibility of explosives, TSA screeners, by reviewing computer-generated images of the inside of the bag, attempt to determine whether a suspect item is actually a threat. If the screener determines that the suspect item is not a threat, the cleared bag continues on the conveyor belt system to be loaded onto the aircraft. If the screener is unable to make this determination, the bag is diverted from the main conveyor belt into an area where it receives a secondary screening in which the bag is opened and the contents of the bag are screened by a screener using an ETD machine and physical inspection. 
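The in-line screening flow just described can be sketched as a short decision function. This is an illustrative sketch only; the function and status strings are ours, not TSA's, and the three flags stand in for the outcomes of the primary, on-screen, and secondary screening stages.

```python
def screen_bag(eds_alarm: bool, image_cleared: bool, etd_cleared: bool) -> str:
    """Sketch of the in-line EDS screening flow described above.

    eds_alarm:     the EDS machine alarmed on the bag
    image_cleared: a screener cleared the bag via on-screen image review
    etd_cleared:   secondary ETD/physical inspection cleared the bag
    """
    if not eds_alarm:
        return "load onto aircraft"        # primary screening: no alarm
    if image_cleared:
        return "load onto aircraft"        # on-screen alarm resolution
    if etd_cleared:
        return "load onto aircraft"        # secondary screening cleared the bag
    return "notify appropriate officials"  # bag tests positive for explosives

# Example: an alarmed bag that a screener clears on-screen never reaches
# the labor-intensive secondary screening room.
print(screen_bag(eds_alarm=True, image_cleared=True, etd_cleared=False))
```

The point of the staged design is that each stage clears most bags, so only a shrinking fraction ever needs the manual ETD step.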
If the bag successfully clears secondary screening, it is placed on the main conveyor belt system to be loaded onto the aircraft. If the bag tests positive for explosives during secondary screening, TSA screeners are required to notify the appropriate officials. Both airports and the federal government have cooperated to jointly fund the installation of in-line EDS baggage screening systems. The federal government has used three funding mechanisms to modify airport facilities to install in-line EDS systems—LOIs, other transaction agreements, and Airport Improvement Program funds from the FAA. In 2003, Congress authorized TSA to issue LOIs for airport modifications related to the installation of in-line baggage screening systems. When an LOI is established to provide multiyear funding for a project, the airport operator is responsible for providing—up front—the total funding needed to complete the project, even though the LOI is not a binding commitment of federal funds. Work proceeds with the understanding that TSA will, if sufficient funding is appropriated, reimburse the airport operator for a percentage of the facility modification costs. Congress initially mandated a 75 percent federal government cost-share for LOIs in February 2003, but in December of that year it increased the cost-share to 90 percent. However, the fiscal year 2005 DHS Appropriations Act subsequently re-established the federal government cost-share at 75 percent for fiscal year 2005. Also, the President’s fiscal year 2006 budget request for TSA proposes to maintain the 75 percent federal government cost share for projects funded by LOIs at large and medium airports. TSA also uses other transaction agreements, administrative vehicles that allow it to directly fund airport operators engaged, or planning to engage, in smaller in-line airport modification projects without undertaking a long-term commitment. 
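The shifting LOI cost-shares above amount to a simple split of facility-modification costs between the federal government and the airport operator. A quick illustration follows; the $100 million project cost is a hypothetical figure of ours, not from the report.

```python
def loi_split(project_cost: float, federal_share: float) -> tuple[float, float]:
    """Split an LOI-funded facility-modification cost between the federal
    government (reimbursed share) and the airport operator (remainder)."""
    federal = project_cost * federal_share
    return federal, project_cost - federal

# Illustrative $100 million project under the statutory cost-shares:
print(loi_split(100e6, 0.75))  # Feb. 2003 mandate: federal $75M, airport $25M
print(loi_split(100e6, 0.90))  # Dec. 2003 increase: federal $90M, airport $10M
# The FY 2005 DHS Appropriations Act re-established the 75 percent share.
```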
These transactions, which can take many forms and are generally not required to comply with federal laws and regulations that apply to contracts, grants, or cooperative agreements, enable the federal government and others entering into these agreements to freely negotiate provisions that are mutually agreeable. In addition, airports have utilized Airport Improvement Program grants, which are awarded by the Secretary of Transportation for airport planning and development to maintain a safe and efficient nationwide system of public airports and for limited aviation security purposes. Some airport operators used the Airport Improvement Program in fiscal years 2002 and 2003 to fund facility modifications needed to accommodate installing in-line systems. However, provisions of ATSA and the Vision 100—Century of Aviation Reauthorization Act (Vision 100), as well as fiscal years 2004 and 2005 appropriations language, have limited the future availability of the Airport Improvement Program to fund in-line systems. From its inception in November 2001 through September 2004, TSA obligated about $2.5 billion (93 percent) of the approximately $2.7 billion it budgeted for fiscal years 2002 through 2004 for the procurement and installation of EDS and ETD machines to screen checked baggage for explosives and to modify airport facilities to accommodate this equipment. Although TSA made significant progress in fielding this equipment, TSA used most of the $2.5 billion to design, develop, and deploy interim lobby screening solutions rather than install more permanent in-line EDS baggage screening systems. TSA employed these as interim solutions in order to meet the congressional deadline for screening all checked baggage for explosives because of the significant costs required to install in-line systems and the need to reconfigure many airports’ baggage conveyor systems to accommodate the equipment. 
TSA officials also stated that they did not have time to conduct the planning needed or make airport modifications required for longer-term and more streamlined baggage screening operations. However, these interim lobby screening solutions used by TSA resulted in operational inefficiencies and additional security risks. Specifically, TSA’s use of stand-alone EDS and ETD machines required a greater number of screener staff and resulted in screening fewer bags for explosives per hour, as compared with using EDS machines in-line with baggage conveyor systems. Also, screening with ETD machines, the primary method at more than 300 airports, is more labor-intensive and less efficient than screening with EDS machines. TSA officials also raised concerns about the possible security risks of baggage screening equipment being located in airport lobbies, where passengers waiting to have their bags screened caused overcrowding. TSA used most of the airport modification and equipment procurement and installation funds to deploy interim lobby screening solutions at more than 400 airports to provide the means for screening all checked baggage for explosives as mandated by the Congress. As shown in table 1, the Congress earmarked about $1.5 billion of the $2.7 billion budgeted amount specifically to install EDS and ETD equipment, and to modify and prepare airport facilities to incorporate the use of this equipment for screening checked baggage for explosives. Congress earmarked and TSA budgeted the remaining $1.2 billion for the procurement of EDS and ETD machines. As of the end of fiscal year 2004, TSA used about one-half of the $2.5 billion that it had obligated to modify airport facilities and to install EDS and ETD machines, and the remaining half primarily to procure EDS and ETD machines. 
As of September 30, 2004, TSA had obligated approximately $1.3 billion of the approximately $1.5 billion that had been earmarked for airport modifications and the installation of EDS and ETD equipment. As shown in table 2, TSA had used about $885 million (about 68 percent) of these obligated funds for the general deployment and installation of EDS and ETD equipment at various airports as part of interim lobby solutions to quickly install checked baggage screening equipment. Also included in this amount are funds that TSA used for installing interim partial in-line baggage screening systems at some airports. In general, these systems were for sections of an airport, were not fully integrated into the airport’s baggage handling system, and most often were temporary until a permanent in-line system could be installed. For example, TSA awarded the Port of Seattle about $9 million for the construction of interim partial in-line systems and modification of the baggage handling systems serving four airlines at the Seattle-Tacoma International Airport. These interim partial in-line systems, which are not fully integrated with the baggage handling systems, will be replaced by permanent in-line baggage screening systems that will be fully integrated with the airport’s baggage handling systems by March 2007. Most of the remaining airport modification and equipment installation obligations are being used by TSA for work related to the permanent in-line integration of EDS baggage screening equipment into airportwide or individual terminal baggage conveyor systems at 33 airports. See appendix III for a listing of the 33 airports having in-line baggage screening systems installed and the source of TSA funding for the in-line systems. TSA contracted with Boeing Service Company in June 2002 to be the prime contractor for deploying EDS and ETD equipment at the nation’s airports. 
This effort involved designing and implementing airport facility modifications for EDS and ETD equipment, such as new construction, infrastructure reinforcement, and modification of electrical systems required to install the EDS and ETD equipment. Originally, the period of performance for this contract was to expire on December 31, 2002. However, TSA extended the contract’s period of performance in order for Boeing to perform activities associated with installing interim lobby solutions to help airports meet, or maintain, compliance with the mandate to screen 100 percent of checked baggage with explosive detection systems. These contract extensions have resulted in a $486.3 million increase in TSA obligations against this contract for work related to airport modifications and EDS and ETD installation, from $372.6 million in fiscal year 2002 to $858.8 million as of September 30, 2004. Boeing had expended most (98 percent) of these funds for interim lobby screening solutions. As of September 30, 2004, TSA had obligated almost 100 percent of the approximately $1.2 billion that had been budgeted or earmarked for procurement of EDS and ETD machines. As shown in table 3, about 80 percent of these funds has been obligated for procuring EDS machines, with most of the remaining funding being obligated for procuring ETD machines. Table 4 summarizes the location of EDS and ETD equipment at the nation’s airports by airport category, based on a June 2004 TSA inventory listing. The number of machines shown in table 4 includes EDS and ETD machines procured by both TSA and FAA prior to and during the establishment of TSA. Although TSA made significant progress in fielding EDS and ETD equipment to the nation’s airports to screen checked baggage for explosives, as mandated by Congress, TSA primarily used this equipment as part of interim lobby solutions to screen checked baggage for explosives, rather than the permanent integration of EDS machines in-line with airport baggage conveyor systems. 
TSA fielded most of the EDS and ETD machines needed to screen checked baggage for explosives to the nation’s over 400 airports by the congressionally mandated date of December 2003 (extended from the original deadline of December 2002), despite limited time to deploy the equipment and some of the equipment not being available when needed. In 1996, FAA, the organization then responsible for the procurement of checked baggage screening equipment, established a long-term goal of fielding explosive detection systems at all airports within 18 years—by 2014. In June 2002, we reported that FAA had fielded 200 EDS and 200 ETD systems at 56 airports. In about two and one-half years following the mandate to screen all checked baggage for explosives, TSA’s deployment of equipment resulted in 1,228 EDS machines and 7,146 ETD machines being available in over 400 airports, as shown in table 4. Initially, EDS manufacturers were unable to produce and deliver the number of machines needed by TSA, so TSA determined that a mix of EDS and ETD technologies would provide an efficient and effective means of passenger protection. During our site visits to 22 category X, I, and II airports, we observed that in most cases, TSA used stand-alone EDS machines and ETD machines as the primary method for screening checked baggage. Generally, this equipment was located in airport lobbies and in baggage makeup areas. In addition, in our survey of 155 federal security directors, we asked the directors to estimate, for the 263 airports included in the survey, the approximate percentage of checked baggage that was screened on or around February 29, 2004, using EDS, ETD, or other approved alternatives for screening baggage such as positive passenger bag match or canine searches. 
As shown in table 5, the directors reported that for 130 large to medium-sized airports in our survey (21, 60, and 49 category X, I, and II airports, respectively), most of the checked baggage was screened using stand-alone EDS or ETD machines. The average percentage of checked baggage reported as screened using EDS machines at airports with partial or full in-line EDS capability ranged from 4 percent for category II airports to 11 percent for category X airports. In addition, the directors reported that ETD machines were used to screen checked baggage 93 to 99 percent of the time at category III and IV airports, respectively. TSA’s interim solution of using stand-alone EDS and ETD machines as the primary method to screen checked baggage for explosives led to operational inefficiencies including (1) the increased use of screener staff, (2) a lower baggage throughput rate per hour for screening baggage for explosives, and (3) an increase in on-the-job injuries. Further, at many airports, TSA’s placement of the minivan-sized stand-alone EDS machines and ETD machines in airport lobbies at times resulted in passenger crowding, which presented unsafe conditions and may have added security risks for passengers and airport workers. Stand-alone EDS and ETD machines are both labor- and time-intensive to operate since each bag must be physically carried to an EDS or ETD machine for screening and then moved back to the baggage conveyor system prior to being loaded onto an aircraft. With an in-line EDS system, checked baggage is screened within an airport’s baggage conveyor system, eliminating the need for a baggage screener or other personnel to physically transport the baggage from the check-in point to the EDS machine for screening and then to the airport baggage conveyor system. 
Further, according to TSA officials, ETD machines and stand-alone EDS machines are less efficient in the number of checked bags that can be screened per hour per machine than are EDS machines that are integrated in-line with the airport baggage conveyor systems. As shown in table 6, as of October 2003, TSA estimated that the number of checked bags screened per hour could more than double when EDS machines were placed in-line versus being used in a stand-alone mode. According to a senior TSA official in the Office of Security Technology, these throughput numbers could change as TSA gains greater operational experience. In January 2004, TSA, in support of its planning, budgeting, and acquisition of security screening equipment, reported to the Office of Management and Budget (OMB) that the efficiency benefits of in-line rather than stand-alone EDS are significant, particularly with regard to bags per hour screened and the number of TSA screeners required to operate the equipment. According to TSA officials, at that time, a typical lobby-based screening unit consisting of a stand-alone EDS machine with three ETD machines had a baggage throughput of 376 bags per hour with a staffing requirement of 19 screeners. In contrast, TSA estimated that approximately 425 bags per hour could be screened by in-line EDS machines with a staffing requirement of 4.25 screeners. In order to achieve the higher throughput rates and reduce the number of screener staff needed to operate in-line baggage screening systems, TSA (1) uses a screening procedure known as “on-screen alarm resolution” and (2) networks multiple in-line EDS machines together, referred to as “multiplexing,” so that the computer-generated images of bags from these machines are sent to a central location where TSA screeners can monitor images of suspect bags from several machines using the on-screen alarm resolution procedure. 
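Taking TSA's reported figures at face value, the per-screener gain from in-line screening can be checked with quick arithmetic:

```python
# Throughput and staffing figures TSA reported to OMB in January 2004:
lobby_bags_per_hour, lobby_screeners = 376, 19      # stand-alone EDS plus 3 ETD machines
inline_bags_per_hour, inline_screeners = 425, 4.25  # in-line EDS

lobby_rate = lobby_bags_per_hour / lobby_screeners     # bags per screener-hour
inline_rate = inline_bags_per_hour / inline_screeners  # bags per screener-hour

print(round(lobby_rate, 1))   # 19.8 bags per screener-hour, lobby configuration
print(round(inline_rate, 1))  # 100.0 bags per screener-hour, in-line
print(round(inline_rate / lobby_rate, 1))  # 5.1, roughly a fivefold gain
```

So while total throughput rises only modestly (376 to 425 bags per hour), the per-screener rate is about five times higher, which is where the staffing savings come from.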
When an EDS machine alarms, indicating the possibility that explosive material may be contained in the bag, the on-screen alarm resolution procedure allows screeners to examine computer-generated images of the inside of a bag to determine if suspect items identified by the EDS machines are actually threats. If a screener, by viewing these images, is able to determine that the suspect item or items identified by the EDS machine are in fact harmless, the screener is allowed to clear the bag, and it is sent to the airline baggage makeup area for loading onto the aircraft. If the screener is not able to make the determination that the bag does not contain suspicious objects, the bag is sent to a secondary screening room where the bag is further examined by a screener. In secondary screening, the screener opens the bag and examines the suspect item or items, and usually swabs the items to collect a sample for analysis using an ETD machine. TSA also uses this on-screen alarm resolution procedure with stand-alone EDS machines. A TSA official estimated that the on-screen alarm resolution procedure with in-line EDS baggage screening systems will enable TSA to reduce by 40 to 60 percent the number of bags requiring the more labor-intensive secondary screening using ETD machines. In estimating the potential savings in staffing requirements, TSA officials stated that they expect to achieve a 20 to 25 percent savings because of reductions in the number of staff needed to screen bags using ETD to resolve alarms from in-line EDS machines. TSA also reported that because procedures for using stand-alone EDS and ETD machines require screeners to lift heavy baggage onto and off of the machines, the interim lobby screening solutions used by TSA led to significant numbers of on-the-job injuries. 
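TSA's estimated 40 to 60 percent reduction in secondary screening can be illustrated with a simple calculation. Note that the 1,000-bag volume and 30 percent EDS alarm rate below are hypothetical inputs of ours; the report gives only the reduction range.

```python
def secondary_screening_load(bags: int, alarm_rate: float, cleared_onscreen: float) -> int:
    """Bags still requiring labor-intensive ETD secondary screening after
    on-screen alarm resolution clears a share of EDS alarms."""
    alarmed = bags * alarm_rate
    return round(alarmed * (1 - cleared_onscreen))

# Hypothetical day of 1,000 bags with an assumed 30 percent EDS alarm rate;
# TSA estimated on-screen resolution clears 40 to 60 percent of those alarms:
print(secondary_screening_load(1000, 0.30, 0.40))  # 180 bags instead of 300
print(secondary_screening_load(1000, 0.30, 0.60))  # 120 bags instead of 300
```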
In addition, in responding to our survey covering 263 airports, numerous federal security directors reported that on-the-job injuries related to lifting heavy baggage onto or off the EDS and ETD machines were a significant concern at the airports for which they were responsible. Specifically, these federal security directors reported that on-the-job injuries caused by lifting heavy bags onto and off of EDS machines were a significant concern at 65 airports, and were a significant concern with the use of ETD machines at 110 airports. To reduce on-the-job injuries, TSA has provided training to screeners on proper lifting procedures. However, according to TSA officials, in-line EDS screening systems would significantly reduce the need for screeners to handle baggage, thus further reducing the number of on-the-job injuries being experienced by TSA baggage screeners. In addition, during our site visits to 22 large and medium-sized airports, several TSA, airport, and airline officials expressed concern regarding the security risks caused by overcrowding due to ETD and stand-alone EDS machines being located in airport lobbies. The location of the equipment resulted in less space available to accommodate passenger movement and caused congestion due to passengers having to wait in lines in public areas to have their checked baggage screened. TSA headquarters officials also reported that large groups of people congregating in crowded airport lobbies, as shown in figure 4, increases security risks by creating a potential target for terrorists. The TSA officials noted that crowded airport lobbies have been the scenes of terrorist attacks in the past. For example, in December 1985, four terrorists walked to the El Al ticket counter at Rome’s Leonardo da Vinci Airport and opened fire with assault rifles and grenades, killing 13 and wounding 75. On that same day, three terrorists killed three people and wounded 30 others at Vienna International Airport. 
Airport operators and TSA are taking actions to install in-line EDS baggage screening systems because of the expected benefits of these systems. However, airport operators and TSA have made limited progress in installing in-line baggage screening systems on a large-scale basis because sufficient resources have not been made available for the installation of these capital-intensive systems. To install in-line systems, airport operators and TSA work cooperatively, with airport operators responsible for the baggage conveyor systems and utilities, and TSA responsible for the EDS and ETD machines. Airport operators and TSA have also shared in the total costs—25 percent and 75 percent respectively under LOI agreements, which have been TSA’s primary method for funding in-line systems. Most airports that have installed or are planning to install in-line systems have relied on or plan to rely on some form of federal funding to help install the systems. However, as of January 2005, TSA has not used LOIs to fund the installation of in-line systems beyond nine airports. Further, TSA has not determined the total cost of installing in-line EDS baggage screening systems at airports determined to need these systems. In addition, perspectives differ regarding the appropriate role of the federal government and airport operators in funding these systems. Our survey of federal security directors and interviews with airport officials revealed that 86 of 130 category X, I, and II airports (66 percent) included in our survey either have, are planning to have, or are considering installing in-line EDS baggage screening systems throughout or at a portion of their airports. 
As shown in figure 5, as of July 2004, 12 airports had operational in-line systems airportwide or at a particular terminal or terminals, and an additional 45 airports were actively planning or constructing in-line systems. Our survey of federal security directors further revealed that an additional 33 of the 130 category X, I, and II airports we surveyed were considering developing in-line systems. In addition to the expected benefits of reduced TSA screening personnel, enhanced security, and increased baggage throughput, airport officials anticipate that they will be able to streamline their airport operations by installing in-line baggage screening systems. For example, some airport and air carrier officials we interviewed anticipate that in-line systems will result in less congestion at airline ticket counters by removing stand-alone EDS and ETD machines from crowded airport lobbies, thereby improving airline passenger flow and queuing in the terminals. Officials also believe that the installation of in-line systems would allow for airport growth because in-line EDS systems could screen checked baggage faster than stand-alone EDS and ETD systems and could be upgraded to accommodate growth in airline passenger traffic. Officials further stated that in-line systems would allow them to retain greater control and autonomy of their baggage handling systems by creating a streamlined process for moving checked baggage directly from where baggage is checked to the aircraft. While in-line EDS baggage screening systems have a number of potential benefits, the total cost to install these systems is unknown, and limited federal resources have been made available to fund these systems on a large-scale basis. 
In-line baggage screening systems are capital-intensive because they often require significant airport modifications, including terminal reconfigurations, new conveyor belt systems, and electrical upgrades. TSA has not determined the total cost of installing in-line EDS baggage screening systems at airports that it had determined need these systems to maintain compliance with the congressional mandate to screen all checked baggage for explosives using explosive detection systems, or to achieve more efficient and streamlined checked baggage screening operations. However, TSA and airport industry association officials have offered a rough order-of-magnitude estimate that the total cost of installing in-line systems ranges from $3 billion to more than $5 billion. TSA officials stated that they have not conducted a detailed analysis of the costs required to install in-line EDS systems at airports because most of their efforts have been focused on deploying and maintaining a sufficient number of EDS and ETD machines to screen all checked baggage for explosives. TSA officials further stated that the estimated costs to install in-line baggage screening systems would vary greatly from airport to airport depending on the size of the airport and the extent of airport modifications that would be required to install the system. While we did not independently verify the estimates, officials from the Airports Council International-North America and American Association of Airport Executives estimated that project costs for in-line systems could range from about $2 million for a category III airport to $250 million for a category X airport. Airport operators have relied on several sources of federal funding to help pay for the planning and construction of in-line EDS baggage screening systems. 
We interviewed airport officials from 53 airports that either have or are in the process of planning or constructing in-line systems to determine the extent to which they have relied on or plan to rely on federal funding to install in-line systems. As shown in table 7, officials at 42 of the 53 airports we interviewed reported that they relied on the use of federal funds from the FAA Airport Improvement Program and TSA to help fund the planning and construction of these systems. However, there was no readily available information that would allow us to determine to what extent these 42 airports relied on or plan to rely on the use of federal funds for constructing or planning their in-line systems. Only one of the 53 airports completed its in-line system without first receiving federal funds for the project, while an additional 10 airports have started planning or constructing their in-line systems without receiving federal assistance or a commitment to receive federal assistance. TSA and airport operators are relying on LOI agreements as their principal method for funding the modification of airport facilities to incorporate in-line baggage screening systems. The fiscal year 2003 Consolidated Appropriations Resolution approved the use of LOIs as a vehicle to leverage federal government and industry funding to support facility modification costs for installing in-line EDS baggage screening systems. When an LOI is established to provide multiyear funding for a project, the airport operator is responsible for providing—up front—the total funding needed to complete the project, even though the LOI is not a binding commitment of federal funds. Work proceeds with the understanding that TSA will, if sufficient funding is appropriated, reimburse the airport operator for a percentage of the facility modification costs, with the airport funding the remainder of the costs. 
LOIs issued by TSA for in-line baggage screening systems provide for reimbursement payments over a multiple year period, contingent upon the appropriation of sufficient funding to cover such projects. As of January 2005, TSA had issued eight LOIs to reimburse nine airports for the installation of in-line EDS baggage screening systems for a total cost of $957.1 million to the federal government over 4 years. In addition, TSA officials stated that as of July 2004, they had identified 27 additional airports that they believe would benefit from receiving LOIs for in-line systems because such systems are needed to screen an increasing number of bags due to current or projected growth in passenger traffic. TSA officials stated that without such systems, these airports would not remain in compliance with the congressional mandate to screen all checked baggage using EDS and ETD. However, because TSA would not identify these 27 airports, we were unable to determine whether these airports are among the 45 airports we identified as in the process of planning or constructing in-line systems. Table 8 identifies the nine airports awarded LOI agreements, total project costs, and the cost-share for the federal government and the airport. TSA officials stated that they also use other transaction agreements as an administrative vehicle to directly fund, with no long-term commitments, airport operators for smaller in-line airport modification projects. Under these agreements, as implemented by TSA, the airport operator also provides a portion of the funding required for the modification. As of September 30, 2004, TSA had negotiated arrangements with eight airports to fund small permanent in-line projects or portions of large permanent in-line projects using other transaction agreements. 
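The scale of the eight LOIs can be put in rough per-airport and per-year terms with quick arithmetic. These are simple averages only; as table 8 shows, actual awards vary widely by airport.

```python
total_federal_cost = 957.1e6  # eight LOIs covering nine airports, per the report
airports, years = 9, 4

per_airport = total_federal_cost / airports
per_year = total_federal_cost / years

print(round(per_airport / 1e6))  # 106, i.e. about $106 million per airport on average
print(round(per_year / 1e6))     # 239, i.e. about $239 million per year over 4 years
```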
These other transaction agreements range from about $640,000 to help fund the conceptual design of an in-line system for one terminal at the Dallas Fort-Worth airport to $37.5 million to help fund the design and construction of in-line systems and modification of the baggage handling systems for two terminals at the Chicago O’Hare International Airport. TSA officials stated that they would continue to use other transaction agreements to help fund smaller in-line projects. Airport operators also used the FAA’s Airport Improvement Program—grants to maintain safe and efficient airports—in fiscal years 2002 and 2003 to help fund facility modifications needed to accommodate installing in-line systems. As shown in table 7, 28 of 53 airports that reported either having constructed or planning to construct in-line systems relied on the Airport Improvement Program as their sole source of federal funding. Airport officials at over half of the 45 airports that we identified as planning or constructing in-line systems stated that they will require federal funding in order to complete the planning and construction of these in-line systems. Despite this reported need, however, the President’s fiscal year 2005 and 2006 budget requests do not provide, and the fiscal year 2005 DHS Appropriations Act does not include, funding for additional LOIs for in-line EDS baggage screening systems beyond the eight already issued. Also, the availability of federal funds from the Airport Improvement Program for future planning and construction of in-line baggage screening systems is limited. In addition, perspectives differ regarding the appropriate role of the federal government, airport operators, and air carriers in funding these capital-intensive systems. 
Officials at 28 of the 45 airports that we identified in figure 5 as planning or constructing in-line baggage screening systems stated that they could not or would not move forward with installing these systems without funding support from TSA. Also, in our review of correspondence to TSA regarding 26 airports’ interest in receiving LOIs, officials from half of the 26 airports stated that they would have to delay, suspend, or abandon their plans for installing in-line systems until TSA committed to funding these projects. According to TSA officials, the high cost of developing final design plans for in-line systems has resulted in airports delaying plans to install the systems until they are confident that TSA will be able to support their funding needs. Although airport officials stated that they will require federal funding to install in-line systems—and TSA officials reported that additional airports will require in-line systems to maintain compliance with the congressional mandate to screen 100 percent of checked baggage for explosives—TSA officials stated that they do not have sufficient resources in their budget to fund additional LOIs beyond the eight LOIs that have already been issued. Vision 100, among other things, provided for the creation of the Aviation Security Capital Fund to help pay for placing EDS machines in line with airport baggage handling systems. However, according to OMB officials, the President’s fiscal year 2005 budget request, which referred to the Vision 100-mandated appropriation of $250 million for the Aviation Security Capital Fund, only supported continued funding for the eight LOIs that had already been issued and did not provide resources to support new LOIs for funding the installation of in-line systems at additional airports.
Further, while the fiscal year 2005 DHS Appropriations Act provides $45 million for installing explosive detection systems in addition to the $250 million from the Aviation Security Capital Fund, Congress directed, in the accompanying conference report, that the $45 million be used to assist in the continued funding of the existing eight LOIs. Moreover, the President’s fiscal year 2006 budget request for TSA provides approximately $240.5 million for the continued funding of the eight existing LOIs and provides no funds for new LOI agreements for in-line system integration activities. In addition, the availability of Airport Improvement Program funds for airport security-related improvements, though expanded for a time, is presently limited as a resource for the installation of in-line EDS baggage screening systems. Following the events of September 11, ATSA authorized the use of Airport Improvement Program funds for security-related enhancements through fiscal year 2002. ATSA also provided for the use of Airport Improvement Program funds to replace airport baggage handling systems and to reconfigure airport terminal baggage areas as required to install explosive detection equipment, but Vision 100 amended this provision to allow only a specific portion of Airport Improvement Program funds to be used for this purpose after December 12, 2003. Subsequent provisions in the fiscal year 2004 Consolidated Appropriations Act, enacted in January 2004, and the fiscal year 2005 Consolidated Appropriations Act, enacted in December 2004, prohibit the use of Airport Improvement Program funds for activities related to the installation of in-line explosive detection systems. A 75 percent federal cost-share will apply to any project under an LOI for fiscal year 2005. The President’s fiscal year 2006 budget request for TSA also proposes to maintain the 75 percent federal cost-share for projects funded by LOIs at large and medium airports.
However, in testimony before Congress, an aviation industry official expressed a different perspective regarding the cost sharing between the federal government and the aviation industry for installing in-line checked baggage screening systems. Testifying in July 2004, the official said that airports contend that the cost of installing in-line systems should be met entirely by the federal government, given its direct responsibility for screening checked baggage, as established by law, in light of the national security imperative for doing so, and because of the economic efficiencies of this strategy. Although the official stated that airports have agreed to provide a local match of 10 percent of the cost of installing in-line systems at medium and large airports, as stipulated by Vision 100, he expressed opposition to the administration’s proposal, which was subsequently adopted by Congress for fiscal year 2005, to reestablish the airport’s cost-share at 25 percent. In July 2004, the National Commission on Terrorist Attacks upon the United States (the 9/11 Commission) also addressed the issue of the federal government/airport cost-share for installing EDS in-line baggage screening systems. Specifically, the commission recommended that TSA expedite the installation of in-line systems and that the aviation industry should pay its fair share of the costs associated with installing these systems, since the industry will derive many benefits from the systems. Although the 9/11 Commission recommended that the aviation industry should pay its fair share of the costs of installing in-line systems, the commission did not report what it believed the fair share to be.
TSA has not conducted the analyses needed to plan for optimally deploying EDS and ETD equipment—including installing in-line EDS baggage screening systems or replacing ETD machines with stand-alone EDS machines—at the nation’s more than 400 airports to enhance security and reduce TSA staffing requirements and long-term costs. Although TSA established criteria to prioritize airport eligibility for receiving LOI funds for in-line EDS baggage screening systems, it has not conducted a systematic, prospective analysis to determine at which airports it could achieve long-term savings and enhanced security by installing in-line systems rather than continue to rely on labor-intensive stand-alone EDS and ETD machines to screen checked baggage for explosives. TSA’s retrospective analysis of the nine airports that received LOIs identified the potential for significant cost savings through the installation of in-line EDS baggage screening systems and the merit of conducting prospective analyses of other airports to provide information for future funding decisions. Further, for airports where in-line systems may not be economically justified because of the high cost of installing the systems, TSA has not conducted an analysis to determine whether it could achieve savings by making greater use of stand-alone EDS systems rather than relying on the use of more labor-intensive ETD machines. OMB has provided guidance for agencies to conduct these types of cost analyses to help build a business case for funding their programs. Moreover, Congress directed that TSA continue submitting plans for installing in-line baggage screening systems. However, TSA has not yet provided Congress with all of the information requested. In October 2003, TSA reported to OMB criteria it used to prioritize airports eligible to receive LOI funds to install in-line EDS baggage screening systems. 
However, TSA did not systematically determine which airports could achieve long-term savings and improved security by installing in-line systems rather than continuing to rely on labor-intensive stand-alone EDS and ETD machines to screen checked baggage for explosives. The criteria TSA established for prioritizing airport participation in the LOI program, as shown in figure 6, included airports that were not yet conducting 100 percent screening of checked baggage with EDS or ETD, and airports that would fall out of compliance with the requirement to screen checked baggage with EDS or ETD at peak load times. In July 2004, TSA officials reported that they had recently expanded these criteria to take into account additional security benefits that an in-line baggage screening system would provide an airport. Specifically, TSA officials stated that they compared airport operational needs with identified threats, based on information received from TSA’s Transportation Security Intelligence Service, to consider security needs for specific airports. TSA officials further reported that an airport’s circumstances, such as passenger load increases or decreases, may change how it is prioritized, given these criteria, and that an airport could qualify to receive LOI funding based on more than one criterion. TSA officials stated that they selected the first nine airports to receive LOIs to fund in-line baggage screening systems because, in general, they were the first to submit applications for an LOI, and they agreed to pay 25 percent of airport modification costs in accordance with the LOI requirements. TSA officials also stated that the nine airports generally met their criteria even though seven of the airports had received LOIs in July and September 2003, before TSA’s promulgation of the criteria in October 2003.
In addition to the nine airports currently receiving LOI funds, TSA officials stated that, based on their criteria, in July 2004, they identified 27 additional airports that are potential candidates for 22 future LOIs. TSA officials stated that an in-line screening system at each of these airports would provide enhanced security and efficiencies. More important, officials stated that if the 27 airports did not receive an LOI to install an in-line baggage system, these airports could fall out of compliance with the requirement to screen 100 percent of checked baggage using explosive detection systems during peak passenger traffic load periods or because of passenger load increases or new air carrier service—TSA’s second prioritization criterion shown in figure 6. Although TSA officials asserted that in July 2004, 27 airports were good candidates for in-line systems, they would not identify the 27 airports. TSA officials also did not provide the analyses they conducted to determine that these airports would fall out of compliance with the mandate to screen all checked baggage using explosive detection systems or state why these airports were more at risk than other airports for not complying with this mandate. Rather, TSA officials stated that they identified these 27 airports as good candidates for LOIs based on their day-to-day working knowledge of airports and professional judgment about airport operations. TSA officials were also unable to provide information on what the associated costs, benefits, and time frames would be for installing in-line systems at these 27 airports. 
Although TSA developed criteria to use as a guide for determining which airports should receive LOI funding for in-line EDS baggage screening systems, TSA has not yet conducted a systematic, prospective analysis of individual airports or groups of airports to determine at which airports installing in-line EDS systems would be cost-effective in terms of reducing long-term screening costs for the government and would improve security. Such an analysis would enable TSA to determine at which airports it would be most beneficial to invest limited federal resources for in-line systems rather than continue to rely on the stand-alone EDS and ETD machines to screen checked baggage for explosives, and it would be consistent with best practices for preparing benefit-cost analysis of government programs or projects called for by OMB Circular A-94. TSA officials stated that they have not conducted the analyses related to the installation of in-line systems at individual airports or groups of airports because they have used available staff and funding to ensure all airports have a sufficient number of EDS or ETD machines to meet the congressional mandate to screen all checked baggage with explosive detection systems. During the course of our review, in September 2004, TSA contracted for services through March 2005 to develop methodologies and criteria for assessing the effectiveness and suitability of airport screening solutions requiring significant capital investment, such as those projects associated with the LOI program. However, TSA officials could not provide us with information on how they plan to use the results of the effort in planning for the installation of in-line systems. In October 2004, the conference report accompanying the fiscal year 2005 Department of Homeland Security Appropriations Act directed that TSA continue submitting quarterly reports on its plans for the installation of in-line baggage screening systems.
However, TSA has not yet provided Congress with all of the information requested. Specifically, the conference report directed that TSA provide information describing, among other things, the universe of airports that could benefit from an in-line EDS baggage screening system or other physical modifications; costs associated with each airport’s project, along with a tentative timeline for award and completion; and information reflecting the anticipated cost savings—particularly personnel savings—that would be achieved through the use of in-line checked baggage systems instead of ETD and stand-alone EDS systems. TSA, which was directed to provide a report on September 1, 2003, and every quarter thereafter, provided two reports to Congress. However, TSA was asked to submit amended reports because the original reports lacked the requested information. As of January 2005, TSA had not submitted the amended reports or subsequent reports to Congress. The conference report further directed TSA to develop a comprehensive plan for expediting the installation of in-line EDS baggage screening systems, including the formulation of detailed budget requirements to provide for both equipment acquisition and the capital costs of installing these system configurations at airports. In addition, the Intelligence Reform and Terrorism Prevention Act, enacted in December 2004, among other things, directs TSA to develop a schedule to expedite the installation of in-line explosive detection systems. According to TSA officials, TSA recently began to conduct an analysis of alternatives to determine the best manner to acquire, deploy, and maintain EDS and ETD equipment for screening checked baggage as part of the Department of Homeland Security Investment Review process.
However, according to TSA officials who prepared the review, the Investment Review Board review did not include a prioritization of which airports should receive funding for in-line systems or an analysis of screening needs at individual airports. TSA would not provide us with the baggage screening program data and analysis that it provided to the Investment Review Board for the review that occurred in late October 2004. Although TSA has not conducted a systematic analysis of cost savings and other benefits that could be derived from the installation of in-line baggage screening systems, TSA’s limited, retrospective cost-benefit analysis of in-line projects at the nine airports with signed LOI agreements found that significant savings and other benefits may be achieved through the installation of these systems. This analysis was conducted in May 2004—after the eight LOI agreements for the nine airports were signed in July and September 2003 and February 2004—to estimate potential future cost savings and other benefits that could be achieved from installing in-line systems instead of using stand-alone EDS systems. TSA estimated that in-line baggage screening systems at these airports would save the federal government $1.3 billion compared with stand-alone EDS systems and that TSA would recover its initial investment in a little over 1 year. TSA’s analysis also provided data to estimate the cost savings for each airport over the 7-year period covered by the analysis. According to TSA’s data, federal cost savings varied from about $50 million to over $250 million at eight of the nine airports, while at one airport, there was an estimated $90 million loss. The individual airport results are described in appendix IV. According to TSA’s analysis of the nine LOI airports, in-line cost savings critically depend on how much an airport’s facilities have to be modified to accommodate the in-line configuration.
Savings also depend on TSA’s costs to buy, install, and network the EDS machines; subsequent maintenance costs; and the number of screeners needed to operate the machines in-line instead of using stand-alone EDS systems. In its analysis, TSA also found that a key factor driving many of these costs is throughput—how many bags an in-line EDS system can screen per hour compared with the rate for a stand-alone system. TSA used this factor to determine how many stand-alone EDS machines could be replaced by a single in-line EDS machine while achieving the same throughput. According to TSA’s analysis, in-line EDS would reduce by 78 percent the number of TSA baggage screeners and supervisors required to screen checked baggage at these nine airports, from 6,645 to 1,477 screeners and supervisors. However, the actual number of TSA screener and supervisor positions that could be eliminated would depend on the individual design and operating conditions at each airport. TSA also reported that aside from increased efficiency and lower overall costs, there were a number of qualitative benefits that in-line systems would provide over stand-alone systems, including:

- fewer on-the-job injuries, since there is less lifting of baggage when EDS machines are integrated into the airport’s baggage conveyor system;
- less lobby disruption, because the stand-alone EDS and ETD machines would be removed from airport lobbies; and
- an unbroken chain of custody of baggage, because in-line systems are more secure, since baggage handling is performed away from passengers.

TSA’s retrospective analysis of these nine airports indicates the potential for cost savings through the installation of in-line EDS baggage screening systems at other airports, and it provides insights about key factors likely to influence potential cost savings from using in-line systems at other airports.
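The staffing reduction TSA reported can be verified with simple arithmetic. The sketch below is ours, not TSA's; it uses only the screener counts stated above, and the function name is illustrative:

```python
# Arithmetic check of the screener reduction TSA reported for the nine
# LOI airports: 6,645 screeners and supervisors with stand-alone
# equipment versus 1,477 with in-line systems.

def reduction_pct(before: int, after: int) -> int:
    """Staffing reduction as a whole-number percentage."""
    return round(100 * (before - after) / before)

print(reduction_pct(6645, 1477))  # prints 78, matching TSA's 78 percent
```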
This analysis also indicates the merit of conducting prospective analyses of other airports to provide information for future federal government funding decisions as required by the OMB guidance on cost-benefit analyses. This guidance describes best practices for preparing benefit-cost analysis of government programs or projects, one of which involves analyzing uncertainty. Given the diversity of airport designs and operations, TSA’s analysis could be modified to account for uncertainties in the values of some of the key factors, such as how much it will cost to modify an airport to install an in-line system. Analyzing uncertainty in this manner is consistent with OMB guidance. Appendix IV illustrates how analyzing uncertainty in TSA’s cost estimates can help identify which cost factors to focus on when determining the appropriateness of installing EDS baggage screening systems for a particular airport. TSA also has not systematically analyzed which airports could benefit from the implementation of additional stand-alone EDS systems in lieu of labor-intensive ETD systems at more than 300 airports that rely on ETD machines, and where in-line EDS systems may not be appropriate or cost-effective. More specifically, TSA has not prepared a plan that prioritizes which airports should receive EDS machines (including machines that become surplus because of the installation of in-line systems) to balance short-term installation costs with future operational savings. Furthermore, TSA has not yet determined the potential long-term operating cost savings and the short-term costs of installing the systems, which are important factors to consider in conducting analyses to determine whether airports would benefit from the installation of EDS machines.
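The kind of analysis described above, weighing short-term installation costs against long-term operating savings under uncertainty, can be as simple as a one-way sensitivity sweep over the most uncertain cost driver. The following is a minimal, purely illustrative sketch in the spirit of OMB Circular A-94; every dollar figure in it is a hypothetical placeholder, not a TSA estimate:

```python
# Illustrative one-way sensitivity sweep: hold estimated annual labor
# savings fixed and vary the facility modification cost, the driver
# TSA's analysis found most variable from airport to airport.
# All figures below are hypothetical placeholders, not TSA data.

YEARS = 7                    # horizon used in TSA's retrospective analysis
ANNUAL_LABOR_SAVINGS = 20.0  # hypothetical, in $ millions per year

def net_savings(modification_cost: float) -> float:
    """Net savings ($ millions) over the horizon for a given up-front cost."""
    return ANNUAL_LABOR_SAVINGS * YEARS - modification_cost

for cost in (60, 100, 140, 180):  # hypothetical modification costs, $ millions
    print(cost, net_savings(cost))
# The sweep locates the break-even modification cost (here 140, i.e.,
# 20 x 7), beyond which an in-line system would not pay for itself.
```

A fuller treatment would discount future savings and sweep the other drivers as well (equipment, maintenance, and staffing costs), but even this simple form shows which estimates most need refinement before a funding decision.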
TSA officials said that they had not yet had the opportunity to develop such analyses or plans, and they did not believe that such an exercise would necessarily be an efficient use of their resources, given the fluidity of baggage screening at various airports. There is potential for TSA to benefit from the introduction of smaller stand-alone EDS machines—in terms of labor savings and added efficiencies—at some of the more than 300 airports where TSA relies on the use of ETD machines to screen checked baggage. Stand-alone EDS machines are able to screen a greater number of bags in an hour than the ETD used for primary screening while lessening reliance on screeners during the screening process. For example, TSA’s analysis showed that an ETD machine can screen 36 bags per hour, while the stand-alone EDS machines can screen 120 to 180 bags per hour. As a result, it would take three to five ETD machines to screen the same number of bags that one stand-alone EDS machine could process. In addition, greater use of the stand-alone EDS machines could reduce staffing requirements. For example, one stand-alone EDS machine would potentially require 6 to 14 fewer screeners than would be required to screen the same number of bags at a screening station with three to five ETD machines. This calculation is based on TSA estimates that 4.1 screeners are required to support each primary screening ETD machine, while one stand-alone EDS machine requires 6.75 screeners—including staff needed to operate ETD machines required to provide secondary screening. Without a plan for installing in-line EDS baggage screening systems, and for using additional stand-alone EDS systems in place of ETD machines at the nation’s airports, it is unclear how TSA will make use of new technologies for screening checked baggage for explosives, such as the smaller and faster EDS machines that may become available through TSA’s research and development programs. 
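The ETD-versus-EDS staffing arithmetic above can be reproduced directly from the figures cited. In the sketch below the throughput and staffing numbers are TSA's as reported in this section, while the variable names and the rounding choices are ours:

```python
# Sketch of the ETD-versus-stand-alone-EDS staffing arithmetic reported
# above: how many primary-screening ETD machines (and their screeners)
# one stand-alone EDS machine can replace at equal throughput.

ETD_BAGS_PER_HOUR = 36       # reported throughput of an ETD machine
EDS_RATES = (120, 180)       # reported throughput range for stand-alone EDS
SCREENERS_PER_ETD = 4.1      # TSA estimate per primary-screening ETD machine
SCREENERS_PER_EDS = 6.75     # per stand-alone EDS, incl. secondary ETD staff

for eds_rate in EDS_RATES:
    etd_equivalent = round(eds_rate / ETD_BAGS_PER_HOUR)
    fewer_screeners = etd_equivalent * SCREENERS_PER_ETD - SCREENERS_PER_EDS
    print(etd_equivalent, round(fewer_screeners))
# Reproduces the report's figures: one stand-alone EDS machine replaces
# three to five ETD machines and needs roughly 6 to 14 fewer screeners.
```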
For example, TSA is working with private sector firms to enhance existing EDS systems and develop new screening technologies through its Phoenix project. As part of this project, in fiscal year 2003, TSA spent almost $2.4 million to develop a new computer-aided tomography explosives detection system that is smaller and lighter than systems currently deployed in airport lobbies. The new system is intended to replace systems currently in use, including larger and heavier EDS machines and ETD equipment. The smaller size of the system creates opportunities for TSA to transfer screening operations to other locations such as airport check-in counters. TSA certified this equipment in December 2004 and will pilot the machine in the field to evaluate its operational efficiency. Also, the ARGUS program was initiated in 1999 to develop EDS equipment that would cost less to build and install—even though baggage throughput may be lower—in order to provide a more uniform level of security using EDS machines at U.S. airports. TSA’s Transportation Security Laboratory has certified three varieties of these machines, though the machines have not been procured and deployed at U.S. airports. TSA has made substantial progress in installing EDS and ETD systems at the nation’s airports—mainly as part of interim lobby screening solutions—to provide the capability to screen all checked baggage for explosives, as mandated by Congress. With the objective of initially fielding this equipment largely accomplished, TSA needs to shift its focus from equipping airports with interim screening solutions to systematically planning for the more optimal deployment of checked baggage screening systems. 
The need for sound planning is also recognized by Congress through the Intelligence Reform and Terrorism Prevention Act of 2004 and through the fiscal year 2005 DHS Appropriations Act Conference Report, which, among other things, directs TSA to develop a comprehensive plan for expediting the installation of in-line explosive detection systems. Part of such planning should include analyzing which airports should receive federal support for in-line EDS baggage screening systems based on cost savings that could be achieved from more effective and efficient baggage screening operations and on other factors, including enhanced security. Also, for airports where in-line systems may not be economically justified because of high investment costs, a cost-effectiveness analysis could be used to determine the benefits of additional stand-alone EDS machines to screen checked baggage in place of the more labor-intensive ETD machines that are currently being used at the more than 300 airports. In addition, TSA should consider the costs and benefits of the new technologies being developed through its research and development efforts, which could provide smaller EDS machines that have the potential to reduce the costs associated with installing in-line EDS baggage screening systems or to replace ETD machines currently used as the primary method for screening. We believe that without such analyses, and without associated plans for the installation of in-line baggage screening systems and for the replacement of ETD machines with stand-alone EDS machines, TSA cannot ensure that it is efficiently allocating its limited resources to maximize the effectiveness of its checked baggage screening operations. An analysis of airport baggage screening needs would also help TSA determine whether expected reduced staffing costs, higher baggage throughput, and increased security would justify the significant up-front investment required to install in-line baggage screening.
TSA’s retrospective analysis of nine airports installing in-line baggage screening systems with LOI funds, while limited, demonstrated that cost savings could be achieved through reduced staffing requirements for screeners and increased baggage throughput. In fact, the analysis showed that using in-line systems instead of stand-alone systems at these nine airports would save the federal government about $1 billion over 7 years and that TSA’s initial investment would be recovered in a little over 1 year. In considering airports for in-line baggage screening systems or the continued use of stand-alone EDS and ETD machines, a systematic analysis of the costs and benefits of these systems would help TSA justify the appropriate screening for a particular airport, and such planning would help support funding requests by demonstrating enhanced security, improved operational efficiencies, and cost savings to both TSA and the affected airport. In addition to identifying the most optimal baggage screening solutions at the nation’s airports, a systematic analysis of baggage screening operations and solutions—including an estimate of savings that could be achieved through the installation of in-line EDS baggage screening systems—would assist the Administration and Congress in determining the appropriate role of the federal government and aviation industry in funding capital-intensive in-line baggage screening systems. By identifying efficiencies that could be achieved for both TSA—such as savings achieved through reduced TSA staffing needs for screeners—and the airports and airlines—such as increased security due to less crowding in airport lobbies and the faster processing of baggage and passengers—the Administration and Congress would have information identifying the costs and benefits of in-line baggage screening systems for all parties involved to assist in determining an appropriate cost-share between the federal government and aviation industry in funding these systems. 
In developing the comprehensive plan for installing in-line EDS baggage screening systems, as directed by the fiscal year 2005 DHS Appropriations Act Conference Report, and in satisfying the requirements set forth in the Intelligence Reform and Terrorism Prevention Act of 2004, we recommend that the Secretary of the Department of Homeland Security direct the Administrator for the Transportation Security Administration to systematically assess the costs and benefits of deploying in-line baggage screening systems at airports that do not yet have in-line systems installed. As part of this assessment, the Administrator should take the following four actions:

- identify and prioritize the airports where the benefits—in terms of cost savings of baggage screening operations and improved security—of replacing stand-alone baggage screening systems with in-line systems are likely to exceed the costs of the systems, or where the systems are needed to address security risks or related factors;
- consider the projected availability and costs of baggage screening equipment being developed through research and development efforts;
- estimate the total funds needed to install in-line systems where appropriate, including the federal funds needed under different assumptions regarding the federal government and airport cost-shares for funding the in-line systems; and
- work collaboratively with airport operators, who are expected to share the costs and benefits of in-line systems, to collect data and prepare the analyses needed to develop plans for installing in-line systems.

We also recommend that the Administrator for the Transportation Security Administration assess the feasibility, expected benefits, and costs of replacing ETD machines with stand-alone EDS machines for primary screening at those airports where in-line systems would not be justified, either economically or for other reasons.
In conducting this assessment, the Administrator should consider the projected availability and costs for screening equipment being developed through research and development efforts. We also made a recommendation to DHS addressing TSA’s protocols for screeners using ETD systems and associated screener training, which is included in the restricted versions of this report. We provided a draft of this report to DHS for review and comment. On February 18, 2005, we received written comments on the draft report, which are reproduced in appendix V. DHS generally concurred with our findings and recommendations, and agreed that efforts to implement the recommendations are critical to a successful checked baggage screening deployment program. Regarding our recommendation that TSA systematically assess the costs and benefits of deploying in-line baggage screening systems at airports that do not yet have in-line systems installed, DHS stated that TSA has initiated an analysis of deploying in-line checked baggage screening systems and is in the process of formulating criteria to use to identify those airports that would benefit from an in-line system. According to DHS, TSA believes that it can focus on approximately 40 airports that handle anywhere from 60 to 80 percent of all checked baggage nationwide. Once TSA officials have finalized the criteria and determined those airports at which in-line systems should be installed, they plan to conduct an airport-specific analysis to determine the individual costs and operational benefits. We are encouraged that TSA is proceeding with this analysis, which should provide a sound business case to justify resource allocation decisions. It is important, however, that TSA establish milestones and time frames for completing the analysis and documenting and reporting the results, such that they are available in a timely manner for DHS and congressional budget decisions. 
Concerning our recommendation that TSA assess the feasibility, expected benefits, and costs of replacing ETD machines with stand-alone EDS machines for primary screening at those airports where in-line systems would not be justified, either economically or for other reasons, DHS stated that TSA has started conducting an analysis of the airports that rely on ETD machines as the primary checked baggage screening technology to identify those airports that would benefit from replacing ETDs with stand-alone EDS equipment. Again, we are pleased that TSA officials are conducting this analysis, which should provide them with the basis for optimizing the use of TSA’s EDS machines for screening checked baggage. Further, DHS stated that TSA continues to review and refine the protocols and training for all screening procedures, including screening checked baggage, and is in the process of implementing the recommendations made by the DHS Inspector General regarding improved screener training and other improvements for both passenger checkpoint and checked baggage screening. TSA also provided additional technical comments on our draft report, which we have incorporated where appropriate. We will send copies of the report to the Secretary of the Department of Homeland Security, the TSA Administrator, and interested congressional committees as appropriate. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3404 or berrickc@gao.gov, or Christine Fossett, Assistant Director, at (202) 512-2956 or fossettc@gao.gov. Key contributors to this report are listed in appendix VI.
To assess efforts by the Transportation Security Administration (TSA) to screen checked baggage for explosives using explosives detection system (EDS) and explosives trace detection (ETD) equipment, we addressed the following questions: (1) How did TSA use the funds it initially budgeted to procure and install EDS and ETD systems and make associated airport modifications, and what was the impact of the initial deployment of EDS and ETD systems? (2) What actions are airports and TSA currently taking to install automated in-line EDS baggage screening systems, and what are the federal resources that have been made available to fund these systems? (3) What actions, if any, is TSA taking to plan for the optimal deployment of in-line baggage screening systems in order to ensure the efficiency, cost effectiveness, and security of its checked baggage screening operations? To determine how TSA used its funding for procuring and installing EDS and ETD systems and modifying airports, we obtained and analyzed relevant legislation and appropriate budget documents, contracts, and inventory reports from TSA related to checked baggage screening with EDS and ETD machines. We interviewed TSA officials from the Office of Budget and Performance, the Office of Acquisition, and TSA’s Security Technology Deployment Office. We also obtained and reviewed funding and contractual documents from these locations. To determine what impact the initial deployment of EDS and ETD systems had on TSA and airport operations, we conducted a literature search to obtain information on the purpose and use of explosive detection screening equipment to screen checked baggage at airports for explosives. This search identified various TSA reports, Department of Homeland Security (DHS) Inspector General reports, Congressional Research Service reports, and aviation industry reports documenting TSA’s use of this equipment for screening checked baggage. 
Also, we obtained and reviewed relevant documents from TSA and interviewed TSA headquarters officials from TSA's Office of Aviation Operations, Office of Chief Counsel, Office of Technology Deployment and Maintenance, and Office of Internal Affairs and Program Review. This documentation included information on staffing requirements and the number of bags per hour that can be screened by in-line EDS systems as compared with stand-alone EDS and ETD machines. We also interviewed officials from TSA, air carriers, airports, explosive detection systems equipment manufacturers, and airport industry associations to obtain information regarding TSA's efforts to improve checked baggage screening operations using EDS machines. Although we could not independently verify the reliability of all of this information, we compared it with other supporting documents, when available, to determine data consistency and reasonableness. Based on these efforts, we believe the information we obtained is sufficiently reliable for this report. Further, we reviewed the results from unannounced covert testing of checked baggage screening operations conducted by TSA's Office of Internal Affairs and Program Review and questioned TSA officials about the procedures used to ensure the reliability of the covert test data. On the basis of their answers, we believe that the covert test data are sufficiently reliable for the purposes of our review. 
To address our second and third objectives—to determine what actions airports and TSA are taking to develop in-line EDS baggage screening systems and what resources are available for these systems, and to determine what TSA is doing to optimally deploy these systems in order to improve the efficiency, cost effectiveness, and security of its checked baggage screening operations—we obtained briefings and other documents related to the planned use and installation of in-line systems and interviewed officials from the Office of Chief Counsel and the Office of Security Technology. We also interviewed officials from TSA's Transportation Security Laboratory in Atlantic City, New Jersey, to discuss the agency's efforts to examine future baggage screening technologies and the certification process for EDS and ETD equipment. We also used information related to checked baggage screening from a Web-based survey of all 155 federal security directors about 263 of the airports under their supervision; this survey is described below. We then followed up by telephone with airport officials from 70 of those airports to obtain additional information about their plans for in-line systems. These airports were selected primarily based on the federal security directors' responses regarding whether the airport had installed or planned to install in-line EDS checked baggage screening systems. In addition, GAO's Office of General Counsel formally requested that TSA describe its means for compliance with the baggage screening requirements of the Aviation and Transportation Security Act and the Homeland Security Act of 2002, and inquired how TSA would approach its letters of intent for funding in-line checked baggage screening systems in light of changes mandated by the Vision 100—Century of Aviation Reauthorization Act. 
Also, to assess potential savings, we reviewed a TSA cost model showing savings expected to be achieved with in-line rather than stand-alone EDS equipment at nine airports. We assessed the model's logic to ensure its completeness and the correctness of its calculations. Also, as discussed in appendix IV, we conducted a Monte Carlo simulation to (1) illustrate the sensitivity of the potential cost savings of replacing stand-alone with in-line EDS systems to alternative values of key cost drivers and (2) explore the variability in the key factors used by TSA in its model. Based on our review of TSA's cost model, we believe that it is sufficiently reliable for the analyses we conducted and the information included in this report. In addition, in addressing all three objectives, we conducted site visits and a Web-based survey. Specifically, we conducted site visits at 22 airports (12 category X airports, 9 category I airports, and 1 category II airport) to observe airport security baggage screening procedures and discuss issues related to the baggage screening processes with TSA, airport, and airline officials. We chose these airports on the basis of one or more of the following factors: a large number of passenger boardings; the existence of an operational in-line system; whether the airport had received or requested TSA funding for an in-line system; whether the airport had begun screening all checked baggage using EDS or ETD; and the proximity to a larger airport being visited by GAO. The results from our airport visits provide examples of checked baggage screening operations and issues but cannot be generalized beyond the airports visited because we did not use statistical sampling techniques in selecting the airports. We administered a Web-based survey to all 155 federal security directors who oversee security at each of the airports falling under TSA's jurisdiction. 
The questionnaire contained questions related to the status of checked baggage screening operations and the planning and implementation of in-line EDS checked baggage screening systems. A GAO survey specialist designed the questionnaire in conjunction with other GAO staff knowledgeable about airport security issues. We conducted pretest interviews with six federal security directors to ensure that the questions were clear, concise, and comprehensive. In addition, TSA managers and an independent GAO survey specialist reviewed the questionnaire. For this Web-based survey, each federal security director received one or two airport-specific questionnaires to complete, depending on the number of airports for which he or she was responsible. Where a federal security director was responsible for more than two airports, we selected the first airport based on the federal security director's location and the second airport to obtain a cross-section of all airports by size and geographic distribution. In all, we requested information on 265 airports. However, two airports were dropped from our initial selection because the airlines serving these airports suspended operations and TSA employees were redeployed to other airports. As a result, our sample size was reduced to 263 airports, which included all 21 category X airports, 60 category I airports, 49 category II airports, 73 category III airports, and 60 category IV airports. Because we did not use probability sampling methods to select the sample of airports, we cannot generalize our findings beyond the selected airports in these categories. We conducted this Web-based survey from late March to mid-May 2004. We received completed questionnaires from all 155 federal security directors for all 263 airports for which we sought information, a 100 percent response rate. 
We called selected survey respondents, or other TSA officials designated to respond on the respondent's behalf, to obtain answers to key survey questions that may have been left blank, to look into situations where instructions were not followed, and to investigate answers that looked suspicious or out of range. The survey results are not subject to sampling errors because all federal security directors were asked to participate in the survey and we did not use probability sampling techniques to select specific airports. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, inconsistencies in how a particular question is interpreted, in the sources of information available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaires, the data collection, and the data editing and analysis to minimize these non-sampling errors. Also, because these were Web-based surveys whereby respondents entered their responses directly into our database, data entry or transcription errors were still possible. In addition, all computer programs used to analyze the data were peer-reviewed and verified to ensure that the syntax was written and executed correctly. We performed our work from September 2003 through January 2005 in accordance with generally accepted government auditing standards. Certain information we obtained and analyzed regarding explosive detection technologies and their effectiveness in TSA's checked baggage screening operations is classified or is considered by TSA to be sensitive security information. Accordingly, the results of our review of this information have been removed from this report. 
Appendix II: Summary of Checked Baggage Screening Legislation

Aviation and Transportation Security Act, Pub. L. No. 107-71, 115 Stat. 597 (Nov. 19, 2001):
- Established the Transportation Security Administration (TSA) as the agency responsible for security in all modes of transportation, including civil aviation
- Appointed federal security managers to oversee the screening of passengers and baggage at airports
- Provided for the deployment of federal personnel to screen all passengers and baggage at airports
- Mandated the screening of all checked baggage with explosive detection systems by December 31, 2002, and authorized alternative means to screen checked baggage (positive passenger bag match, manual search, canine search in combination with other means, or other technology approved by TSA) where explosive detection systems are unavailable
- Mandated the imposition of passenger security fees (and authorized the imposition of air carrier fees, if necessary)

Homeland Security Act of 2002, Pub. L. No. 107-296, 116 Stat. 2135 (Nov. 25, 2002)

Consolidated Appropriations Resolution, 2003, Pub. L. No. 108-7, 117 Stat. 386 (Feb. 20, 2003):
- Authorized $500 million for each of fiscal years 2003 through 2007 for TSA to issue letters of intent (LOIs) to airports, with a government cost share of 75 percent at airports that each account for at least 0.25 percent of total passenger boardings at all airports (90 percent at any other airport)

Intelligence Reform and Terrorism Prevention Act of 2004, Pub. L. No. 108-458, 118 Stat. 3638 (Dec. 17, 2004):
- Requires TSA to take action to expedite the installation and use of baggage screening equipment
- Requires TSA, within 180 days of enactment, to submit to the Senate Committee on Commerce, Science and Transportation and the House of Representatives Committee on Transportation and Infrastructure schedules for expediting the installation and use of in-line baggage screening equipment, with estimates of the impact that such equipment, facility modification, and baggage conveyor placement will have on staffing needs and levels related to aviation security, and for replacing trace detection equipment with explosive detection system equipment as soon as practicable and where appropriate
- Requires the Secretary of Homeland Security, in consultation with air carriers, airport operators, and other interested parties, to submit, in conjunction with the fiscal year 2006 budget proposal, a proposed formula for cost sharing among federal, state, and local governments and the private sector for the installation of in-line baggage screening systems, recommendations for defraying the costs of in-line systems, and a review of innovative financing approaches and possible cost savings associated with installing in-line systems at airports
- Amends 49 U.S.C. § 44923(i) by increasing the authorized appropriations for each of fiscal years 2005 through 2007 to $400 million
- Allows the reimbursement period under any LOI to extend for a maximum of 10 years after issuance

Funding appropriated and other key provisions:

2002 Emergency Supplemental Appropriations Act for Recovery from and Response to Terrorist Attacks on the United States, Pub. L. No. 107-38, 115 Stat. 220 (Sept. 18, 2001)

Department of Transportation and Related Agencies Appropriations Act, Pub. L. No. 107-87, 115 Stat. 833 (Dec. 18, 2001)

Department of Defense Emergency Supplemental Appropriations for Recovery from and Response to Terrorist Attacks on the United States, 2002; Department of Defense Appropriations Act, 2002, Pub. L. No. 107-117, 115 Stat. 2230 (Jan. 10, 2002)

Funds to be obligated from amounts made available in Public Law 107-38: $108.5 million to "FAA Facilities and Equipment" (available until Sept. 30, 2004) for procurement and installation of explosive detection systems, and $50 million to "FAA Research and Development" (available until Sept. 30, 2003), of which H.R. Conf. Rep. No. 107-350 (2001) directed $2 million for a demonstration of 100 percent positive passenger bag match technology at DCA

2002 Supplemental Appropriations Act for Further Recovery from and Response to Terrorist Attacks on the United States, Pub. L. No. 107-206, 116 Stat. 820 (Aug. 2, 2002)

Consolidated Appropriations Resolution, 2003, Pub. L. No. 108-7, 117 Stat. 386 (Feb. 20, 2003)

$3.0379 billion (available until expended) for screening activities, of which H.R. Conf. Rep. No. 108-10 (2003) directed $1.4159 billion for baggage screening activities: the Resolution earmarked $265 million for the physical modification of commercial service airports to install, and $174.5 million for the procurement of, checked baggage explosive detection systems; the Conference Report directed $900 million for baggage screeners, $75 million for detection equipment maintenance, and $1.4 million for a checked baggage data system

$235 million (available until expended) for the physical modification of commercial service airports to install checked baggage explosive detection systems

$1.3187 billion (available until expended) for baggage screening activities: the act earmarked $250 million for physical modification of commercial service airports to install, and $150 million for the procurement of, checked baggage explosive detection systems

Consolidated Appropriations Act, 2004, Pub. L. No. 108-199, 118 Stat. 3 (Jan. 23, 2004)

Department of Homeland Security Appropriations Act, 2005, Pub. L. No. 108-334, 118 Stat. 1298 (Oct. 18, 2004)

$1.45246 billion (available until expended) for baggage screening activities: the act earmarks $180 million for procurement of, and $45 million to install, checked baggage explosive detection systems

Consolidated Appropriations Act, 2005, Pub. L. No. 108-447, 118 Stat. 2809 (Dec. 8, 2004)

TSA estimated that baggage screening operations at the nine airports receiving letters of intent (LOIs) will result in a savings to the federal government of $1.26 billion over 7 years, and that the initial investment would be recovered in 1.07 years, as a result of installing in-line rather than stand-alone EDS systems. To make these estimates, TSA made a variety of assumptions about in-line and stand-alone EDS systems, including how many bags each can process per hour, how many screeners each would need, and how much it would cost to purchase, install, and operate these systems. In addition, TSA used data on how much it cost to modify these nine airports to accommodate in-line systems. TSA's estimates, however, are subject to the uncertainties inherent in many of these assumptions. TSA could have analyzed the uncertainty in its estimate by conducting sensitivity or other analyses to determine how variations in these assumptions would change its estimate of cost savings. Analyzing uncertainty in this way is consistent with best practices for preparing benefit-cost analyses of government programs or projects, as called for by OMB Circular A-94. Nonetheless, TSA's cost model for these nine airports offers insights about key factors likely to influence potential cost savings at other airports. To illustrate taking uncertainty into account, we conducted a Monte Carlo analysis using TSA's cost model. 
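The mechanics of such a Monte Carlo sensitivity analysis can be sketched in a few lines of code. The sketch below is ours, not TSA's model: the triangular cost distributions, parameter values, and labor-cost relationship are all hypothetical assumptions chosen only to illustrate how uncertain cost drivers translate into a range of per-machine savings.

```python
import random

random.seed(7)

def simulate_savings_per_machine(n_trials=10_000):
    """Monte Carlo sketch: draw uncertain cost drivers and compute
    per-machine savings of in-line vs. stand-alone EDS screening.
    All distributions and dollar values below are hypothetical."""
    results = []
    for _ in range(n_trials):
        # Airport modification cost per in-line machine ($M); given
        # the widest spread because it is the dominant uncertainty.
        modification = random.triangular(2.0, 14.0, 6.0)
        # Bags screened per hour by each configuration.
        inline_rate = random.triangular(300, 500, 400)
        standalone_rate = random.triangular(120, 220, 170)
        # Annual labor cost scales inversely with throughput
        # (fewer screeners per bag at higher screening rates).
        labor_budget = 2.0  # $M/yr per machine at the stand-alone rate
        inline_labor = labor_budget * standalone_rate / inline_rate
        # Net savings over a 7-year horizon, less modification cost.
        savings = 7 * (labor_budget - inline_labor) - modification
        results.append(savings)
    results.sort()
    low = results[int(0.05 * n_trials)]   # 5th percentile
    high = results[int(0.95 * n_trials)]  # 95th percentile
    mean = sum(results) / n_trials
    return low, mean, high

low, mean, high = simulate_savings_per_machine()
print(f"per-machine savings ($M): 5th pct {low:.1f}, "
      f"mean {mean:.1f}, 95th pct {high:.1f}")
```

Because the modification cost is given the widest spread in this sketch, it dominates the spread of the simulated savings, which is the same pattern the analysis of TSA's model identified.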
We found that TSA's cost savings estimate of $3.5 million per in-line EDS machine, as compared with stand-alone machines, could range from a loss of $1.6 million to a savings of $8.3 million per machine using generalized assumptions about cost uncertainty in TSA's model. The most important source of uncertainty causing this wide range in possible savings was the cost to modify an airport to accommodate an in-line EDS system. Variation in modification costs explained over 60 percent of the variation in potential cost savings from in-line as compared with stand-alone EDS. The next most important variables, the number of bags per hour that in-line and stand-alone machines can screen, each accounted for about 15 percent of the variation in cost savings. In this way, Monte Carlo analysis can offer insights on factors to focus on when determining the appropriateness of an in-line EDS baggage screening system for a particular airport. The analysis provided by TSA aggregated the nine airports to present a total estimate. Using TSA's analysis, we were able to determine the results for each of the nine airports. Figure 7 illustrates the variation in modification costs at the nine airports TSA studied, ranging from over $14 million per in-line EDS machine at Seattle to less than $2 million at Boston and Dallas-Fort Worth. Figure 8, which shows the cost savings from in-line EDS compared with stand-alone EDS, indicates that Seattle could end up spending more for an in-line EDS system than for stand-alone EDS machines. Further, as shown in figure 9, at Seattle the relatively large upfront costs for in-line EDS are not offset by the estimated $48 million in operation and maintenance cost savings; therefore, the in-line EDS system may be more costly than stand-alone EDS. By contrast, at Dallas-Fort Worth, the upfront costs of in-line EDS are lower than for stand-alone EDS, and there is an estimated $252 million in operation and maintenance cost savings. 
Therefore, the in-line EDS system at Dallas-Fort Worth may be less costly than stand-alone EDS.

In addition to those named above, David Alexander, Leo Barbour, Charles Bausell Jr., Kevin Copping, Katherine Davis, Kevin Dooley, David Hooper, Lemuel Jackson, Stuart Kaufman, Noel Lance, Thomas Lombardi, Jan Montgomery, Jobenia Odum, Jean Orland, Keith Rhodes, Minette Richardson, and Mark Tremba were key contributors to this report.

Airport categories: TSA classifies the over 400 airports in the United States that require screening into one of five categories (X, I, II, III, and IV) based on various factors, such as the total number of take-offs and landings annually, the extent to which passengers are screened at the airport, and other special security considerations. In general, category X airports have the largest number of passenger boardings and category IV airports have the smallest. TSA periodically reviews airports in each category and, if appropriate, changes an airport's categorization to reflect current operations.

Airport Improvement Program: The Airport Improvement Program has provided federal grants since the passage of the Airport and Airway Improvement Act of 1982, Pub. L. No. 97-248, 96 Stat. 324. Administered by the Federal Aviation Administration, Airport Improvement Program grants have supported airport planning and development. Grants are issued to maintain and enhance airport safety, preserve existing airport infrastructure, and expand capacity and efficiency throughout the airport system. Funds obligated for the Airport Improvement Program are drawn from the Airport and Airway Trust Fund, which is supported by user fees and fuel taxes.

Checked baggage: An individual's personal property offered to and accepted by an aircraft operator for transport, which will be inaccessible to the individual during flight.

Cost effectiveness: A program is cost effective if, on the basis of life cycle cost analysis of competing alternatives, it is determined to have the lowest costs expressed in present value terms for a given amount of benefits. 
Cost-effectiveness analysis is appropriate whenever it is unnecessary or impractical to consider the dollar value of the benefits provided by the alternatives under consideration. This is the case whenever (1) each alternative has the same annual benefits expressed in monetary terms or (2) each alternative has the same annual effects but dollar values cannot be assigned to their benefits.

Explosives detection system (EDS): A TSA-certified automated device that has the ability to detect, in checked baggage, the amounts, types, and configurations of explosive material specified by TSA. An EDS machine uses computer-aided tomography to automatically measure the density of objects in baggage to determine whether the objects have the same density as explosives. The system automatically triggers an alarm when objects with high densities characteristic of explosives are detected.

Explosives trace detection (ETD): A device that has been certified by TSA for detecting explosive vapors and residues on objects intended to be transported aboard an aircraft. Explosives trace detection works by detecting vapors and residues of explosives. Human operators collect samples by rubbing bags with swabs, which are chemically analyzed to identify any traces of explosive materials. ETD is used both for primary screening of baggage and for secondary screening to resolve alarms from EDS machines.

Interim lobby screening solutions: Solutions employed by TSA to initially deploy explosive detection systems to screen 100 percent of checked baggage for explosives until more permanent solutions could be designed and constructed. Efforts involved designing and implementing facility modifications, such as new construction, infrastructure reinforcement, and modification of electrical systems required to install the EDS and ETD equipment, and developing and administering equipment training for baggage screeners.

In-line system (also known as integrated checked baggage screening system or integrated EDS system): A baggage conveyor system with incorporated EDS machines. 
The EDS's baggage feed and output belts are directly connected to an airline's or airport's baggage belt system. The checked baggage undergoes automated screening in the EDS while on the conveyor belt system that sorts and transports baggage to the proper location for its ultimate loading on an aircraft. Baggage is introduced into the EDS without manual loading or unloading by TSA screeners.

Letter of intent (LOI): The fiscal year 2003 Consolidated Appropriations Resolution, Pub. L. No. 108-7, 117 Stat. 11, authorized an LOI program for shared federal government and aviation industry funding to support facility modification costs associated with the installation of in-line EDS baggage screening systems. The Vision 100—Century of Aviation Reauthorization Act, Pub. L. No. 108-176, 117 Stat. 2490 (2003), also authorized the use of LOIs for this purpose.

EDS machines may also be networked together so that images from multiple EDS machines can be sent to a centralized location where screeners can resolve alarms by studying EDS-generated images. When an EDS machine alarms, indicating the possibility of explosives, TSA screeners attempt to determine, by reviewing computer-generated images of the inside of the bag, whether a suspect item or items are in fact explosive materials. If the screener is unable to make this determination, the bag is diverted from the main conveyor belt into an area where it receives secondary screening by a screener with an ETD machine.

Other transaction agreements: Administrative vehicles used by TSA to directly fund airport operators for smaller in-line airport modification projects without undertaking a long-term commitment. These transactions, which take many forms and are generally not required to comply with federal laws and regulations that apply to contracts, grants, and cooperative agreements, enable the federal government and others entering into these agreements to freely negotiate provisions that are mutually agreeable. 
Positive passenger bag match: An alternative means of screening checked baggage, conducted by the airline, which requires that the passenger be on the same aircraft as his or her checked baggage.

Stand-alone EDS: EDS machines that are placed in terminal lobbies, curbside, or in baggage makeup areas and are not integrated with baggage conveyor systems as part of in-line systems.

Throughput: Bags screened per hour, as a measure of efficiency.
Mandated to screen all checked baggage using explosive detection systems at airports by December 31, 2003, the Transportation Security Administration (TSA) deployed two types of screening equipment: explosives detection systems (EDS), which use computer-aided tomography X-rays to recognize the characteristics of explosives, and explosives trace detection (ETD) systems, which use chemical analysis to detect traces of explosive material vapors or residues. This report assesses (1) TSA's use of budgeted funds to install EDS and ETD systems and the impact of initially deploying these systems, (2) TSA and airport actions to install EDS machines in-line with baggage conveyor systems, and the federal resources made available for this purpose, and (3) actions taken by TSA to optimally deploy checked baggage screening systems. TSA has made substantial progress in installing EDS and ETD systems at the nation's more than 400 airports to provide the capability to screen all checked baggage using explosive detection systems, as mandated by Congress. However, in initially deploying EDS and ETD equipment, TSA placed stand-alone ETD and the minivan-sized EDS machines--mainly in airport lobbies--that were not integrated in-line with airport baggage conveyor systems. TSA officials stated that the agency's ability to initially install in-line systems was limited because of the high costs and the time required for airport modifications. These interim lobby solutions resulted in operational inefficiencies, including requiring a greater number of screeners, as compared with using EDS machines in-line with baggage conveyor systems. TSA and airport operators are taking actions to install in-line baggage screening systems to streamline airport and TSA operations, reduce screening costs, and enhance security. Eighty-six of the 130 airports we surveyed either have, are planning to have, or are considering installing full or partial in-line systems. 
However, resources have not been made available to fund these capital-intensive systems on a large-scale basis. Also, the overall costs of installing in-line baggage screening systems at each airport are unknown, the availability of future federal funding is uncertain, and perspectives differ regarding the appropriate role of the federal government, airport operators, and air carriers in funding these systems. Moreover, TSA has not conducted a systematic, prospective analysis to determine at which airports it could achieve long-term savings and enhance efficiencies and security by installing in-line systems or, where in-line systems may not be economically justified, by making greater use of stand-alone EDS systems rather than relying on the labor-intensive and less efficient ETD screening process. However, at nine airports where TSA has agreed to help fund the installation of in-line baggage screening systems, TSA conducted a retrospective cost-benefit analysis which showed that these in-line systems could yield significant savings for the federal government. TSA further estimated that it could recover its initial investment in the in-line systems at these airports in a little over 1 year.
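The payback claim summarized above rests on simple arithmetic, which can be checked directly against the figures TSA reported elsewhere in this report ($1.26 billion in savings over 7 years, with the initial investment recovered in 1.07 years). The sketch below assumes undiscounted, straight-line payback; the variable names are ours, and the upfront investment is inferred from the payback period rather than stated by TSA.

```python
# Check the simple (undiscounted) payback arithmetic implied by the
# figures TSA reported. The upfront investment is inferred, not a
# figure TSA published.
total_savings_7yr = 1.26e9            # $1.26 billion over 7 years
annual_savings = total_savings_7yr / 7
payback_years = 1.07                  # reported recovery period
implied_investment = payback_years * annual_savings
print(f"annual savings: ${annual_savings / 1e6:.0f} million")
print(f"implied upfront investment: ${implied_investment / 1e6:.1f} million")
```

On these assumptions the nine-airport investment works out to roughly $190 million recovered from about $180 million in annual savings, which is consistent with "a little over 1 year."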
Beginning in the late 1990s, CMS took steps to broaden the mechanisms in place intended to help ensure that nursing home residents receive quality care. To augment the periodic assessment of homes' compliance with federal quality requirements, CMS contracted for the development of QMs and tasked QIOs with providing assistance to homes to improve quality. CMS used QMs both to provide the public with information on nursing home quality of care and to help evaluate QIO efforts to address quality-of-care issues, such as pressure ulcers. During the 7th SOW, organizations other than QIOs were also working with nursing homes to improve quality. Two indicators used by CMS to assess the quality of care that nursing homes provide to residents are (1) deficiencies identified during standard surveys and complaint investigations and (2) QMs. Both indicators are publicly reported on CMS's Nursing Home Compare Web site. Under contract with CMS, state agencies conduct standard surveys to determine whether the care and services provided by nursing homes meet the assessed needs of residents and whether nursing homes are in compliance with federal quality standards. These standards include preventing avoidable pressure ulcers; avoiding unnecessary restraints, either physical or chemical; and averting a decline in a resident's ability to perform activities of daily living, such as toileting or walking. During a standard survey, a team that includes registered nurses spends several days at a home reviewing the quality of care provided to a sample of residents. States are also required to investigate complaints filed against nursing homes by residents, families, and others. Complaint investigations are less comprehensive than standard surveys because they generally target specific allegations raised by the complainants. 
Any deficiencies identified during standard surveys or complaint investigations are classified according to the number of residents potentially or actually affected (isolated, pattern, or widespread) and their severity (potential for minimal harm, potential for more than minimal harm, actual harm, or immediate jeopardy). Deficiencies cited at the actual harm and immediate jeopardy level are considered serious and could trigger enforcement actions such as civil money penalties. We have previously reported on the considerable interstate variation in the proportion of homes cited for serious care problems, which ranged during fiscal year 2005 from 4 percent of Florida’s 691 homes to 44 percent of Connecticut’s 247 homes. We reported that such variability suggests inconsistency in states’ interpretation and application of federal regulations; in addition, both we and CMS have found that state surveyors do not identify all serious deficiencies. QMs are relatively new indicators of nursing home quality. Although survey deficiencies have been publicly reported since 1998, CMS did not begin posting QMs on its Nursing Home Compare Web site until November 2002. QMs are derived from resident assessments known as the MDS that nursing homes routinely collect on all residents at specified intervals. Conducted by nursing home staff, MDS assessments cover 17 areas, such as skin conditions, pain, and physical functioning. In developing QMs, CMS recognized that any publicly reported indicators must pass a rigorous standard for validity and reliability. In October 2002, we reported that national implementation of QMs was premature because of validity and reliability concerns. Valid QMs would distinguish between good and poor care provided by nursing homes; reliable QMs would do so consistently. One of our main concerns about publicly reporting QMs was that the QM scores might be influenced by other factors, such as residents’ health status. 
As a result, the specification of appropriate risk adjustment was a key requirement for the validity of any QMs. Risk adjustment is important because it provides consumers with an “apples-to-apples” comparison of nursing homes by taking into consideration the characteristics of individual residents and adjusting the QM scores accordingly. For example, a home with a disproportionate number of residents who are bedfast or who present a challenge for maintaining an adequate level of nutrition—factors that contribute to the development of pressure ulcers—may have a higher pressure ulcer score. Adjusting a home’s QM score to fairly represent to what extent a home does or does not admit such residents is important for consumers who wish to compare one home to another. Appendix II lists the 10 QMs initially adopted and publicly reported by CMS—6 applicable to residents with chronic care problems (long-stay residents) and 4 applicable to residents with post-acute-care needs (short-stay residents). MDS data are self-reported by nursing homes, and ensuring their accuracy is critical for establishing resident care plans, setting nursing home payments, and publicly reporting QM scores. In February 2002, we concluded that CMS efforts to ensure the accuracy of MDS data, which are used to calculate the QMs, were inadequate because the agency relied too much on off-site review activities by its contractor and planned to conduct on-site reviews in only 10 percent of its data accuracy assessments, representing fewer than 200 of the nation’s then approximately 17,000 nursing homes. Although we recommended that CMS reorient its review program to complement ongoing state MDS accuracy efforts as a more effective and efficient way to ensure MDS data accuracy, CMS disagreed and continued to emphasize off-site reviews. 
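To make the risk-adjustment idea concrete, the sketch below applies simple indirect standardization, a common technique used here only for illustration; CMS's actual QM risk-adjustment models are more elaborate, and the rates shown are hypothetical. A home whose resident mix puts it at higher expected risk of pressure ulcers has its observed rate scaled down accordingly:

```python
def risk_adjusted_rate(observed: float, expected: float, national: float) -> float:
    """Indirect standardization: (observed / expected) * national average.

    `expected` is the rate predicted from the home's resident mix; for
    example, a home with many bedfast residents has a higher expected
    pressure ulcer rate. (A hypothetical simplification of CMS's models.)
    """
    return (observed / expected) * national

# Home A: raw rate 12%, but its case mix predicts 15%; national average 10%.
# Home B: raw rate 10%, but its case mix predicts only 8%.
home_a = risk_adjusted_rate(0.12, 0.15, 0.10)  # 0.08
home_b = risk_adjusted_rate(0.10, 0.08, 0.10)  # 0.125
```

After adjustment, home B scores worse than home A even though its raw rate was lower, which is the apples-to-apples comparison that risk adjustment aims to provide.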
Over the past 24 years, the QIO program has evolved from a focus on quality assurance in the acute care setting to quality improvement in a broader mix of settings, including physician offices, home health agencies, and nursing homes. Established by the Peer Review Improvement Act of 1982 and originally known as Peer Review Organizations (PRO), QIOs initially focused on ensuring minimum standards by conducting retrospective hospital-based utilization reviews that looked for inappropriate or unnecessary Medicare services. According to the 2006 IOM report, as it became apparent that standards of care themselves required attention, QIOs gradually shifted from retrospective case reviews to collaboration with providers to improve the overall delivery of care—a shift consistent with transformational goals set by CMS’s Office of Clinical Standards and Quality, which oversees the QIO program. In contrast to enforcing standards, quality improvement tries to ensure that organizations have effective processes for continually measuring and improving quality. The goal of quality improvement is to close the gap between an organization’s current performance and its ideal performance, which is defined by either evidence-based research or best practices demonstrated in high-performing organizations. According to the quality improvement literature, successful quality improvement requires a commitment on the part of an organization’s leadership and active involvement of the staff. The 2006 IOM report notes that QIOs rely on various mechanisms to promote quality improvement, including one-on-one consulting and collaboratives. While the former provides direct and specialized attention, the latter relies on workshops or meetings that offer opportunities for providers to share experiences and best practices. 
Quality improvement often relies on the involvement of early adopters of best practices—providers who are highly regarded as leaders and can help convince others to change—for the diffusion of best practices. Key tools for quality improvement include (1) root cause analysis, a technique used to identify the conditions that lead to an undesired outcome; (2) instruction on how to collect, aggregate, and interpret data; and (3) guidance on bringing about, sustaining, and diffusing internal system redesign and process changes, particularly those related to use of information technology for quality improvement. Quality improvement experts also emphasize the importance of protecting the confidentiality of provider information, not only to protect the privacy of personal health information but also to encourage providers to evaluate their peers honestly and to prevent the damage to providers’ reputations that might occur through the release of erroneous information. Section 1160 of the Social Security Act provides that information collected by QIOs during the performance of their contract with CMS must be kept confidential and may not be disclosed except in specific instances; it provides the Secretary of HHS with some discretion to determine instances under which QIO information may be disclosed. The regulations implementing the statute limit the circumstances under which confidential information obtained during QIO quality review studies, including the identities of the participants of those studies, may be disclosed by the QIO. During the 7th SOW, QIOs submitted a list of nursing home participants to CMS as a contract deliverable. During the 7th SOW, CMS awarded a total of $117 million to QIOs to improve the quality of care in nursing homes in all 50 states, the District of Columbia, and the territories. 
The performance-based contracts for QIO assistance to nursing homes delineated broad expectations regarding QIO assistance to nursing homes, provided deadlines for completing four contract deliverables, and laid out criteria for evaluating QIO performance. For contracting purposes, the QIOs were divided into three groups with staggered contract cycles. The four contract deliverables, however, were all due on the same dates, irrespective of the different contract cycles. The contracts also required QIOs to work with a QIO support contractor tasked to provide guidelines for recruiting and selecting nursing homes as intensive participants, train QIOs in standard models of quality improvement assistance, and provide tools and educational materials, as well as individualized consultation if needed, to help QIOs meet contractual requirements. QIOs and nursing homes were also involved in other quality improvement special studies with budgets separate from the QIO contracts for the 7th SOW. These studies varied greatly in terms of length, the clinical issue(s) covered, the number of QIOs involved, and the characteristics of the nursing homes that participated. Figure 1 shows the 7th SOW contract cycles, deliverables for the nursing home component, and the duration of the special studies. Contract funding. The $117 million awarded to QIOs to improve the quality of care in nursing homes during the 7th SOW included (1) $106 million awarded to provide statewide and intensive assistance to homes, (2) $5.6 million awarded to selected QIOs to conduct eight special studies focused on nursing home care, and (3) $5.3 million awarded to the QIO that served as the support contractor for the nursing home component. CMS allocated a specific amount for each component of the contracts, but allowed QIOs to move funds among certain components. 
Just over half of the 51 QIOs did not spend all of the funds allocated to the nursing home component, but on average the QIOs overspent the budget for the nursing home work by 3 percent. Contract requirements for quality improvement activities. Per the contracts for the 7th SOW, QIOs were required to provide (1) all Medicare- and Medicaid-certified homes with information about systems-based approaches to improving patient care and clinical outcomes, and (2) intensive assistance to a subset of homes in each state. The contracts directed QIOs working in states with 100 or more nursing homes to target 10 to 15 percent of the homes for intensive assistance. Figure 2 illustrates that QIOs provided two levels of assistance—statewide and intensive— and that homes’ participation was either nonintensive or intensive. Intensive participants received both statewide and intensive assistance. Selection of intensive participants from among the nursing homes that volunteered was at the discretion of each QIO, but the SOW required the QIO support contractor (the Rhode Island QIO) to provide guidelines and criteria for QIOs to use in determining which homes to select. Participation in the program was voluntary, and QIOs were prohibited from releasing the names of participating nursing homes except as permitted by statute and regulation. Under the contracts, the quality improvement assistance provided by QIOs focused on areas related to eight chronic care and post-acute-care QMs publicly reported on the CMS Nursing Home Compare Web site. QIOs were required to consult with relevant stakeholders and select from three to five of the eight QMs on which QIOs’ quality improvement efforts would be evaluated (see table 1). Intensive participant homes were also required to select one or more QMs on which to work with the QIO. Although they could select one QM, they were encouraged to select more than one. 
To improve QM scores, QIOs were expected to develop and implement quality improvement projects focused on care processes known to improve patient outcomes in a manner that utilized resources efficiently and reduced burdens on providers. The QIO support contractor developed a model for QIOs to facilitate systems change in nursing homes. This model emphasized the importance of QIOs’ statewide activities to form and maintain partnerships, conduct workshops and seminars, and disseminate information on interventions to improve quality. For intensive participants, the model emphasized conducting one-on-one quality improvement assistance as well as conferences and small group meetings. According to contract language, QIOs were expected to coordinate their projects with other stakeholders that were working on similar improvement efforts or were interested in teaming with the QIO. But ultimately, each QIO determined for itself the type, level, duration, and intensity of support it would offer to nursing homes. Evaluation of QIO contract performance. CMS evaluated QIOs’ performance on the nursing home component of the contract using nursing home provider satisfaction with the QIO, QM improvement among intensive participants, and QM improvement statewide (see fig. 3). Nursing home provider satisfaction was assessed by surveying all intensive participants and a sample of nonintensive participants around the 28th month of each 36-month contract. CMS expected at least 80 percent of respondents to report that they were either satisfied or very satisfied. QIOs were also expected to achieve an 8 percent improvement in QM scores among both intensive participants and homes statewide. The term improvement was defined mathematically to mean the relative change in the QM score from when it was measured at baseline to when it was remeasured. The statewide improvement score included the QM improvement scores for intensive participants averaged with those of nonintensive participants. 
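The contract's improvement arithmetic can be sketched as follows. The direction convention (lower QM scores are better, so a decline from baseline counts as improvement) and the simple pooled averaging across homes are our assumptions for illustration, not the exact CMS specification:

```python
def relative_improvement(baseline: float, remeasured: float) -> float:
    # Relative change in a QM score from baseline to remeasurement.
    # With lower-is-better scores, a decline from 0.100 to 0.092 is
    # an 8 percent improvement.
    return (baseline - remeasured) / baseline

def statewide_improvement(intensive: list[float], nonintensive: list[float]) -> float:
    # Statewide result pools intensive participants' improvement scores
    # with nonintensive participants' (simple mean; weighting is assumed).
    scores = intensive + nonintensive
    return sum(scores) / len(scores)

# A home dropping its QM score from 0.100 to 0.092 just meets the
# 8 percent improvement expectation.
meets_target = relative_improvement(0.100, 0.092) >= 0.08
```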
CMS established two scoring thresholds for the contracts that encompassed scores from all components of the SOW. If a QIO scored above the first threshold, it was eligible for a noncompetitive contract renewal; if it scored below that threshold, it was eligible for a competitive renewal only upon providing information pertinent to its performance to a CMS-wide panel that decided whether to allow the QIO to bid again for another QIO contract. CMS contract monitoring. CMS formally evaluated each QIO at months 9 and 18 of the 7th SOW. If CMS found that a QIO failed to meet contract deliverables or appeared to be in danger of failing to meet contract goals, it could require the QIO to develop a performance improvement plan or corrective plan of action to address any barriers to the QIO’s successfully fulfilling contract requirements. In addition, CMS reviewed materials such as QIOs’ internal quality control plans, which were intended to help QIOs monitor their own progress and to document any project changes made to improve their performance. The QIO program operated in the context of other quality improvement initiatives sponsored by federal and state governments and nursing home trade associations. As stated earlier, CMS funded a number of special nursing home studies involving subsets of the QIOs and nursing homes, which addressed a variety of clinical quality-of-care issues and which are summarized in figure 1. Under CMS’s Special Focus Facility program, state survey agencies were required to conduct enhanced monitoring of nursing homes with histories of providing poor care. 
During the 7th SOW, CMS revised the method for selecting homes for the Special Focus Facility program to ensure that the homes performing most poorly were included; raised the minimum number of homes that must be included from two per state to as many as six, depending on the number of homes in the state; and strengthened enforcement for those nursing homes with an ongoing pattern of substandard care. In addition, concurrent with the 7th SOW, at least eight states had programs that provided quality assurance and technical assistance to nursing homes in their states. These programs varied in terms of whether they were voluntary or mandatory, which homes received assistance, the focus and frequency of the assistance provided, and the number and type of staff employed. In addition to government-operated quality improvement initiatives, three long-term care professional associations joined together in July 2002 to implement the Quality First Initiative. This initiative was based on a publicly articulated pledge on the part of the long-term care profession to establish an environment of continuous quality improvement, openness, and leadership in participating homes. Although QIOs generally had a choice of homes to select for intensive assistance because more homes volunteered than CMS expected QIOs to assist, QIOs typically did not target the low-performing homes that volunteered. Most QIOs reported in our Web-based survey that they did not have difficulty recruiting homes, and their primary consideration in selecting homes from the pool of volunteers was that the homes be committed to working with the QIOs. 
In the 7th SOW, CMS did not specify recruitment and selection criteria for intensive participants, leaving the development of guidelines to the QIO support contractor, which encouraged QIOs to select homes that seemed committed to quality improvement and to exclude homes with a high number of survey deficiencies, high management turnover, or QM scores that were too good to improve significantly. Our analysis of state survey data showed that, nationwide, intensive participants were less likely to be low-performing than other homes in their state in terms of the number, scope, and severity of deficiencies for which they were cited in standard surveys from 1999 through 2002. This result may reflect the nature of the homes that volunteered for assistance, the QIOs’ selection criteria, or a combination of the two. The stakeholders we interviewed—including officials of state survey agencies and nursing home trade associations—generally believed QIOs’ resources should be targeted to low-performing homes. Most QIOs had a choice of which nursing homes to assist intensively, as more homes volunteered than the QIOs could receive credit for serving under the terms of their contracts. Of the 38 QIOs in states with 100 or more homes, which were expected to work intensively with 10 to 15 percent of the homes, 30 reported in our Web-based survey that more than 15 percent of homes expressed interest in intensive assistance, and 8 reported that more than 30 percent of homes expressed interest. Most QIOs selected about as many intensive participants as needed to get the maximum weight for the intensive participant element of their contract evaluation score. Nationwide, the intensive participant group included just under 15 percent (2,471) of the 16,552 homes identified by CMS at the beginning of the 7th SOW. 
Most QIOs—82 percent of the 51 that responded to our survey—reported that it was not difficult to recruit the target number of homes for intensive assistance; the remainder reported that it was difficult (12 percent) or very difficult (4 percent) to recruit enough volunteers. Among the QIOs we interviewed, personnel at two that reported difficulties recruiting homes cited homes’ lack of familiarity with QIOs as a barrier. Personnel at one of these two QIOs commented that the QIO’s first task was to build trust among homes and address confusion about its role, as some homes thought the QIO was a regulatory authority charged with investigating complaints and citing homes for deficiencies. QIOs that responded to our Web-based survey almost uniformly cited homes’ commitment to working with them as a key consideration in choosing among the homes that volunteered to be intensive participants. QIOs had wide latitude in choosing among homes because CMS did not specify the characteristics of the homes they should recruit or select, leaving it to the QIO support contractor to provide voluntary guidelines. The QIO support contractor developed guidelines based on input from a variety of sources, including QIOs that worked with nursing homes during the 6th SOW. Issued at the beginning of the 7th SOW, the guidelines emphasized the important role the selected homes would play in the QIOs’ contract performance and encouraged QIOs to select homes that demonstrated a willingness and ability to commit time and resources to quality improvement. The QIO support contractor also encouraged QIOs to exclude homes with a high number of survey deficiencies, high management turnover, and QM scores that were too good to improve significantly. With respect to homes’ survey histories, the QIO support contractor reasoned that homes with a high number of deficiencies might be more focused on improving their survey results than on committing time and resources to quality improvement projects. 
For example, the care areas in which a home was cited for deficiencies might not correspond with any of the eight QMs to which CMS limited the QIOs’ quality improvement activities (see table 1). In fact, the quality of care area in which homes were most frequently cited for serious deficiencies in surveys in 2006 was the provision of supervision and devices to prevent accidents, which does not have a corresponding QM. Consistent with the guidelines, 76 percent of the 41 QIOs that reported in our Web-based survey their considerations in selecting homes for the intensive participant group ranked homes’ commitment as their primary consideration. Nearly all QIOs ranked commitment among their top three considerations (see fig. 4). Homes’ QM scores were also an important consideration for QIOs. QIOs were particularly interested in including homes that had poor QM scores in areas where the QIO planned to focus or in assembling a group of homes that represented a mix of QM scores. With respect to homes’ overall QM scores, the QIOs that responded to our survey were more likely to seek homes with moderate overall scores than homes with poor or good overall scores. Similarly, personnel at most QIOs we contacted gave serious consideration to homes’ QM scores, looking for homes that appeared to need help and could demonstrate improvement. For example, personnel at one QIO said that they tended to select homes whose QM scores were worse than the statewide average; personnel at another QIO said that this QIO selected homes with scores it thought could be improved, eliminating homes with either very high or very low scores. Personnel at one QIO acknowledged that some QIOs might “cherry pick” homes in this way in order to satisfy CMS contract requirements but argued that it was not possible for QIOs to predict which homes would improve the most. QIOs generally gave less consideration to the number of deficiencies homes had on state surveys than to their QM scores. 
However, the 17 QIOs that ranked survey deficiencies among their top three considerations in our survey were more likely to seek homes with deficiencies in areas where they planned to focus or homes with an overall low level (number and severity) of survey deficiencies than homes with an overall high level. Moreover, of the 33 QIOs that reported in our survey systematically excluding some of the homes that volunteered from the intensive participant group, nearly one-quarter (8) excluded homes with a high number of survey deficiencies. None excluded homes with a low number of survey deficiencies. Personnel at the QIOs we interviewed offered several reasons for excluding homes with a high number of survey deficiencies from the intensive participant group. Personnel at several QIOs concurred with the QIO support contractor that such homes were likely to be too consumed with correcting survey issues to focus on quality improvement. Personnel at one QIO suggested that the kind of assistance very poor-performing homes need—help improving the basic underlying structures of operation—was not the kind the QIO offered. Personnel at some QIOs said they considered not just the level of deficiencies for which homes were cited on recent surveys but the level over multiple years or the specific categories of deficiencies. For example, personnel at one QIO said that although the QIO excluded homes with long-standing histories of poor performance, it actively recruited homes that had performed poorly only on recent surveys. Personnel at another QIO stated that their concern was to avoid homes with competing priorities. This QIO sought to include homes with deficiencies in the areas it planned to address but to exclude homes with deficiencies in other areas on the assumption that these homes would not benefit from the assistance it planned to offer. 
Personnel we interviewed at two QIOs said that they worked with some extremely poor-performing homes but did not include them on the official list of intensive participants submitted to CMS; personnel at one of these QIOs explained that they did not want to be held responsible if these homes were unable to improve. Our analysis of homes’ state survey histories from 1999 through 2002 indicates that QIOs did not target intensive assistance to homes that had performed poorly in state surveys. Nationwide, the homes in the intensive participant group were less likely than other homes in their state to be low-performing in terms of the number, scope, and severity of deficiencies for which they were cited in surveys during that time frame. As illustrated in figure 5, the intensive participant group included proportionately more homes in the middle of the performance spectrum and proportionately fewer at either end. Although our analysis focused on survey deficiencies rather than QMs, this result is generally consistent with the results of our Web-based survey concerning QIOs’ use of QM scores as selection criteria, which showed that QIOs were more likely to select homes with moderate overall scores than homes with poor or good overall scores and to seek a mix of performance levels among homes in the group. However, because we do not know the composition of the pool of homes that volunteered for assistance, we cannot determine whether the composition of the intensive participant group—in particular, the disproportionately low number of low-performing homes in the group—was a function of which homes volunteered, which homes the QIOs selected from among the volunteers, or a combination of both factors. On a state-by-state basis, none of the QIOs targeted assistance to low-performing homes by including proportionately more such homes in the intensive participant group. 
Most QIOs (33 of 51) worked intensively with homes that were generally representative of the range of homes in their state in terms of performance on state surveys from 1999 through 2002. In these states, there was no significant difference in the proportion of high-performing, moderately performing, or low-performing homes among intensive participants compared with nonintensive participants. However, 18 QIOs worked intensively with a group that differed significantly from other homes in the state: 8 of these QIOs worked with proportionately fewer low-performing homes, 5 worked with proportionately more moderately performing homes, and 9 worked with proportionately fewer high-performing homes. Stakeholders we interviewed who expressed an opinion about the homes QIOs should target for intensive assistance—11 of the 16 we interviewed—almost uniformly said that the QIOs should concentrate on low-performing homes. Survey officials in one state suggested that QIOs should use state survey data to assess homes’ need for assistance because these data are often more current than QM data. In their emphasis on low-performing homes, stakeholders echoed the views expressed in the 2006 IOM report, which recommended that QIOs give priority for assistance to providers, including nursing homes, that most need improvement. Other stakeholder suggestions regarding the homes QIOs should target are listed in table 2. Because the QIOs were required to protect the confidentiality of QIO information about nursing homes that agreed to work with them, stakeholders were generally not informed which homes were receiving intensive assistance. One exception was in Iowa, where the QIO obtained consent from the selected homes to reveal their identities. Several stakeholders said that low-performing homes can improve with assistance. However, one suggested that QIOs might have to adapt their approach—for example, by streamlining their training—to avoid overburdening homes that are struggling with competing demands. 
Another agreed that low-performing homes can benefit from working with a QIO but added that real improvements in the quality of care in these homes would require attention to staffing, turnover, pay, and recognition for staff. The results of one special study funded by CMS during the time frame of the 7th SOW supported stakeholders’ contention that low-performing homes can improve, although the improvements documented in these homes cannot be definitively attributed to the QIOs. In this study, known as the Collaborative Focus Facility project, 17 QIOs worked intensively with one to five low-performing homes identified in consultation with the state survey agency. According to a QIO assessment of the project, the participating homes showed improvement in areas related to the assistance provided by the QIO in terms of both the number of serious state survey deficiencies for which they were cited and their QM scores. CMS officials pointed out that these improvements were hard-won: one-third of the homes that were asked to participate in the Collaborative Focus Facility project refused, and those that did participate required more effort and resources from the QIOs to improve than did other homes assisted by the QIOs. Overall, CMS has specifically directed only a small share of QIO resources to low-performing homes. In the current contracts (the 8th SOW), CMS required QIOs to provide intensive assistance to some “persistently poor-performing homes” identified in consultation with each state survey agency. However, the number of such homes that the QIOs must serve is small—ranging from one to three, depending on the number of nursing homes in the state—and accounts for less than 10 percent of the homes the QIOs are expected to assist intensively. Less than 17 percent of the 144 persistently poor-performing homes the QIOs selected in consultation with state survey agencies to assist in the 8th SOW were also special focus facilities in 2005 or 2006. 
QIOs and stakeholders tended to disagree about whether participation in the program should remain voluntary for all homes. QIO personnel we interviewed who expressed an opinion generally supported voluntary participation on the theory that homes that were forced to participate would probably be less engaged and put forth only minimal effort. Personnel at some QIOs that opposed mandatory participation suggested that creating incentives for homes to improve their quality of care—for example, through pay for performance—would increase homes’ interest in working with the QIO. In contrast, most of the state survey agency and trade association officials we interviewed who expressed an opinion about the voluntary nature of the QIO program said that some homes should be required to work with the QIO. Officials at one state survey agency pointed out that the low-performing homes that really need assistance rarely seek it; these officials believed that working with the QIO should be mandatory for low-performing homes and voluntary for moderately to high-performing homes. Another state survey agency official recommended that 25 to 40 percent of the homes assisted intensively be chosen from among the lower-performing homes in the state and required to work with the QIO. The 7th SOW contracts allowed QIOs flexibility in the QMs they focused on and the interventions they used, and while the majority of QIOs selected the same QMs and most used the same interventions to assist homes statewide, the interventions for intensive participants and staffing to accomplish program goals varied. Most QIOs and intensive participants worked on the chronic pain and pressure ulcer QMs, but these were not the QMs that some intensive participants believed matched their greatest quality-of-care challenges. To assist all homes statewide, QIOs generally relied on conferences and the distribution of educational materials. 
The top three interventions for intensive participants included on-site visits (87 percent), followed by conferences (57 percent), and small group meetings (48 percent). According to nursing home staff we interviewed, turnover and experience levels of the QIO personnel that provided them assistance affected their satisfaction with the program and the extent of their quality improvements. Under the terms of the contracts, both QIOs and intensive participants could select QMs to focus on, but most chose to work on two of the same QMs. While nearly all QIOs chose to work statewide on chronic pain and pressure ulcers, they differed on their selection of additional QMs (see fig. 6). QIO personnel we interviewed told us they based the choice of QMs for their statewide work on input from stakeholders and nursing homes or QM data. For example, some stakeholders told us that specific QMs selected addressed existing long-term care challenges and were ones on which homes in the state ranked below the national average. Personnel from two QIOs said they selected QMs based on input from homes in their state about which QMs the homes were interested in working on, and personnel from several QIOs stated that they selected QMs on which their homes could improve. Personnel from one QIO specifically mentioned that they selected QMs related to the quality of life for nursing home residents. Most intensive participants worked on a subset of the QMs selected by their QIO—chronic pain and pressure ulcers (see fig. 6). The degree to which intensive participants knew they had a choice of QMs was unclear. Of the 14 intensive participants we interviewed that commented on whether they had a choice, 9 said that they did. Staff from these homes generally reported having selected QMs related to clinical issues on which they could improve. However, the remaining 5 homes indicated that their QIO selected the QMs on which they received assistance. 
Most of these 5 homes’ staff reported that they would have preferred to work on different QMs from the list of eight that are publicly reported on the CMS Nursing Home Compare Web site or other clinical issues that reflect their greatest quality-of-care challenges. The terms of the QIO contract with CMS allowed QIOs to determine the kinds of quality improvement interventions they offered to homes, and those selected by QIOs were consistent with an approach recommended by the QIO support contractor: QIOs generally relied most on conferences and the distribution of educational materials to assist homes statewide and on on-site visits to assist intensive participants. However, there was a greater variety of interventions frequently relied on to assist intensive participants. In general, QIOs reported that the interventions they relied on most were also the most effective for improving the quality of resident care. Almost three-quarters of the QIOs included conferences among the two interventions they relied on most to provide quality improvement assistance to homes statewide (see fig. 7). These QIOs held an average of nine conferences over the course of the 7th SOW, typically in various cities throughout the state to accommodate homes from different regions. Sixty-eight percent of these QIOs reported that more than half the homes in their state sent staff to at least one conference, and 16 percent of QIOs reported that all or nearly all homes did so. QIO personnel reported holding conferences to educate homes on quality improvement, discuss the relationship between MDS assessments and the QMs, and provide QM-specific clinical information or best practices. Some conferences included presentations by state or national experts. Almost three-quarters of QIOs also ranked the distribution of educational materials by mail, fax, or e-mail among their top two statewide interventions. 
Thirty-two percent of these QIOs sent materials four or fewer times per year, whereas 27 percent sent materials 12 or more times per year to all or nearly all homes in the state. For the QIOs we interviewed, these materials included newsletters, QM-specific tools or clinical information related to the QMs, and QM data progress reports for the home or the state overall. Almost one-third of the QIOs (31 percent) reported that the type or intensity of interventions they used to assist homes statewide changed over the course of the 7th SOW. For example, two QIOs reported that they concentrated much of their statewide efforts into the first half of the 3-year period; one QIO specifically reported doing so in the interest of ensuring that any improvements in QMs were reflected in its evaluation scores, which, as specified by the contract, were calculated near the midpoint of the contract cycle. In contrast, five other QIOs reported that they increased the intensity of their statewide work over time, in some cases concentrating on homes whose performance was lagging. For the 8th SOW, CMS has focused resources on assistance to intensive participants by eliminating expectations for improvements in QMs statewide. However, the contracts still contain statewide elements, including a requirement to promote QM target-setting. Fifty-one percent of QIOs ranked on-site visits as their most relied on intervention with intensive participants, and 87 percent ranked it among their top three interventions (see fig. 8). Both the number of visits and the time spent at sites varied considerably. The median number of visits was 5 but ranged from 1 to 20. Sixty-eight percent of QIOs that included on-site visits among their top three interventions spent an average of 1 to 2 hours at sites each time they visited, while 20 percent spent 3 to 4 hours.
QIOs that ranked on-site visits as their number one intervention made more and longer visits to intensive participants than did QIOs that ranked them lower. When surveyed about a typical on-site visit, the majority of QIO respondents reported that they generally reviewed the homes’ QM data, provided education or best practices, or both. Approximately one-third of QIOs that conducted site visits indicated that they had discussions with homes about their systems or processes for care, homework assignments, or quality improvement activities. Some QIOs (26 percent) reported that they conducted team-building exercises with the staff when on site. QIOs varied in the interventions they used in addition to on-site visits, with conferences, small group meetings emphasizing peer-to-peer learning, and telephone calls being the three others most commonly used. QIOs that included conferences among their three most relied on interventions typically held between 3 and 10 conferences during the 7th SOW, but as with site visits, some variation existed. After conferences, QIOs were most likely to rely on small group meetings and telephone calls with individual homes. Nearly half of the QIOs ranked these two interventions among their three most relied on, but few ranked them highest. The number of homes that attended small group meetings varied. An average of 6 to 10 homes was most common, but one-fifth of QIOs reported having an average of 20 or more homes represented at each meeting. As for telephone calls, the vast majority of QIOs (92 percent) that ranked these calls among their three most relied on interventions called all or nearly all of their intensive participants, typically on a monthly basis. Our interviews with QIOs and intensive participant homes suggested that the small group meetings they held generally followed a similar format, while telephone calls were used for a variety of purposes.
For example, personnel from several QIOs and intensive participant homes told us that their small group meetings generally included a formal presentation on the QMs or related best practices, as well as time for less formal information sharing and peer-to-peer learning among the attendees. Participants shared stories about their successes and challenges in conducting quality improvement. Personnel from a number of QIOs told us they used telephone calls to follow up after visits or meetings, discuss the homes’ progress on quality improvement, and decide on next steps. Almost two-thirds of QIOs indicated that the type or intensity of interventions for intensive participants varied over time. Of these QIOs, 36 percent reduced the intensity of their interventions (substituting small group meetings or telephone calls for on-site visits), while 33 percent did the reverse (in some cases increasing the frequency of on-site visits or substituting small group meetings for conferences to increase participation). For example, personnel from a few QIOs told us that while they initially relied on on-site visits to begin the quality improvement process, they came to rely more on telephone calls or on small group meetings where intensive participants could share their success stories or ways to overcome barriers to quality improvement. Seventy-nine percent of QIOs surveyed varied their interventions based on the needs of intensive participants. For instance, personnel from three QIOs told us they realized that some homes did not need frequent on-site visits, while others needed more. The two specific needs that QIOs cited most as having precipitated changes in their interventions were nursing home staffing changes and turnover (23 percent) and poorer performance by some homes relative to others (15 percent). A few QIOs also noted that interventions varied by the preferences or levels of readiness and participation of the homes with which they were working.
Most QIOs we surveyed deemed conferences the most effective statewide intervention and on-site visits the most effective intensive intervention; intensive participant homes we interviewed also found these interventions valuable. For homes statewide, most QIOs (54 percent) reported that conferences were their most effective intervention, followed by distribution of educational materials and on-site visits. Of the one-quarter of QIOs that reported they would change their statewide approach, the largest proportion (46 percent) would make conferences their primary intervention. Staff from several nursing homes we interviewed tended to concur that conferences were valuable aspects of the program because conferences included expert presenters, energized or motivated attendees, and were free. For intensive participants, most QIOs (63 percent) deemed on-site visits their most effective intervention, followed by conferences and small group meetings. Of the 15 QIOs that said they would change their approach with these homes, most (60 percent) would make on-site visits their primary intervention, while fewer would rely on small group meetings, conferences, and other interventions. One QIO began conducting on-site visits and small group meetings when it became apparent that telephone calls were less productive than had been anticipated because of the difficulty of getting the right staff on the telephone at the right time, the lack of speaker phones at many homes, and the lack of staff engagement on the phone. Staff from a number of nursing homes we interviewed agreed that visits by QIO personnel were helpful. Some homes indicated that having someone from the QIO visit the home maximized the number of staff that could take advantage of the quality improvement training offered. Furthermore, the on-site visits were motivating and kept staff on track with quality improvement efforts. 
Regarding small group meetings, staff we interviewed from a few homes stated that meeting with staff from other homes helped validate their own efforts or facilitated the sharing of materials and experiences. Staff from one nursing home specifically reported that they were disappointed not to have formally participated in small group meetings with other facilities in the state. Homes also found particular types of assistance less helpful. Some homes’ staff reported that they did not feel they had the time or the staff necessary to complete some of the homework assignments expected of them, such as conducting chart reviews. Staff at some homes stated that the QIO provided quality improvement information with which they were already familiar. Our interviews with nursing home staff who worked intensively with the QIOs indicated that homes’ satisfaction with the program was influenced by the training and experience of the primary QIO personnel who served as their principal contact with the QIOs, as well as by turnover among these individuals during the course of the 7th SOW. When a home’s principal contact with the QIO was a nurse or someone with long-term care or quality improvement experience, nursing home staff tended to report that this person possessed the knowledge and skills necessary to help them improve the quality of care in their home. Interviewees also spoke appreciatively of QIO personnel who were knowledgeable and motivating and who kept them on track with their efforts. However, when the QIO principal contact lacked these qualifications or characteristics, he or she was perceived as unable to effectively address clinical topics with staff. Staff at one home said explicitly that working with an experienced nurse, instead of a social worker who seemed to lack knowledge of long-term care, would have led to greater improvement in clinical quality. The extent to which QIO primary personnel had the training or experience that homes considered important varied.
More than half (58 percent) of the primary QIO personnel who worked with nursing homes during the 7th SOW were trained in nursing, and 42 percent held an advanced degree. Nationwide, 27 percent of the primary personnel who worked with nursing homes had less than 1 year of long-term care experience, while 30 percent had more than 10 years of such experience. Just over half of primary QIO personnel (54 percent) working with nursing homes had 4 or fewer years of quality improvement experience. Nine percent of QIO personnel had more than 10 years’ experience in both long-term care and quality improvement. Few of the personnel working with nursing homes during the 7th SOW gained any of their experience working for the QIO during the 6th SOW because there was little overlap in personnel across the two periods. Our interviews with intensive participants suggested that turnover among primary QIO personnel lowered nursing homes’ satisfaction with the program. Our survey revealed that turnover was particularly high at some QIOs. At 24 QIOs, 25 percent or more of primary personnel who worked with nursing homes did so for less than half of the 36-month contract, and at 6 QIOs, the proportion was 50 percent or more. When a nursing home’s principal contact with a QIO changed frequently, nursing home staff we interviewed reported that they received inconsistent assistance that was disruptive to their efforts to improve quality of care. For example, one nursing home we visited had four different principal contacts over the course of the 7th SOW and found this to be frustrating because, just as they were establishing a relationship with a contact, the contact would leave. Staff at another home complained that their interaction with QIO primary personnel turned out not to be the learning experience that the staff thought it would be. Staffing levels for the nursing home component also varied among QIOs. 
As would be expected, given the wide variation in the number of nursing homes per state, the number of full-time-equivalent (FTE) staff working with nursing homes varied across the QIOs, ranging from 0.50 to 12. However, the ratio of QIO staff FTEs to intensive participant homes also showed significant variation. On average, the ratio was about 1 to 14, but for at least 9 QIOs, the ratio of staff FTEs to homes was 1 to 10 or fewer, and for at least 8 QIOs, the ratio was 1 to 18 or more. Although the QIOs’ impact on the quality of nursing home care cannot be determined from available data, staff we interviewed at most nursing homes attributed some improvements in the quality of resident care to their work with the QIOs. Nursing homes’ QM scores generally improved enough for the QIOs to surpass by a wide margin the modest contract performance targets set by CMS; however, the overall impact of the QIOs on the quality of nursing home care cannot be determined from these data because of the shortcomings of the QMs as measures of nursing home quality and because confounding factors make it difficult to attribute quality improvements solely to the QIOs. Multiple long-term care professionals we interviewed indicated that QMs should not be used in isolation to measure quality improvement but should be combined with other indicators, such as state survey data. Moreover, the effectiveness of the individual interventions QIOs used to assist homes also cannot be evaluated with the available data. CMS planned to enhance evaluation of the program during the 8th SOW, but a 2005 determination by HHS’s Office of General Counsel that the QIO program regulations prohibit QIOs from providing to CMS the identities of the homes they are assisting has hampered the agency’s efforts to collect the necessary data.
Although the impact of the QIOs on the quality of nursing home care is not known, over two-thirds of the 32 nursing homes we interviewed attributed some improvements in care to their work with the QIOs. Although all of the QIOs met the modest targets CMS set for QM improvement among homes both statewide and in the intensive participant group, the impact of the QIOs on the quality of nursing home care cannot be determined because of the limitations of the QMs and because improvements cannot be definitively attributed to the QIOs. The effectiveness of the specific interventions used by the QIOs to assist homes also cannot be evaluated with the available data. All QIOs met the CMS performance targets for the nursing home component of the 7th SOW. In addition to receiving an overall passing score for this component, nearly all QIOs surpassed expectations for each of the three elements that contributed to the overall score: provider satisfaction, improvement in QM scores among intensive participants, and improvement in QM scores among homes statewide. In fact, about two-thirds of the QIOs achieved at least five times the expected 8 percent improvement among intensive participants, and nearly half achieved at least twice the expected 8 percent improvement statewide. CMS officials stated that the targets set for the nursing home component of the contract were purposely modest. Because the 7th SOW marked the first time all QIOs were required to work with nursing homes on quality improvement, and little data existed to predict how much improvement could be expected, CMS deliberately designed performance criteria to limit QIOs’ chances of failing. For example, expectations for improvements in QM scores were set no higher for intensive participants than for homes statewide. In addition, CMS modified the evaluation plan so that if an intensive participant worked on more than one QM, the QM that improved least was dropped before the home’s average improvement was calculated.
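The drop-the-lowest averaging rule used in the 7th SOW evaluation plan can be illustrated with a brief sketch. This is a simplified illustration of the rule as described above, not CMS's actual scoring code; the function name and the sample improvement rates are hypothetical.

```python
def average_improvement(qm_improvements):
    """Average a home's QM improvement rates under the 7th SOW rule:
    if the home worked on more than one QM, the QM that improved
    least is dropped before the average is calculated."""
    scores = sorted(qm_improvements)
    if len(scores) > 1:
        scores = scores[1:]  # drop the least-improved QM
    return sum(scores) / len(scores)

# A hypothetical home that worked on three QMs, with improvement
# rates of 20 percent, 10 percent, and -5 percent (a decline):
# the -5 percent QM is dropped, so the average reflects only the
# two better results.
print(average_improvement([0.20, 0.10, -0.05]))
```

The rule illustrates why the 7th SOW targets were forgiving: a home that worsened on one of several QMs could still post an average improvement well above the 8 percent target.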
CMS officials told us that, based in large part on QIOs’ performance in the 7th SOW, the agency raised its expectations for the 8th SOW. For example, QIOs are required to work with most intensive participants on four specified QMs and to achieve an improvement rate of 15 to 60 percent, depending on the QM and the homes’ baseline scores. In addition, CMS will no longer drop the QM that improved least when calculating homes’ average improvement. Long-term care experts we interviewed generally agreed that CMS’s use of QMs to evaluate nursing home quality—and, by extension, QIOs’ performance—is problematic because of unresolved issues related to the QMs and the MDS data used to calculate them. QMs. As we reported in 2002, the validity of the QMs CMS proposed to publicly report in November 2002 was unclear. Although the validation study commissioned by CMS found that most of the publicly reported QMs were valid and reflected the quality of care delivered by facilities, long-term care experts have criticized the study on several grounds. For example, a 2005 report concluded that (1) the statistical criteria for the validity assessments were not stringent and (2) the researchers did not attempt to determine whether QMs were associated with quality of care at the resident level. As a result, it is not clear whether a resident who triggers a QM (e.g., is assessed as having his or her pain managed inadequately) is actually receiving poor care. The lack of correlation among the QMs—a home may score well on some QMs and poorly on others—also calls into question their validity as measures of overall quality. Since 2002, CMS has removed or replaced 5 of the original 10 QMs—including some of those on which the QIOs were evaluated during the 7th SOW—to address limitations in the QMs, such as reliability and measurement problems. (See app. II for a list of the QMs as of November 2002 and February 2007.) Risk adjustment also affects the validity of QMs.
There is general recognition that some QMs should be adjusted to account for the characteristics of residents. However, there is disagreement about which QMs to adjust, what risk factors should be used, and how the adjustment should be made. For example, one expert we interviewed suggested that in many cases pressure ulcers start in hospitals; the pressure ulcer QM does not account for the origin of ulcers. Another expert highlighted the difficulty of making an appropriate adjustment—noting, for example, that improperly risk-adjusting the pressure ulcer QM could mask poor care that contributed to the development of ulcers. MDS. We have also previously reported concerns about MDS reliability—that is, the consistency with which homes conduct and code the assessments used to calculate the QMs. CMS awarded a contract for an MDS accuracy review program in 2001 but revamped the program in 2005, near the end of the QIOs’ 7th SOW, acknowledging weaknesses—mainly its reliance on off-site, rather than on-site, accuracy and verification reviews—that we had previously identified. Some states that sponsor on-site MDS accuracy reviews continue to report troubling rates of errors in the data. For example, officials of Iowa’s program reported an average MDS error rate of approximately 24 percent in 2005. Our interviews with long-term care experts and nursing home staff suggested that the chronic pain QM—which was selected as a focus of quality improvement work by many QIOs and intensive participant nursing homes—may be particularly vulnerable to error in the underlying MDS data. Possible sources of error are systematic differences in the extent to which facilities identify and assess residents in pain and misunderstandings about how to accurately code MDS questions specific to pain.
For example, staff from two nursing homes told us that their pain management QM scores improved after staff realized that they had been mistakenly coding residents as having pain even though their pain was successfully managed. Moreover, experts we interviewed noted that higher-quality homes may have worse pain QM scores because they do a better job of identifying and reporting pain in residents. The use of MDS data to measure the quality of care in nursing homes is also problematic because the MDS was not designed as a quality measurement tool and does not reflect advances in clinical practice. CMS is updating the MDS now to address these limitations. For example, instead of asking homes to classify the severity of a pressure ulcer on the basis of a four-stage system, the draft MDS now under review includes a measurement tool intended to more accurately classify the severity of a pressure ulcer. In addition, facilities are asked to indicate whether the pressure ulcer developed at the facility or during a hospitalization. CMS does not yet have an official release date for the revised MDS but anticipates that all validation and reliability testing will be completed by December 2007. Other Measures of Quality. Multiple long-term care professionals we interviewed, including stakeholders and experts on quality measurement, recommended both that the QMs undergo continued refinement and that they not be used in isolation to assess the quality of care in nursing homes. They suggested a number of other sources of information as alternatives or complements to QMs for measuring quality. 
For example, a representative of the National Quality Forum (NQF), a group with which CMS contracted to provide recommendations on quality measures for public reporting, stated that experts do not consider the QMs sufficient in themselves to rate homes and that the other quality markers—such as perceptions of care by family members, residents, and staff; state survey data; and resident complaints—also provide useful information about quality of care. Other long-term care professionals we interviewed suggested these and other measures, including nursing home staffing levels and staff turnover and retention rates. Factors such as the existence of other quality improvement efforts make it difficult to evaluate QIOs’ work with nursing homes and attribute quality improvement solely to QIOs. In an assessment of the QIO program during the 7th SOW, CMS and QIO officials acknowledged this difficulty. The assessment found that intensive participants improved more than nonintensive participants on all five QMs studied, and for each QM, intensive participants that worked on the QM improved more than intensive participants that did not. However, the authors noted that these results could not be definitively attributed to the efforts of the QIOs because improvements may have been influenced by a variety of factors, including preexisting differences between intensive participants and nonintensive participants; public reporting of the QMs, which may have focused homes’ attention on improving these measures; and other quality improvement efforts to which homes may have been exposed. As noted earlier in this report, these other efforts included, but were not necessarily limited to, initiatives sponsored by state governments, nursing home trade associations, and CMS.
While these other efforts varied considerably in the intensity of technical assistance offered—ranging from a trade association-sponsored program that homes characterized as essentially signing a quality improvement pledge, to state-sponsored programs that involved on-site visits by experienced long-term care nurses who provided best-practice guidelines, educational materials, and clinical tools—the fact that the efforts were present made it impossible to attribute quality improvements solely to the QIOs. In its 2006 report on all aspects of the QIO program, IOM highlighted similar shortcomings in previous studies of the QIO program and called for more systematic and rigorous evaluations. IOM concluded that although the QIOs may have contributed to improvements in the quality of care, the existing evidence was inadequate to determine the extent of their contribution. In its response to the IOM study, CMS acknowledged the need to strengthen its methods of evaluating the program and outlined plans to convene an evaluation expert advisory panel to make recommendations on the framework for the next contracts (the 9th SOW, which will begin in 2008). CMS also stated that it will collect information during the 8th SOW that will allow it to control for differences in motivation between intensive and nonintensive participants but did not specify the nature of this information. Subsequently, HHS’s Office of General Counsel determined that QIO program regulations prohibited QIOs from providing to CMS the identities of intensive participants. CMS officials acknowledged that this prohibition posed a considerable challenge to their evaluation plans and said that as a short-term solution the agency might contract with one of the QIOs to evaluate the program, with the possible stipulation that the findings be verified by an independent auditor.
CMS collected little information about the specific interventions QIOs used to assist nursing homes and acknowledged that the information it did have was not sufficiently comprehensive or consistent to be used to evaluate the interventions’ effectiveness. In general, CMS’s oversight of QIOs’ work on the nursing home component consisted of ensuring that the QIOs produced the reports and deliverables specified in the contracts and appeared on track to meet performance targets. CMS’s primary source of data about QIOs’ interventions was the monthly activity reports the QIOs were required to submit through the Program Activity Reporting Tool (PARTner). In these reports, QIOs were to document the specific interventions they provided to each home, using such activity codes as “on-site support” and “stand-alone workshops on quality improvement.” However, with only seven activity codes for QIOs to choose from, the level of detail in these reports was low. For example, the same code would be used for a full-day visit as for an hour visit. Moreover, because QIOs were not expected to enter any code more than once per month for a home, a code for on-site support could indicate a single visit or multiple visits. The system also captured no information about the content of visits or other interventions. From the perspective of the QIOs, the system was of limited use: More than half of the 52 QIOs surveyed by IOM rated PARTner fair or poor in terms of both value and ease of use. Staff at one QIO we interviewed reported using tracking systems they developed themselves, rather than PARTner, to monitor their work. CMS regional offices and the nursing home satisfaction survey gathered some additional information about the interventions used by QIOs. 
The CMS regional offices gathered information through telephone calls and visits to the QIOs and by participating in quarterly conference calls during which QIOs and CMS regional and central offices discussed issues related to the nursing home component of the contract. The regional office staff also reviewed information entered into the PARTner data system by QIOs, but they focused their evaluations on QIO contract compliance and not on the effectiveness of specific interventions because—as some regional staff emphasized—the contracts were performance-based, and therefore it was not their place to “micromanage” the QIOs or to advocate for or against specific interventions. Feedback from nursing homes was gathered through the nursing home satisfaction survey, conducted after the midpoint of the contract cycle by a contractor for CMS. The survey collected information about the frequency of, and homes’ satisfaction with, a range of interventions, including on-site visits, training workshops, one-on-one telephone calls, conference calls, one-to-one e-mails, and broadcast e-mails. However, the survey collected no information about the content of these interventions or the aspects that contributed to providers’ satisfaction or dissatisfaction. In its 2006 report on the QIO program, IOM emphasized the need for CMS to gather more information about specific interventions and noted that CMS was uniquely positioned to determine which interventions lead to high levels of quality improvement. The agency responded that it will collect information during the 8th SOW to better explore the relationship between the intensity of assistance provided by the QIO and the level of improvement, but did not specify the type of information it will collect. As of March 2007, CMS had not yet implemented a revamped PARTner system. 
In addition, the agency cancelled its plans to conduct an initial survey of nursing homes early in the contract period and now plans to conduct only one, later in the contract period. CMS officials explained that the delay and cancellation were due at least in part to the determination by HHS’s Office of General Counsel that QIOs could not provide the identities of intensive participants to CMS. Although the impact of the QIOs on the overall quality of nursing home care cannot be determined, staff we interviewed at over two-thirds of the 32 nursing homes stated that they improved the care delivered to residents as a result of working intensively with the QIOs. Staff at 23 of the 32 homes told us that they implemented new, or made changes to existing, policies and procedures related to pain or pressure ulcers. Of the 23 nursing homes, staff from 21 stated that they changed the way they addressed resident pain. In general, these changes involved implementing pain scales or new assessment forms. Staff at some facilities noted that working with the QIO heightened staff awareness of resident pain, including awareness of cultural differences in the expression of pain. Staff at 8 of the 23 nursing homes stated that they changed the way they addressed pressure ulcers. In general, these 8 homes implemented new assessment tools, changed assessment plans, or revised facility policies using materials provided by the QIO. (Table 3 provides examples of resident care improvements related to pain assessment and treatment and pressure ulcers.) Staff at 13 of the 32 nursing homes stated that the changes they made as a result of working with the QIOs were sustainable, but staff from some nursing homes noted that staffing turnover at their facilities could affect sustainability. Of the 32 nursing homes we contacted, staff from 4 specifically stated that working with the QIO did not change their quality of care.
For example, staff from one home stated that the QIO did not offer their facility any new or helpful information and did not offer feedback on how the facility’s processes could improve. Staff from another home reported that the information provided by the QIO was on techniques their facility had already implemented. Staff at a third home noted that while the QIO was a good resource, the home could have done as much on its own, without assistance from the QIO. Staff at three facilities, none of which reported making any policy or procedural changes, said the facilities experienced worse survey results while working with their QIO; staff from two of the three reported being cited for quality deficiencies in the specific areas they had been addressing with the QIO. Staff at one of these facilities believed they were cited because their work with the QIO had made the surveyor more aware of the facility’s problems in this area. Although it is difficult to evaluate the impact of QIO assistance, the QIO program does have the potential to help improve the quality of nursing home care. CMS program improvements for the 8th SOW, such as the agency’s decision to focus resources on intensive rather than statewide assistance and its plans to improve evaluation, are positive steps that could result in more effective use of available funds and provide more insight into the program’s impact. Our evaluation of assistance provided during the 7th SOW, however, raised two major questions about the future focus, oversight, and evaluation of the QIO program, which we address below. Given the available resources, which homes and quality-of-care areas should CMS direct QIOs to target for intensive assistance? We found that QIOs generally did not target intensive assistance to homes that performed poorly in state surveys, partly because of concerns about the willingness and ability of such homes to simultaneously focus on quality improvement and cooperate with the QIOs. 
However, the Collaborative Focus Facility project during the 7th SOW demonstrated that low-performing homes could improve their survey results and QM scores; subsequently, CMS required that during the 8th SOW each QIO work with up to three such homes—about 10 percent of the total number that QIOs are expected to assist intensively. Stakeholders we interviewed believed that even more emphasis should be placed on assisting low-performing homes. We found that there was little overlap between homes that participated in the QIO Collaborative Focus Facility project and in CMS’s Special Focus Facility program, which involves about 130 nursing homes nationwide that, on the basis of their survey results, receive increased scrutiny and enforcement by state survey agencies. The limited overlap suggests that each state has more than three low-performing facilities that could benefit from QIO assistance. Targeting assistance to low-performing homes could pose challenges given the voluntary nature of the program—homes must agree to work with a QIO. QIOs maintain that voluntary participation is critical to ensuring homes’ commitment to the program. However, the risk in this approach is that some of the homes that need help most will not get it. Indeed, in the Collaborative Focus Facility project, some of the low-performing homes that were asked to participate refused QIO assistance. In addition, QIOs expended more resources working to improve these low-performing homes than were required to assist better-performing homes. Thus, increasing the number of low-performing homes QIOs are required to assist above the small number mandated for the 8th SOW might necessitate decreasing the total number of homes assisted. However, existing resources might be maximized if QIOs worked with each home only on the quality-of-care areas that pose particular challenges for that home.
Could interim steps be taken to improve oversight and evaluation of QIOs’ work with nursing homes before the contracting cycle that begins in August 2008? Currently, CMS collects data primarily on QIO outcomes—specifically, changes in QM scores—and costs. CMS needs more detailed data, particularly about the type and intensity of interventions used to assist nursing homes, to improve its oversight and evaluation of the QIO program. Without such data, CMS cannot hold QIOs fully accountable for their performance under their contract with CMS. Some evaluation activities are now scaled back or on hold because HHS determined early in the 8th SOW that program regulations prohibited the QIOs from providing to CMS the identities of the intensive participants. Such a firewall presents a major impediment to improved oversight and evaluation of the QIO program and prevented CMS from implementing interim changes it planned to make. For example, for the 7th SOW, CMS contracted for one nursing home satisfaction survey to be conducted near the end of the contract period—too late to be of use in interim monitoring of the QIOs’ performance. For the 8th SOW, CMS had planned to contract for two surveys but was forced to cancel the one planned for early in the contract period because it was unable to provide the names of intensive participants to its survey contractor. Moreover, the lack of these data would preclude CMS from independently verifying QIO compliance with such contract requirements as the geographic dispersion of intensive participants in each state. CMS evaluated QIOs’ work with nursing homes primarily on the basis of changes in QM scores; given the weaknesses of QM data, the current reliance on these data appears unwarranted. While CMS actions to improve the MDS instrument as a quality measurement tool are important, the agency has not yet established an implementation date. 
Although many long-term care professionals believe that multiple indicators of quality, including deficiencies on homes’ standard and complaint surveys and residents’ and family members’ satisfaction with care, should be used to measure quality improvement, CMS is not currently drawing on these data sources to evaluate QIOs’ efforts. Recognized shortcomings in these other data sources—such as the understatement of survey deficiencies by state surveyors—underscore the importance of using multiple data sources to evaluate QIO outcomes. To ensure that available resources are better targeted to the nursing homes and quality-of-care areas most in need of improvement, we recommend that the Administrator of CMS take the following two actions: Further increase the number of low-performing homes that QIOs assist intensively. Direct QIOs to focus intensive assistance on those quality-of-care areas on which homes most need improvement. To improve monitoring of QIO assistance to nursing homes and to overcome limitations of the QMs as an evaluation tool, we recommend that the Administrator of CMS take the following three actions: Revise the QIO program regulations to require QIOs to provide to CMS the identities of the nursing homes they are assisting in order to facilitate evaluation. Collect more complete and detailed data on the interventions QIOs are using to assist homes. Identify a broader spectrum of measures than QMs to evaluate changes in nursing home quality. We obtained written comments from CMS on our draft report. CMS addressed three of our five recommendations. It concurred with two of the three recommendations but did not specify how it would implement them, and it continues to explore options for implementing the third recommendation. Our evaluation of CMS’s comments follows the order we presented each recommendation in the report. CMS’s comments are included in app. III. Further increase the number of low-performing homes that QIOs assist intensively. 
CMS agreed with this recommendation but did not specify a time frame for addressing it or indicate how many low-performing homes it will expect QIOs to assist in the future. Although our report focused on the most recently completed contract period (the 7th SOW), we acknowledged that in the current contract period, CMS required QIOs to provide intensive assistance to some “persistently poor-performing” homes identified in consultation with each state survey agency. However, we pointed out that the number of these homes the QIOs were required to serve was small, accounting for less than 10 percent of the homes they were expected to assist intensively. CMS commented that preliminary estimates from a special study conducted during the 7th SOW indicated that assisting chronically poor-performing homes cost the QIOs 5 to 10 times as much as assisting the “usual” home. Our report acknowledged that additional resources were required for QIOs to assist low-performing homes but suggested that CMS could decrease the total number of homes assisted in order to increase the number of low-performing homes beyond the small number mandated for the 8th SOW. Direct QIOs to focus intensive assistance on those quality-of-care areas on which homes most need improvement. CMS did not directly respond to this recommendation, but did point out that about one-third of QIOs were working primarily with homes on QMs on which the homes scored worse than the national average during the 8th SOW. Our recommendation was to direct all QIOs to focus intensive assistance on QMs that reflect homes’ greatest quality-of-care challenges. We had reported that some nursing homes assisted intensively by QIOs did not have a choice of QMs on which to work. We concluded that having QIOs work intensively with homes only on the quality-of-care issues that posed particular challenges to them would maximize program resources. 
Revise QIO program regulations to require QIOs to provide CMS with the identities of the homes assisted in order to facilitate evaluation. CMS did not specifically indicate whether it agreed with this recommendation, but did indicate that it continues to explore options that would allow access to data on the homes assisted intensively in order to facilitate evaluation. However, CMS expressed concern that providing this access could potentially subject the information to laws that could afford third parties similar access. We believe that CMS should continue to evaluate how best to maintain an appropriate balance between disclosure and confidentiality. If CMS’s evaluation indicates that it is unable to incorporate adequate confidentiality safeguards to promote voluntary participation in QIOs’ quality improvement initiatives, the agency could seek legislation that would provide such safeguards. Collect more complete and detailed data on the interventions QIOs use to assist homes. CMS agreed with this recommendation, although it relabeled it “improve the monitoring of QIO activities.” CMS noted that, in concert with HHS, it is reviewing recommendations from the IOM’s 2006 report on QIOs, which may result in redesigning the program, including systems for evaluating QIO activities in different care settings, such as nursing homes. CMS did not discuss how it planned to collect additional data on QIO nursing home interventions. Further, it stated that it may incorporate data-handling and -reporting features of the nursing home subtask into overall program improvements. 
We have reservations about this plan because we found that CMS collected little information about specific QIO interventions with nursing homes during the 7th SOW, the information collected was not sufficiently comprehensive or consistent to be used to evaluate the interventions’ effectiveness, and QIOs themselves reported that the data collection system was of limited use to them. Identify a broader spectrum of measures than QMs to evaluate changes in nursing home quality. CMS did not directly address this recommendation. However, the agency took issue with our judgment that the use of QMs to evaluate nursing home quality—and by extension, QIOs’ performance—is problematic. CMS commented that the QMs have passed through rigorous development, testing, deployment, and national consensus processes. We reported that the study commissioned by CMS to validate the QMs has been criticized by experts on several grounds, including a lack of statistical rigor. We also noted that CMS has revised or is currently revising both the QMs and the MDS data used to calculate them to address limitations, such as reliability and measurement problems. For example, CMS has removed or replaced 5 of the original 10 QMs since 2002, including some of those on which the QIOs were evaluated during the 7th SOW. In addition, CMS is currently updating the MDS to reflect advances in clinical practice and to improve its utility as a quality measurement tool. While we expect that these efforts will improve the QMs as measures of nursing home quality, we believe that the QMs’ current limitations argue for the use of a broader spectrum of measures to evaluate changes in nursing home quality. 
Multiple long-term care professionals we interviewed recommended that the QMs not be used in isolation to assess the quality of care in nursing homes; these professionals suggested a range of measures that could be used to supplement the QMs, including perceptions of care by family members, residents, and staff; state survey data; and nursing home staffing levels. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7118 or allenk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our analysis of QIOs’ work with nursing homes had three major components: (1) site visits to five QIOs, (2) analysis of state survey data to compare homes that were assisted intensively with homes that were not, and (3) a Web-based survey of 51 QIOs. We visited a QIO in each of five states to gather detailed information about QIOs’ work with nursing homes from the perspective of the QIOs, nursing homes in the intensive participant group, and stakeholders; we used this information to address all three objectives. We selected the states—and by extension, the QIOs that worked in those states—on the basis of six criteria described in the following section. After selecting the QIOs, we identified nursing homes that received intensive assistance and stakeholders to contact for interviews. 
We conducted most of our site visit interviews in March and April 2006. We based our selection of QIOs on the following criteria: Number of nursing home beds in the state. We divided the states into three groups of 17 states each based on the number of nursing home beds at the beginning of the 7th SOW (2002). We over-sampled states with high numbers of nursing home beds by selecting one state with a low number of beds, one state with a medium number, and three states with a high number. Evaluation score for the nursing home component of the 7th SOW relative to scores of other QIOs. We divided the states into three groups of 17 based on the QIOs’ evaluation scores for the 7th SOW. To help us identify the possible determinants of scores, we selected more states at each end of the spectrum than in the middle: two states with scores in the bottom third, one state with a score in the middle third, and two states with scores in the top third. State survey performance of homes selected for intensive assistance relative to homes not selected. We also considered the extent to which the homes selected for intensive assistance by a given QIO at the beginning of the 7th SOW differed from the homes that were not selected, in terms of serious deficiencies cited on state surveys (both the proportion of homes cited in each group and the average number of serious deficiencies per home). We chose one QIO that selected worse homes, three QIOs that selected homes that were neither better nor worse, and one QIO that selected better homes. Presence of a state-sponsored nursing home quality improvement program. At the time we selected QIOs for site visits, we were aware of four states that had state-sponsored quality improvement initiatives in place during the 7th SOW. To learn more about these efforts and how they interacted with and compared with efforts by the QIOs, we included one state (Florida) with its own quality improvement initiative. 
After we made our selection, we learned that another state we had selected (Maine) had a state-sponsored quality improvement program. QIO participation in the Collaborative Focus Facility project. CMS has funded QIOs to conduct several special studies with nursing homes, including one in which the 17 participating QIOs each worked intensively with up to five nursing homes identified by their state survey agencies as having significant quality problems. To learn more about the challenges involved in working with low-performing homes, we selected two states whose QIOs participated in this project. Census region. We selected states from four different regions of the country: Northeast, Midwest, South, and West. Using these criteria, we selected the following five states: Colorado, Florida, Iowa, Maine, and New York. Together these states represented 15 percent of nursing home beds nationwide at the beginning of the 7th SOW (2002). Overall, we interviewed staff from 32 nursing homes in nine states. To assist in the development of our site visit protocols, we interviewed staff from 4 homes in four states. During the site visits to the five states, we interviewed staff from 4 to 8 homes per state that had received intensive assistance from the QIO, for a total of 28 homes. The number of homes we selected in each of the five states visited varied depending on the number of homes the QIO was expected to select for intensive assistance, an expectation based on the number of homes in the state. Specifically, we selected either four homes or 7 percent of the maximum number of homes that each of the five QIOs was expected to assist intensively, whichever was greater. 
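The selection rule just described is simple to express directly. The sketch below is illustrative only (the function name is ours, and rounding the 7 percent figure up is an assumption; the report does not say how fractions were handled):

```python
import math

def homes_to_interview(max_intensive_homes: int) -> int:
    """Number of intensive-participant homes to interview in a state:
    the greater of four homes or 7 percent of the maximum number of
    homes the state's QIO was expected to assist intensively.
    (Rounding the 7 percent figure up is an assumption.)"""
    return max(4, math.ceil(0.07 * max_intensive_homes))

# A small state hits the four-home floor; a large state is governed
# by the 7 percent figure instead.
print(homes_to_interview(40))   # 4
print(homes_to_interview(120))  # 9
```

Under this rule, the four-home floor binds in any state whose QIO was expected to assist fewer than about 57 homes intensively.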
We chose homes on the basis of four characteristics: number of serious deficiencies in the standard state survey at the beginning of the 7th SOW (2002), improvement in QM scores during the 7th SOW, distance from the QIO (in order to include homes that were more difficult for QIOs to visit), and urban versus rural location. Specifically, we sought to include (1) at least one home that had one or more serious deficiencies and that finished in the top third of the intensively assisted homes in their state in terms of improvement on QM scores, and (2) at least one home that had one or more serious deficiencies and that finished in the bottom third of the intensively assisted homes in their state in terms of improvement on QM scores. For the remaining homes, we sought a group whose state survey deficiency levels and QM improvement scores were representative of the range among intensive participants in their state. However, the experiences of this sample of 32 homes cannot be generalized to the entire group of homes that received intensive assistance from the QIOs nationwide. In each state we also interviewed officials from three stakeholder groups: (1) the state survey agency; (2) the local affiliate of the American Health Care Association, which generally represents for-profit homes; and (3) the local affiliate of the American Association of Homes and Services for the Aging, which represents not-for-profit homes. To assess the characteristics of the nursing homes that were selected by the QIOs for intensive assistance from among the homes that volunteered, we analyzed 3 years of standard state survey data on deficiencies cited at nursing homes and compared the results for homes that were assisted intensively with results for homes that were not; we used this information to address our first objective. The analysis involved three steps: 1. identifying nursing homes that had three standard state surveys from 1999 through 2002; 2. 
ranking nursing homes in each state in each year, based on the number of serious and other deficiencies, and then classifying homes as consistently low-, moderately, or high-performing; and 3. identifying on a nationwide and state-by-state basis any statistically significant differences between homes selected and not selected by the QIO, in terms of the proportion of low-, moderately, or high-performing nursing homes. To identify homes whose performance was consistently lower or higher than other homes in their state prior to the selection of homes by the QIOs, we included in our analysis only homes for which we were able to identify three standard surveys from January 1, 1999, through November 1, 2002. Using the state survey calendar year summary files for 1999 through 2002 for the 50 states and the District of Columbia, we obtained 3 years of deficiency data from standard surveys for 16,303 homes. CMS classifies deficiencies according to their scope and severity. For each of the three surveys, we ranked all of the nursing homes in each state based on the number of deficiencies in two categories: (1) actual harm or immediate jeopardy and (2) potential for more than minimal harm. Deficiencies in the first category are considered serious deficiencies. We gave more weight to the serious deficiencies by sorting the homes first on the number of deficiencies in the first category and then on the number of deficiencies in the second category. Homes with the same number of deficiencies in each category were assigned the same rank. Based on these rankings, we identified homes in the bottom and top quartile in each state in each survey. We classified homes as low-performing if they ranked in the bottom quartile in the most recent of the three surveys and in at least one of the two preceding surveys. We classified homes as high-performing if they ranked in the top quartile in the most recent of the three surveys and in at least one of the two preceding surveys. 
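The ranking and classification rules above can be sketched in a few lines of Python. This is an illustrative reconstruction, not GAO's actual code: the function names are ours, quartile 1 is assumed to denote the worst-ranked homes in a state on a given survey and quartile 4 the best, and homes meeting neither rule fall into the residual (moderately performing) group:

```python
def rank_key(home):
    """Sort key for ranking homes within a state on one survey:
    serious deficiencies (actual harm or immediate jeopardy) weigh
    most, with deficiencies posing potential for more than minimal
    harm as the tiebreaker. Higher counts rank worse."""
    return (home["serious_deficiencies"], home["other_deficiencies"])

def classify(quartiles):
    """Classify a home from its within-state quartile on each of
    three surveys, oldest first (1 = bottom/worst, 4 = top/best)."""
    older, most_recent = quartiles[:2], quartiles[2]
    if most_recent == 1 and 1 in older:
        return "low-performing"
    if most_recent == 4 and 4 in older:
        return "high-performing"
    return "moderately performing"

# Bottom quartile on the most recent survey and on one earlier survey:
print(classify([1, 3, 1]))  # low-performing
# Top quartile only on the most recent survey does not qualify:
print(classify([2, 3, 4]))  # moderately performing
```

Requiring a repeat appearance in the same quartile, rather than a single-survey snapshot, is what makes the low- and high-performing labels reflect consistent performance over the three surveys.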
We classified homes as moderately performing if they did not meet the criteria for inclusion in either the low- or high-performing group. Of the 16,303 homes with three standard state surveys during the period we specified, we classified 15 percent as low-performing, 65 percent as moderately performing, and 20 percent as high-performing. To assess the stability of our categorization of homes as low- (or high-) performing, we ran a logistic regression model to predict the probability of a home being categorized as low- (or high-) performing in the most recent of the three surveys given its categorization in the two prior surveys. The regression results showed that homes that were categorized as low- (or high-) performing in one survey were significantly more likely to be categorized as low- (or high-) performing in the other surveys as well. Our final step was to determine, on both a nationwide and state-by-state basis, whether there was a statistically significant difference in the proportion of (1) low-performing homes, (2) moderately performing homes, and (3) high-performing homes in the group assisted intensively by the QIOs compared with the group not assisted intensively. To gather information about the characteristics of the QIOs, including their process for selecting homes for intensive assistance from the pool of volunteers and the interventions they used, on July 19, 2006, we launched a two-part Web-based survey of QIOs in all 50 states and the District of Columbia; we used this information to address objectives one and two. We achieved a 100 percent response rate. The first part of the survey gathered information about the primary personnel who worked with nursing homes during the 7th SOW, including information about their employment with the QIO, and their relevant credentials and experience. 
The second part of the survey gathered information on a range of other topics, including information about stakeholder involvement with the QIO, recruitment and selection of nursing homes for intensive assistance, interventions used with intensive participants, interventions used with homes statewide, and QIOs’ communication with CMS. We specifically inquired about QIOs’ use of six interventions: (1) mailings, faxes, and e-mails; (2) conferences; (3) small group meetings; (4) conference calls and video or Web conferences with multiple homes; (5) telephone conversations with individual homes; and (6) on-site visits. We asked QIOs to rank and provide information on the two interventions they relied on most to assist homes statewide and on the three interventions they relied on most to assist homes in the intensive participant group. We also asked QIOs to rank the effectiveness of the interventions they used and to identify the interventions they would use if they could do the 7th SOW over again. In November 2002, CMS began a national Nursing Home Quality Initiative that included the development of QMs that would be publicly reported on the CMS Web site called Nursing Home Compare. CMS has continued to refine the QMs and, as shown in table 4, has dropped some QMs and added others. In addition to the contact named above, Walter Ochinko, Assistant Director; Nancy Fasciano; Sara Imhof; Elizabeth T. Morrison; Colbie Porter; and Andrea Richardson made key contributions to this report. Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Deaths: Arkansas Coroner Referrals Confirm Weaknesses in State and Federal Oversight of Quality of Care. GAO-05-78. Washington, D.C.: November 12, 2004. 
Nursing Home Fire Safety: Recent Fires Highlight Weaknesses in Federal Standards and Oversight. GAO-04-660. Washington, D.C.: July 16, 2004. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
In 2002, CMS contracted with Quality Improvement Organizations (QIO) to help nursing homes address quality problems such as pressure ulcers, a deficiency frequently identified during routine inspections conducted by state survey agencies. CMS awarded $117 million over a 3-year period to the QIOs to assist all homes and to work intensively with a subset of homes in each state. Homes' participation was voluntary. To evaluate QIO performance, CMS relied largely on changes in homes' quality measures (QM), data based on resident assessments routinely conducted by homes. GAO assessed QIO activities during the 3-year contract starting in 2002, focusing on (1) characteristics of homes assisted intensively, (2) types of assistance provided, and (3) effect of assistance on the quality of nursing home care. GAO conducted a Web-based survey of all 51 QIOs, visited QIOs and homes in five states, and interviewed experts on using QMs to evaluate QIOs. Although more homes volunteered to work with the QIOs than CMS expected them to assist intensively, QIOs typically did not target their assistance to the low-performing homes that volunteered. Most QIOs' primary consideration in selecting homes was their commitment to working with the QIO. CMS did not specify selection criteria for intensive participants but contracted with a QIO that developed guidelines encouraging QIOs to select committed homes and exclude those with many survey deficiencies or QM scores that were too good to improve significantly. Consistent with the guidelines, few QIOs targeted homes with a high level of survey deficiencies, and eight QIOs explicitly excluded these homes. GAO's analysis of state survey data confirmed that selected homes were less likely than other homes to be low-performing in terms of identified deficiencies. Most state survey and nursing home trade association officials interviewed by GAO believed QIO resources should be targeted to low-performing homes. 
QIOs were provided flexibility both in the QMs on which they focused their work with nursing homes and in the interventions they used. Most QIOs chose to work on chronic pain and pressure ulcers, and most used the same interventions—conferences and distribution of educational materials—to assist homes statewide. The interventions used to assist individual homes intensively varied and included on-site visits, conferences, and small group meetings. Just over half the QIOs reported that they relied most on on-site visits to assist intensive participants. Sixty-three percent said such visits were their most effective intervention. Of the 15 QIOs that would have changed the interventions used, most would make on-site visits their primary intervention. Homes indicated that they were less satisfied with the program when their QIO experienced high staff turnover or when their QIO contact possessed insufficient expertise. Shortcomings in the QMs as measures of nursing home quality and other factors make it difficult to measure the overall impact of the QIOs on nursing home quality, although staff at most of the nursing homes GAO contacted attributed some improvements in the quality of resident care to their work with the QIOs. The extent to which changes in homes' QM scores reflect improvements in the quality of care is questionable, given the concerns raised by GAO and others about the validity of the QMs and the reliability of the resident assessment data used to calculate them. In addition, quality improvements cannot be attributed solely to the QIOs, in part because the homes that volunteered and were selected for intensive assistance may have differed from other homes in ways that would affect their scores; these homes may also have participated in other quality improvement initiatives. 
Ongoing CMS evaluation of QIO activities for the contract that began in August 2005 is being hampered by a 2005 Department of Health and Human Services decision that QIO program regulations prohibit QIOs from providing to CMS the identities of homes being assisted intensively.
As part of the Restructuring Act, the Congress enacted section 1203, which provides for the firing of IRS employees who are found to have committed any of 10 acts or omissions in the performance of their official duties, unless a mitigated penalty is appropriate. These 10 acts or omissions, which are shown below, can be divided into 2 that relate to IRS employees’ tax compliance in filing tax returns and reporting tax liability, and 8 that relate to employee and taxpayer rights. Specifically, these acts or omissions are (1) willful failure to obtain the required approval signatures on documents authorizing a seizure of a taxpayer’s home, personal belongings, or business assets; (2) providing a false statement under oath with respect to a material matter involving a taxpayer or taxpayer representative; (3) violating the rights protected under the Constitution or the civil rights established under six specifically identified laws with respect to a taxpayer, taxpayer representative, or other employee of the IRS; (4) falsifying or destroying documents to conceal mistakes made by any employee with respect to a matter involving a taxpayer or taxpayer representative; (5) assault or battery of a taxpayer, taxpayer representative, or employee of the IRS, but only if there is a criminal conviction, or a final judgment by a court in a civil case, with respect to the assault or battery; (6) violating the Internal Revenue Code, Department of Treasury regulations, or policies of the IRS (including the Internal Revenue Manual) for the purpose of retaliating against, or harassing, a taxpayer, taxpayer representative, or other employee of the IRS; (7) willful misuse of the provisions of section 6103 of the Internal Revenue Code for the purpose of concealing information from a congressional inquiry; (8) willful failure to file any return of tax required under the Internal Revenue Code on or before the date prescribed therefor (including any extensions), unless such failure is due to 
reasonable cause and not to willful neglect; (9) willful understatement of federal tax liability, unless such understatement is due to reasonable cause and not to willful neglect; and (10) threatening to audit a taxpayer for the purpose of extracting personal gain or benefit. The Restructuring Act provided the Commissioner with sole discretion, which he cannot delegate, to determine whether to take a personnel action other than firing an employee (i.e., mitigation) for a section 1203 violation. Such determination may not be appealed in any administrative or judicial proceeding. The process for receiving, investigating, and adjudicating section 1203 allegations involves TIGTA and IRS. Under the section 1203 process, revised in March 2002, TIGTA has primary responsibility for receiving and investigating the allegations, except for those that IRS receives and investigates. For example, IRS’s Employee Tax Compliance (ETC) unit, using a computer match, has primary responsibility for identifying and investigating employee tax compliance issues. Also, IRS’s Office of Equal Employment Opportunity (EEO) is to analyze EEO settlement agreements, findings of discrimination, and taxpayer complaints of discrimination to identify whether a potential section 1203 civil rights violation exists. IRS is responsible for adjudicating all section 1203 allegations that are substantiated as violations. Generally, each allegation of a potential section 1203 violation must be initially evaluated to determine whether it merits a full investigation. Then, if an investigation of an allegation uncovers sufficient facts to substantiate it (i.e., support a section 1203 violation), the employee is to be issued a letter notifying him or her of the proposed firing from IRS. The employee has a right to respond to the letter. 
Afterwards, if the deciding official determines that the evidence sustains the alleged violation, a board established by the IRS Commissioner must review the case to determine whether a penalty less than firing is appropriate. If the board does not find mitigation to be appropriate, the case is not submitted to the IRS Commissioner and the employee is fired. If the board recommends mitigation, the Commissioner must consider it. If the Commissioner mitigates the penalty, other disciplinary actions, such as counseling, admonishment, reprimand, or suspension, may be applied. Details on the process are provided in appendix V. According to IRS senior management, the misconduct addressed in section 1203 has always been regarded as serious and subjected to disciplinary action. Prior to the enactment of section 1203, the general rules for imposing discipline required a deciding official to consider a wide range of factors in arriving at the appropriate disciplinary action. Enactment of section 1203 eliminated the variation in penalty for substantiated misconduct, requiring the employee to be fired unless the Commissioner mitigates that penalty. The IRS Commissioner has expressed concerns over the appropriateness of the mandatory firing penalty, especially when an IRS employee had already paid his or her tax liability or when the allegation involves only IRS employees. To address the concerns, IRS, through the Department of the Treasury, is seeking legislation to amend section 1203 by eliminating this penalty for (1) the late filing of tax returns for which a refund is due and (2) actions by IRS employees that violate another employee’s rights. In addition, IRS requested that the Commissioner be able to use a range of penalties, rather than firing alone, for the types of misconduct under section 1203.
Further, because of the associated seriousness and sensitivity over privacy issues, IRS also asked that the unauthorized inspection of returns or return information be added to the list of violations under section 1203. To determine the number, type, and disposition of section 1203 allegations, we analyzed data from IRS’s Automated Labor and Employee Relations Tracking System (ALERTS) database as of September 30, 2002. The data included all section 1203 cases that had originated in IRS, as well as some cases that originated in TIGTA and were either investigated or referred to IRS for investigation or adjudication. On the basis of IRS information on its quality control checks of the data, the use of the data, and our review of the database, we determined that the data were sufficiently reliable to determine the number, type, and disposition of section 1203 allegations. To determine IRS employees’ perceptions of how section 1203 has affected their interactions with taxpayers, we surveyed a stratified random sample of IRS frontline enforcement employees nationwide. Those audit or collection employees included revenue agents, revenue officers, tax compliance officers, and tax auditors from IRS’s Small Business and Self-Employed Division (SB/SE). We asked questions about their understanding and perceptions of section 1203 and its impacts on their jobs. We sent the survey to 455 eligible frontline enforcement employees, of whom 350 responded via regular mail, fax, or the Internet between July and September 2002, for a response rate of 77 percent. We also did a content analysis of written comments volunteered by 208 respondents to arrive at a limited number of content categories. A copy of the survey instrument and a summary of the content categories are included in appendixes III and IV.
To identify what problems, if any, IRS and TIGTA have encountered in processing section 1203 cases and the extent to which they have addressed them, we reviewed IRS’s and TIGTA’s policies and procedures for receiving, investigating, and adjudicating section 1203 allegations. We also interviewed IRS and TIGTA officials who are responsible for the section 1203 process. In addition, we reviewed a study done by IRS, TIGTA, and a private consulting firm to streamline the section 1203 process, and discussed the study with their officials. To understand the process and gauge the length of time that section 1203 cases take to process, we reviewed 92 of the 100 most recently closed cases as of August 30, 2002, according to IRS’s ALERTS database; in 5 cases, the files could not be located for employees who retired or otherwise left IRS and 3 cases were duplicates. We recorded dates and decisions for various stages of the process. We did not attempt to measure the effectiveness of section 1203 and whether its impacts on IRS employees were positive or negative. Appendix I contains more detailed information on our survey design and administration and case file review approaches. We conducted our review in Washington, D.C., from November 2001 to December 2002 in accordance with generally accepted government auditing standards. IRS data show that, with the exception of employees’ tax compliance provisions, few of the 3,970 section 1203 allegations received between July 1998 and September 2002 were substantiated as violations of section 1203 and resulted in an employee’s firing. Table 1 shows what happened to the 3,970 allegations in terms of completed investigations, substantiated allegations, and firings. Table 1 shows that IRS or TIGTA had finished investigating 3,512 allegations and substantiated 419 as violations, for which IRS fired 71 employees. 
Of the other 348 violations, the IRS Commissioner mitigated the penalty for 166; in 117 cases, the employees resigned or retired; in 33, the employees were fired on other grounds or during their probationary period; and in another 32, IRS had not finalized the decision. Appendix II shows the dispositions of all 419 violations by type of section 1203 misconduct and the grade level of the 71 fired employees. Table 1 also shows that most of the violations and related firings involved the two tax compliance provisions of section 1203. The failure to file tax returns on time and the understatement of federal tax liability accounted for 388 of the 419 violations (93 percent) and 62 of the 71 firings (87 percent). The rest of the violations and related firings involved the remaining 8 provisions, which deal with employee and taxpayer rights. IRS officials said that the bulk of the violations and firings involved the two tax compliance provisions of section 1203 because IRS has a systemic computerized process to identify and evaluate potential employee tax compliance issues. Further, according to officials, these issues generally are more factually based and involve clearer indicators of misconduct. To understand why 3,093 investigated allegations were not substantiated, we analyzed IRS data and talked with IRS officials. As shown in appendix II, 800 of these investigated allegations were not substantiated as section 1203 violations but were substantiated as misconduct violations unrelated to section 1203. Of those remaining, 1,549 involved allegations of retaliation and harassment of a taxpayer, taxpayer representative, or IRS employee. Although IRS had not done a systematic analysis, IRS officials offered possible reasons why these investigated allegations could not be substantiated as section 1203 violations. These officials said that many were not credible.
For example, the officials cited cases in which a taxpayer representative routinely lodged allegations whenever enforcement employees contacted clients. In other cited examples, taxpayers’ allegations had more to do with protests about having to meet their tax obligations. Our survey indicated that most frontline enforcement employees understood but feared section 1203, and that, because of section 1203, their work took longer and the likelihood of their recommending a seizure decreased. Otherwise, employees’ reported views were not as strong on the impacts of section 1203 on other audit or collection activities. At the same time, many employees said that other factors, such as IRS’s reorganization, have had a greater impact on their ability to do their jobs than section 1203. The overwhelming majority of frontline enforcement employees reported that they understood the types of misconduct covered by section 1203. Figure 1 shows that for 9 of the 10 provisions, at least three-quarters of the employees said they had a very or generally clear understanding of misconduct under section 1203. For the remaining provision, the misuse of section 6103 to conceal information from a congressional inquiry, about 68 percent of the employees said they had a very or generally clear understanding of misconduct covered by section 1203. In addition, an estimated 48 percent of the employees said that IRS had provided, to a very great or great extent, clear examples of what constitutes harassment or retaliation under section 1203. Only about 7 percent said that IRS provided such examples to little or no extent. The majority of employees reported fears associated with section 1203. As shown in figure 2, at least two-thirds reported that they were somewhat or very fearful of having a taxpayer file an allegation and being investigated. Almost as many said they were somewhat or very fearful of being fired.
Written comments, while not representative of all respondents, provide some insights on employees’ fears. For example, several employees described fears of being falsely accused by a taxpayer, while others noted a fear of being investigated for making an honest mistake. A number of employees expressed more general fears of section 1203. For example, one employee wrote, “I acknowledge that my fears may be irrational, and I would hope that the system would work as it is designed. I could envision a complaint (unfounded, I would hope) being filed, and the resulting anxiety would be overwhelming.” Further, the survey revealed that most frontline enforcement employees had little or no confidence in the disciplinary process for section 1203. For example, an estimated 50 percent of the employees said they were not at all confident, and 18 percent said they had little confidence, that they would not be disciplined for making an honest mistake. IRS officials said that they believe the fear and distrust of section 1203 are pervasive among all types of frontline enforcement employees. However, they indicated that those most affected and concerned are revenue officers who have face-to-face contacts with delinquent taxpayers. Many frontline enforcement employees perceived that section 1203 contributed to work taking longer and to a decline in seizure activity. Otherwise, employees reported views that were not as strong on the impacts of section 1203 on other frontline enforcement activities, such as those associated with audits or collections. Such perceptions are important because IRS management believes that declines in enforcement activities since 1998 resulted, in part, from employees’ reluctance to use enforcement tools due to section 1203 fears. Our survey results on employees’ perceptions of changes in job behavior are broadly correlated with actual declines in enforcement activities, such as seizures.
However, this broad correlation should be interpreted with caution because employee perceptions do not necessarily demonstrate causation and section 1203 is unlikely to be the only reason for the decline in enforcement activity. Further, any changes in enforcement activity could be positive or negative, depending on whether the activity was merited. One job behavior that employees reported being affected by section 1203 was the time spent to do their work. An estimated 80 percent of frontline enforcement employees said that work took longer as a result of section 1203. Some written comments helped to illustrate why employees believed their work took longer. For example, one employee wrote, “[I am] more cautious [and take] more time to avoid harassment allegations.” Another said, “the greatest impact has been on the amount of time necessary to work a case—ensuring that taxpayer rights are made clear and protected through every step.” In addition, many employees responsible for collections, such as issuing seizures, liens, and levies, said that section 1203 has affected how they do their jobs. As figure 3 shows, an estimated 67 percent of the collection employees said that the likelihood of their recommending a seizure of taxpayer assets to satisfy a tax debt had decreased (either somewhat or greatly); reported views were not as strong on the likelihood of recommending a levy or lien decreasing. The written comments helped to illustrate why collection employees said they were less likely to take collection actions. Several employees indicated that they second-guess their decisions as a result of section 1203. One employee wrote, “[Section 1203] has forced me to doubt my own judgment on enforcement matters, especially . . . where some issues are vague and the collection officer has to use his or her judgment.” Another employee noted, “[Section] 1203 has made me hesitant to take any action and has slowed work progress since each and every action has the potential to create a section 1203 violation.
There is so much information that we are responsible to know and any act, willful or not, can result in a disciplinary action.” Employees reported views that were not as strong on the impacts of section 1203 on other frontline enforcement activities. For example, figure 4 shows that except for one action—contacting a third party—roughly half or more than half of the employees reported that section 1203 had no impact on the likelihood of their taking actions that can be associated with audits, such as requesting, reviewing, or questioning documents submitted by taxpayers. Many IRS frontline enforcement employees also reported that IRS’s reorganization and tax law changes have had a greater impact on their ability to do their jobs than section 1203. As shown in figure 5, higher percentages of employees reported that IRS’s reorganization and tax law changes, compared with section 1203, had a greater rather than a lesser impact on their ability to do their jobs. Some written comments illustrated employees’ perceptions of how the other factors affected their ability to do their jobs. For example, one employee wrote, “The restructuring has created areas where there is no accountability. Frontline employees have nowhere to go when not receiving services, as the person providing the service is in a different division . . . .” Another wrote, “The ongoing complex tax law changes in conjunction with the threat of losing your job (under section 1203) if you don’t correctly implement all of the changes is what greatly impacts our ability to do the job.” IRS officials indicated that the impacts of section 1203 on employees cannot be isolated from those of such factors as IRS’s reorganization and tax law changes because they are interrelated. For example, the officials said that section 1203 itself is part of the reorganization and is a tax law change that some view as complex.
As figure 6 shows, we estimate that at least 60 percent of the enforcement employees perceived section 1203 as promoting some degree of employee accountability and respect for taxpayer rights. We also estimate that about 30 percent of the employees perceived section 1203 as doing little or nothing to promote accountability or respect for taxpayer rights. Some written comments indicated ways that employees perceived section 1203 as promoting employee accountability and respect for taxpayer rights. One employee wrote, “These changes were needed and . . . it has been a change for the better and hopefully has increased our trust and faith in the general public, our clients, the taxpayers.” Another employee noted, “Section 1203 make IRS employees accountable and promotes respect for taxpayers . . . .” In other written comments, however, some employees offered their perceptions of how section 1203 did little or nothing to promote employee accountability or to promote taxpayer rights. For example, one employee wrote, “Employees who safeguard taxpayers’ rights are those who would have anyway—section 1203 did not affect that.” Another noted, “We have . . . always been aware of and made every effort to respect the taxpayer’s rights. 1203 does not enhance taxpayer’s rights or . . . efforts to ensure those rights are honored.” IRS and TIGTA have taken steps intended to correct known problems, such as lengthy investigations and conflicts of interest during investigations, that may have reduced the effectiveness of the section 1203 process as well as the morale and productivity of enforcement employees. However, the extent to which these steps have succeeded is unknown because IRS and TIGTA have not coordinated on an approach for evaluating the section 1203 process on the basis of consistent types of results-oriented goals, measures, and performance data. 
Until IRS and TIGTA develop a coordinated approach to ensure consistent and valid evaluation, they cannot determine the effectiveness of the entire section 1203 process or any changes to it. IRS and TIGTA made changes to address problems with the process for receiving, investigating, and adjudicating section 1203 allegations. IRS initially identified some of these problems through a limited review to check employee concerns that section 1203 cases were not being resolved in a timely manner. The review revealed that, on average, IRS investigations took over 200 days and TIGTA investigations took over 300 days. In October 2001, IRS and TIGTA initiated a more comprehensive study to assess the causes of lengthy processing times and identify other problems associated with the process for receiving, investigating, and adjudicating section 1203 cases. A team of IRS, TIGTA, and private consulting firm officials did the study, which resulted in recommendations to reengineer the process to improve performance. The team issued a final report in January 2002. The team identified several problems with the section 1203 process, such as cases changing hands frequently within and between IRS and TIGTA and use of multiple and inconsistent procedures for processing section 1203 allegations. The team developed recommendations to correct the problems and improve the section 1203 process. On the basis of the recommendations, IRS implemented some changes in March 2002. Table 2 lists the problems identified by the team, its recommended actions, and actions taken. Although many of the team’s recommendations were implemented, some were not implemented or were modified. IRS and TIGTA officials said that modifications resulted because both agencies agreed, after the recommendations were developed, that TIGTA would be more involved in screening and investigating most allegations. 
For example, IRS modified the recommendation to create a BEPR that would receive section 1203 allegations, determine their investigative merit, and oversee the section 1203 process. IRS had created BEPR to handle these duties because IRS and TIGTA had not agreed on the extent of TIGTA’s involvement. By the time that the new process was implemented, IRS and TIGTA had agreed that TIGTA would handle allegations for section 1203, with some exceptions. As a result, BEPR’s responsibility was limited to determining the merit of only those allegations forwarded to it by TIGTA and did not include oversight of the whole section 1203 process. IRS officials said that having two independent agencies responsible for different parts of the section 1203 process complicates having one agency responsible for overseeing the other agency. Rather than creating a centralized database, IRS and TIGTA officials described plans to modify an existing database to allow certain section 1203 data to be downloaded and shared between IRS and TIGTA. To do this, IRS has hired a contractor to develop such integrated data sharing. IRS officials said they plan to begin testing and implementing this new system sometime in 2003. Both IRS and TIGTA officials said that creating a centralized database for section 1203 cases would not be efficient or practical since both agencies use their respective databases to track various types of employee misconduct cases—not just those relating to section 1203. In addition, TIGTA officials said that sharing one database could compromise the integrity of TIGTA’s investigations, given the sensitivity of certain case information. IRS officials said that the study did not make specific recommendations to address the multiple, inconsistent procedures. These officials said that they believe that the attempts to streamline the process will help to address these problems. 
For example, the new process clarifies that TIGTA is to be responsible for receiving and investigating most section 1203 allegations. IRS reflected the new process in a revised section 1203 handbook that eliminated some criteria on making various decisions (e.g., mitigation). IRS officials said that they did not retain these criteria because not all IRS employees needed such details. They indicated that they plan to begin developing customized guidelines during early 2003 for targeted audiences, such as labor relations specialists. IRS and TIGTA have not coordinated on an approach for evaluating the section 1203 process on the basis of consistent types of results-oriented goals, measures, and performance data. Until IRS and TIGTA develop a coordinated approach to ensure consistent and valid evaluation, IRS and TIGTA cannot determine the effectiveness of the entire section 1203 process or any changes to it, such as those made in March 2002. We have issued a number of reports on the value added to agency operations by using results-oriented goals and balanced measures to guide and evaluate performance, avoid focusing on one aspect of performance at the expense of others, and ensure that any changes to a program or process are having the desired results rather than unintended consequences. These reports also have discussed the value of planning evaluations of performance of a program or process early so that arrangements can be made to ensure collection of the needed data. IRS and TIGTA have not developed agreed-upon goals or measures for evaluating the effectiveness of the section 1203 process or means for collecting related performance data. For example, IRS has not established goals or measures for timely adjudication of section 1203 cases and does not collect information on the amount of time to adjudicate cases.
To obtain a current view on section 1203 case processing time, we analyzed 92 of the 100 most recently closed cases in IRS’s database by the end of August 2002. Our analysis showed that the median processing time was 186 days and that 80 percent of the cases took between 78 and 774 days. IRS officials said that they do not have a formal system for evaluating the section 1203 process—including goals and measures—because IRS does not have such a system for any of its employee disciplinary processes. TIGTA officials indicated that TIGTA has a strategic goal of 120 days to investigate and refer all administrative cases to IRS and a 365-day goal for all criminal cases. Although such goals can apply to section 1203 investigations, TIGTA officials said that they have not evaluated whether TIGTA’s section 1203 investigations have met these goals. Without such performance indicators, IRS and TIGTA cannot determine whether the new process corrected the known problems and improved the section 1203 process as intended—that is, to reduce the number of handoffs, shorten the processing time, and eliminate conflicts of interest. Further, IRS and TIGTA cannot determine how effectively they process section 1203 allegations or whether future changes to the section 1203 process will be needed. During December 2002, IRS officials told us they plan to develop goals and measures for evaluating all IRS disciplinary processes, including section 1203. Although they could not provide documentation on how this evaluation system would work, they said they plan to implement the evaluation system during fiscal year 2003. On the basis of informal tracking, they said that they believe that the new section 1203 process has expedited the determination of investigative merit and adjudication of violations.
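Statistics of the kind cited above, a median and the bounds of an 80-percent range, can be computed directly from a list of per-case processing times. The sketch below uses a simple nearest-rank percentile and invented day counts, not the actual figures from the 92 case files reviewed.

```python
import math

def percentile(sorted_days, p):
    """Nearest-rank percentile of an already-sorted list (0 < p <= 100)."""
    rank = max(1, math.ceil(p / 100 * len(sorted_days)))
    return sorted_days[rank - 1]

# Hypothetical per-case processing times in days, for illustration only.
days = sorted([78, 95, 120, 150, 160, 186, 210, 300, 400, 774])

median_days = percentile(days, 50)            # middle value by nearest rank
low, high = percentile(days, 10), percentile(days, 90)  # 80-percent range
```

With real case-file dates, `days` would be the elapsed time from receipt of each allegation to final adjudication.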
They acknowledged the value of having objective data on section 1203 and believed that this informal tracking system can be used to help develop appropriate goals and measures for the formal evaluation system. The Congress included section 1203 in the Restructuring Act, in part, to minimize certain types of IRS employee misconduct in dealing with taxpayers. On the basis of our survey results, most IRS enforcement employees do perceive that section 1203 has affected their behavior, such as taking longer to work audit or collection cases and having some reluctance to take enforcement actions. The survey results by themselves, however, do not provide a basis for conclusions about whether section 1203 has worked or should be changed. On the one hand, their perceptions about longer case times and a reluctance to take action are consistent with the fear of section 1203 felt by many enforcement employees. On the other hand, any increase in the amount of time to work cases also could result from other impacts of section 1203 seen by employees, such as promoting increased employee accountability and respect for taxpayer rights. Moreover, policymakers might be willing to accept longer case times and some fear of taking enforcement actions when merited if the tradeoff is greater respect for taxpayer rights. One influence on how enforcement employees perceive section 1203 is the IRS and TIGTA process for handling section 1203 allegations. However, our survey found widespread distrust of the process. Further, IRS and TIGTA recognized that problems with the section 1203 process were affecting employee morale and productivity. Consequently, they implemented a new process in March of 2002. Evaluation of the new process is important because of the potential impact on IRS employees and ultimately taxpayers. While too few section 1203 cases have been closed under the new process for an evaluation to date, IRS and TIGTA have not developed an evaluation approach. 
Any evaluation of effectiveness would have to be based on results-oriented goals and related performance measures. Developing an approach now would help ensure timely collection of the needed data. We recommend that the Acting Commissioner of Internal Revenue and the Acting Treasury Inspector General for Tax Administration coordinate on an approach for evaluating the section 1203 process. In developing this approach, IRS and TIGTA also should develop (1) results-oriented goals for processing section 1203 cases, (2) performance measures that are balanced and can be used to assess progress towards those goals, and (3) methods for collecting and analyzing performance data related to the goals and measures. On February 6, 2003, the Acting Commissioner of Internal Revenue and the Acting Treasury Inspector General for Tax Administration each provided written comments on a draft of this report. (See appendix VI and appendix VII, respectively.) In general, IRS agreed with our recommendation that a coordinated evaluation of the section 1203 process is desirable, and TIGTA neither agreed nor disagreed with our recommendation. However, both agencies raised a similar concern about the independence of each agency. Specifically, IRS said that TIGTA’s independent role makes it inappropriate for IRS to oversee TIGTA’s performance. TIGTA pointed to legislative challenges in implementing our recommendation because Restructuring Act amendments to the Inspector General Act of 1978 created TIGTA as an independent agency with autonomy from IRS. We recognize that IRS and TIGTA are independent agencies. As noted in our report, this independence is why IRS and TIGTA need to coordinate on the evaluation. In this sense, coordination does not mean that either agency evaluate, oversee, or direct the other agency.
Rather, coordination means that IRS and TIGTA officials communicate on how each agency will develop goals, measures, and methods for collecting related data to better ensure that the entire section 1203 process is evaluated, using consistent and valid goals and measures. We do not believe that such coordination would jeopardize the independence of TIGTA from IRS, particularly when IRS and TIGTA already have been working together on managing and improving the section 1203 process, as discussed in TIGTA’s as well as IRS’s comments. We view our recommendation on developing a coordinated approach as part of that continued communication. We made minor wording changes to our recommendation in order to clarify the need for a coordinated evaluation approach. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies to the Secretary of the Treasury; the Acting Treasury Inspector General for Tax Administration; the Acting Commissioner of Internal Revenue; and the Director of Office of Management and Budget. We will make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions, please contact me or Tom Short on (202) 512- 9110. Key contributors to this report are acknowledged in appendix VIII. This appendix discusses the methodology we used to survey the Internal Revenue Service (IRS) employees on how section 1203 affected their interactions with taxpayers. We also discuss our methodology for a review of IRS case files to determine how long section 1203 cases were taking to process. 
To determine IRS frontline enforcement employees’ perceptions of how section 1203 has affected their interactions with taxpayers, we surveyed a random sample of IRS frontline enforcement employees in the Small Business/Self Employed Operating Division (SB/SE) who had direct contact with taxpayers and taxpayer representatives. We administered the survey between July and September 2002 to a stratified sample of IRS employees identified through IRS’s personnel database. The study population from which the sample was drawn consisted of 10,186 SB/SE frontline enforcement employees nationwide as of June 2002. To ensure that the study population only included frontline enforcement employees who had regular contact with taxpayers and taxpayer representatives, IRS managers familiar with the positions reviewed a list of titles for all positions in the GS-512 job series (revenue agents), GS-1169 job series (revenue officers), GS-526 job series (tax compliance officers), and GS-501 and GS-598 job series (tax auditors), and identified position titles in these 5 series where the incumbent would have regular contact with taxpayers and taxpayer representatives. The sample design for this survey is a single-stage stratified sample of IRS frontline enforcement employees in SB/SE. We drew a sample of 500 employees composed of 4 strata—revenue agents, revenue officers, tax compliance officers, and tax auditors. After we administered the survey, we adjusted the original survey and sample population size because 45 respondents indicated that they did not have contact with taxpayers and taxpayer representatives. These respondents were considered “ineligible” to participate in our survey and were subsequently excluded. We adjusted the final sample size to 455. We received 350 completed responses to our survey—a response rate of 77 percent. The remaining 105 cases were considered to be nonrespondents.
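A stratified design like the one described above yields population estimates by weighting each stratum’s responses by its share of the study population. The sketch below illustrates the basic calculation of a weighted proportion and a 95-percent confidence interval; the stratum sizes, respondent counts, and “yes” answers are hypothetical, with only the 10,186 population total taken from the report.

```python
import math

# (stratum name, population size N_h, respondents n_h, "yes" answers)
# All counts below are invented for illustration.
strata = [
    ("revenue agents",          5000, 120, 70),
    ("revenue officers",        3000, 110, 60),
    ("tax compliance officers", 1500,  80, 40),
    ("tax auditors",             686,  40, 18),
]

N = sum(s[1] for s in strata)  # total study population (10,186)

# Weighted proportion: each stratum's sample proportion weighted by
# its share of the study population.
p_hat = sum(N_h / N * (y / n_h) for _, N_h, n_h, y in strata)

# Stratified variance of the estimated proportion, with a finite
# population correction for each stratum, then a 95% interval of
# p_hat +/- 1.96 standard errors.
var = sum(
    (N_h / N) ** 2 * (1 - n_h / N_h) * (y / n_h) * (1 - y / n_h) / (n_h - 1)
    for _, N_h, n_h, y in strata
)
se = math.sqrt(var)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
```

In practice, as noted below, the weights would also be adjusted for each stratum’s response rate, but the weighting logic is the same.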
All estimates produced in this report are for a study population defined as IRS’s SB/SE frontline enforcement employees who have contact with taxpayers and taxpayer representatives. We designed our sample to produce precise estimates of this population on a nationwide basis. As a result, we did not perform any analyses by stratum. Further, we created the estimates by weighting the survey responses to account for the sampling rate in each stratum. The weights reflect both the initial sampling rate and the response rate for each stratum. We randomly selected the sample used for this study based on a probability procedure. As a result, our sample is only one of a large number of samples that we might have drawn from the total population of SB/SE frontline enforcement employees. If different samples had been taken from the same population, it is possible that the results would have been different. To recognize the possibility that other samples may have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95-percent confidence interval. For all the percentages presented in this report, unless otherwise noted, we are 95-percent confident that the results we obtained are within plus or minus 10 or fewer percentage points of what we would have obtained if we had surveyed the entire study population. For example, our survey estimates that 58 percent of the respondents indicated that section 1203 had no effect on their likelihood of requesting documents from a taxpayer. The 95-percent confidence interval for this estimate would be between 48 percent and 68 percent. We calculated the confidence intervals for our study results using methods that are appropriate for a stratified probability sample. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. 
For example, questions may be misinterpreted, respondents’ answers may differ from those of nonrespondents, or errors could be made in keying the questionnaire responses into a data file. We took several steps to reduce such errors. We pretested the survey questions with employees from SB/SE who were part of the survey’s target population. After the survey administration, we examined the response rate for each of the 4 strata to determine whether any of the strata were underrepresented. The response rates for the revenue agent, revenue officer, tax compliance officer, and tax auditor strata were 89 percent, 87 percent, 78 percent, and 44 percent, respectively. We did not assess the impact of the nonrespondents on our results. To the extent that the nonrespondents had different views than the respondents, our findings would be biased. The response rates for the revenue agent, revenue officer, and tax compliance officer strata are fairly high and give us a high degree of confidence that our findings for these groups are likely to be representative of their full populations. The 44 percent response rate for the tax auditor stratum raises the possibility that the results for this group may have been different if more employees had chosen to complete the survey. To ensure the integrity of the survey data, we performed a quality control check on the surveys that were keyed into an automated data file. We found no keying errors. We identified areas to cover in the survey based on our congressional request and initial interviews with IRS and National Treasury Employees Union officials. We pretested the survey with IRS revenue agents, revenue officers, and tax compliance officers at three IRS field offices (at the time of the pretests, tax auditors were unavailable). Two of the offices were located in suburban Maryland and the third in Washington, D.C.
In doing the pretest, we evaluated the appropriateness of the survey questions and the various formats we planned to use in administering the survey. Based on the pretests, we made necessary changes to the survey prior to its nationwide implementation. We administered the survey in three ways: mail, Internet, and as a portable document format (pdf) attachment sent out via E-mail. The respondents could submit their completed surveys through regular mail, fax, or the Internet. In addition to the survey itself, each survey package included two letters encouraging employees to participate in the survey administration. One letter was signed by the IRS Commissioner of the Small Business/Self Employed Division and the other was signed by GAO’s Managing Director of the Tax Administration and Justice team. We conducted at least two follow-up calls to each nonrespondent in order to encourage a high response rate. A copy of the survey instrument is in appendix III. Some of the survey questions were open-ended, allowing respondents an opportunity to provide thoughts and opinions in their own words. Of the 350 employees who responded to our survey, 208 provided written responses to the open-ended questions. In order to categorize and summarize these responses, we performed a systematic content analysis of the open-ended responses. Two GAO analysts reviewed the responses and independently proposed categories. They met and reconciled these; each comment was then placed into one or more of the resulting categories, and agreement regarding each placement was reached between at least two analysts. All initial disagreements regarding placement into categories were discussed and reconciled. The numbers of responses in each content category were then summarized and tallied.
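The tallying step of the content analysis can be sketched as a simple count over the reconciled category placements. The category names and comment assignments below are hypothetical, purely to illustrate that a comment placed in more than one category counts once toward each.

```python
from collections import Counter

# Hypothetical reconciled placements: one list of categories per
# open-ended comment (names are illustrative, not GAO's categories).
placements = [
    ["fear of 1203"],
    ["fear of 1203", "slower casework"],
    ["management support"],
    ["slower casework"],
]

# Tally the number of comments placed in each content category.
tally = Counter(cat for comment_cats in placements for cat in comment_cats)
print(tally.most_common())
```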
To contribute to our understanding of IRS’s processing of section 1203 cases and to determine the amount of time it takes to process the cases, we reviewed 92 of the 100 most recently closed cases that were recorded in IRS’s ALERTS database as of August 30, 2002. We developed a data collection instrument to record the type of allegation as well as various dates associated with key stages in the processing of the case. These key stages were identified as part of our review of the section 1203 process and confirmed through discussions with IRS officials familiar with the processing of these cases. Of the 100 cases that were identified in IRS’s database as being the most recently closed, we determined that 92 were available for review. For the 8 cases that were not available, IRS identified 3 as being duplicative, and we were advised by IRS not to include them in our review. In addition, according to IRS, 5 other cases were not available for review because the employee left IRS before TIGTA finished the investigation. (These cases were recorded as “not adjudicated.”) We performed a limited quality control check of the recorded data on a randomly selected 12 percent of the 92 cases. In addition, for 19 of the 92 cases, missing data prevented us from computing case processing times. As a result, processing times could only be calculated for 73 of the 92 cases included in this review. Table 3 provides a breakdown of the number of cases opened before, on, or after March 1, 2002—the date that the new section 1203 process was implemented. All cases were closed after March 1, 2002. The case processing times were calculated based on the dates that the case was opened by either TIGTA or IRS and closed by IRS. For the closing date, we used the date that the employee was issued a letter informing him or her of the outcome of the case. If there was no such letter, we used other documentation contained in the file that indicated the date that the case had been closed.
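The processing-time calculation described above amounts to the elapsed calendar days between each case’s opening and closing dates. The sketch below uses hypothetical dates, not figures from the actual case files.

```python
from datetime import date

def processing_days(opened: date, closed: date) -> int:
    """Elapsed calendar days between case opening (by TIGTA or IRS)
    and closure (date of the outcome letter or other closing document)."""
    return (closed - opened).days

# Hypothetical case records (dates are illustrative only).
cases = [
    (date(2001, 11, 5), date(2002, 4, 12)),
    (date(2002, 3, 1),  date(2002, 7, 30)),
]
times = [processing_days(opened, closed) for opened, closed in cases]
print(times)  # elapsed days per case
```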
In 5 of the cases, the employee had resigned or retired and the case file did not include a letter or other documentation to indicate the case had been closed. For these cases, we used the employees’ resignation or retirement date. Our work was conducted in accordance with generally accepted government auditing standards. Tables 4, 5, and 6 summarize information on section 1203 allegations for the period July 1998 through September 2002. Table 4 provides information on substantiated section 1203 allegations by disposition, and table 5 provides information on employee firings by type of misconduct and employee GS level. Table 6 provides a breakdown of results for the 3,512 allegations that were investigated, including allegations that were substantiated as a section 1203 violation, allegations that were substantiated for nonsection 1203 misconduct, and allegations that were not substantiated. Some of the survey questions were open-ended, allowing respondents to provide thoughts and opinions in their own words. In order to categorize and summarize these responses, we performed a systematic content analysis of the open-ended responses. Two GAO analysts reviewed the responses and independently proposed categories. They met and reconciled these; each comment was then placed into one or more of the resulting categories, and agreement regarding each placement was reached between at least two analysts. All initial disagreements regarding placement into categories were discussed and reconciled. As shown in figure 7, the number of responses in each content category was then summarized and tallied. The following description of section 1203 case processing applies to all allegations, except those related to compliance with federal tax laws and employee and taxpayer civil rights, which are processed separately.
Complaints involving allegations of section 1203 misconduct are subject to a 3-stage process: (1) reporting and investigative determination, (2) fact-finding, and (3) adjudication. Figure 8 provides an illustration of the various stages of the processing of a section 1203 case. Any taxpayer, taxpayer representative, or IRS employee can file a complaint with IRS or TIGTA alleging employee misconduct under section 1203. IRS managers have been instructed to forward all allegations to TIGTA, which has primary responsibility for receiving and investigating complaints involving allegations of section 1203 misconduct. Once it receives the complaint, TIGTA is to enter information on the allegation into its information tracking system for managing and reporting purposes. After entering the information into its information system, TIGTA is to make an initial determination about whether the allegation should be investigated as a potential act of employee misconduct. If TIGTA finds sufficient information indicating a section 1203 violation may have occurred, TIGTA is to investigate the allegation. Similarly, TIGTA may find sufficient grounds to conduct an investigation for misconduct unrelated to section 1203. In either case, the results of the TIGTA investigation are provided to IRS as a formal Report of Investigation. TIGTA may also determine that the complaint does not contain specific enough information, or that it does not have the necessary expertise, to be able to make a determination on the complaint’s investigative merit. In these instances, TIGTA is to refer the complaint to the Commissioner’s Complaint Processing and Analysis Group (CCPAG) to determine whether there is a basis for an investigation. A case development team within CCPAG is to receive the allegation and enter information on the allegation into its information tracking system.
The role of the case development team is to gather the relevant facts related to the allegation to determine whether the essential elements of a section 1203 violation may be present. Upon its evaluation of the allegation, CCPAG may conclude that the complaint is frivolous (e.g., a taxpayer alleges misconduct because the employee did not agree with the taxpayer that the tax laws are unconstitutional). In these instances, CCPAG is to forward the allegation to IRS’s Frivolous Return Program at the Ogden Service Center. After gathering the relevant information—for allegations not considered frivolous—CCPAG is to forward the allegation to the Board of Employee Professional Responsibility (BEPR) for its review. BEPR includes the Director of CCPAG and representatives from the Small Business and Self Employed Division. IRS’s Strategic Human Resources and Agency-Wide Shared Services employee relations specialists and Office of Chief Counsel General Legal Services may serve as advisors to BEPR. TIGTA also serves in an advisory role on BEPR. IRS’s Senior Counselor to the IRS Commissioner participates in BEPR’s review of allegations involving IRS executives, GS-15s, and senior manager pay band employees. BEPR’s review may result in several outcomes. Specifically, BEPR may concur with the case development team’s finding that the allegation has no merit. In this situation, no investigation is conducted and the Director of CCPAG is to issue a letter to the employee and his/her manager advising that there will be no investigation. If BEPR concurs with the case development team’s findings that no misconduct occurred, the Director of CCPAG is to issue a clearance letter to the employee and his/her manager. The case is then closed. If BEPR concurs with the case development team’s findings that other misconduct may have occurred, BEPR is to recommend a referral to TIGTA or IRS management for investigation, and regular disciplinary procedures are to apply.
If BEPR agrees with the case development team’s findings that section 1203 misconduct may have occurred, BEPR is to recommend a referral to TIGTA for investigation. Once TIGTA or BEPR determines an allegation to have investigative merit as a possible section 1203 violation, TIGTA is to perform the investigation. Specifically, TIGTA may review records, interview witnesses, and consult technical experts as necessary to develop information relevant to the alleged violation. In some cases, the possible section 1203 misconduct may also be a potential violation of criminal law. In these cases, TIGTA is to refer its findings to a local U.S. Attorney Office for consideration of criminal prosecution. After the investigation is completed, and a referral is made to a U.S. Attorney, if appropriate, TIGTA is to provide a Report of Investigation to CCPAG. All TIGTA Reports of Investigation on allegations of section 1203 violations are first to be reviewed by CCPAG to determine whether the evidence can support the allegation for a section 1203 violation. If CCPAG determines that the evidence does not support a section 1203 violation or other misconduct unrelated to section 1203, the Director of CCPAG is to issue a clearance letter to the employee and his/her manager. If CCPAG determines that the evidence presented supports a section 1203 violation, it is to forward the Report of Investigation to the “proposing official”—a management official generally two levels of supervision above the subject of the allegation—for further action. Acting with the advice of an employee relations specialist, the proposing official is to determine whether misconduct has been substantiated by a preponderance of the evidence. If the proposing official determines that no misconduct occurred, the official is to issue a clearance letter to the employee. If this official determines that the evidence supports misconduct unrelated to section 1203, IRS’s regular disciplinary procedures are to apply. 
If this official determines that the specific elements of a section 1203 violation appear to be established by a preponderance of the evidence, he or she is to issue a letter to the employee proposing removal from the federal service. The employee has the right to respond to this proposal letter and to review any information relied upon by the proposing official. The case is to be submitted to the deciding official, generally an executive at least three levels of supervision above the employee. The deciding official is to review the entire case file, including the employee’s response, to determine whether the charge has been proved. If the deciding official determines that no misconduct occurred, the official is to issue a clearance letter to the employee. If this official determines that the evidence supports misconduct unrelated to section 1203, IRS’s regular disciplinary procedures are to apply. If the deciding official determines that a section 1203 violation is established by a preponderance of the evidence, the employee is to be removed from the federal service, unless the Commissioner of Internal Revenue decides that another penalty is to be imposed. The Commissioner of Internal Revenue has established a Section 1203 Review Board (Board) to consider all cases in which a deciding official finds that a section 1203 violation has occurred. Composed of various IRS executives from different IRS units, the Board is to review the allegation to determine whether a penalty less than firing the employee is appropriate. If the Board does not find mitigation to be appropriate, the case is not submitted to the IRS Commissioner. The case is then returned to the deciding official who is to impose the statutory penalty of termination of employment. If the Board recommends mitigation, the Commissioner reviews the recommendation.
If the Commissioner mitigates the penalty, other disciplinary actions, such as written counseling, admonishment, reprimand, or suspension, may be applied. The Commissioner’s decision on the level of discipline to be imposed is not subject to review outside IRS. After the Commissioner’s decision, the employee may appeal the finding that a violation occurred. In addition to the persons named above, the following persons made key contributions to this report: Kevin Dooley, Evan Gilman, Patty Hsieh, Shirley Jones, Stuart Kaufman, Anne Laffoon, MacDonald Phillips, Kristen Plungas, Brenda Rabinowitz, Anne Rhodes-Kline, Andrea Rogers, Wendy Turenne, and Chris Wetzel.
Section 1203 of the Internal Revenue Service (IRS) Restructuring and Reform Act of 1998 outlines conditions for firing IRS employees for any of 10 acts of misconduct covering taxpayer and employee rights and tax return filing requirements. Both IRS and the Treasury Inspector General for Tax Administration (TIGTA) have responsibilities related to section 1203. Because of concerns that section 1203 may have a chilling effect on IRS enforcement staff's productivity, GAO (1) determined the number of section 1203 allegations, (2) surveyed IRS employee perceptions about section 1203, and (3) identified problems IRS and TIGTA face in processing section 1203 cases and the extent to which they have addressed them. IRS data show that of the 3,970 section 1203 allegations IRS received from July 1998 through September 2002, IRS or TIGTA completed investigations on 3,512 allegations and substantiated 419 as violations, resulting in 71 employees being fired for section 1203 misconduct. Employee misconduct related to the two section 1203 provisions on whether employees filed their tax returns on time and accurately stated their tax liability (as opposed to the eight taxpayer and employee rights provisions) accounted for almost all of the violations and firings. Most of the IRS frontline enforcement employees who responded to GAO's survey said that they understood, but feared, section 1203. They also reported that, because of section 1203, their work takes longer and the likelihood of their taking an enforcement action, such as recommending a seizure, has decreased. However, employees also were more likely to say that other factors, such as IRS's reorganization, have had a greater impact on their ability to do their job than to say that section 1203 had a greater impact. 
IRS and TIGTA have taken steps intended to correct known problems in their processing of section 1203 employee misconduct cases--such as lengthy investigations and conflicts of interest during investigations--that may have negatively affected frontline employees' morale and productivity. However, the extent to which these steps have succeeded is unknown because IRS and TIGTA do not have a coordinated approach for evaluating how effectively they process section 1203 cases. Such an approach would include results-oriented goals, balanced performance measures to mark progress towards these goals, and means to collect performance data.
In 1969, the federal government officially adopted a measure to ascertain how many people across the country had incomes that were inadequate to meet expenses for basic needs. This poverty measure was based on the finding of the U.S. Department of Agriculture’s (USDA) 1955 Survey of Food Consumption that, on average, families of three or more persons spent one-third of their income on food. Poverty for a family of three was computed as three times the cost of the economy food plan, the least costly food plan designed by USDA. The poverty measure has been updated annually with a COL index to adjust for the change in prices nationwide, but the poverty measure has not been adjusted for differences in prices by geographic area. Thus, in 1993, a family of three with a cash income of less than $11,522 was considered to be living in poverty, regardless of place of residence. The concept of geographic COL adjustments of poverty measurement has been seen as problematic. A 1976 report to Congress on the measurement of poverty stated that “one of the most troublesome concepts of poverty measurement” was making adjustments for geographic differences in COL. It ultimately concluded that unresolved conceptual issues, such as the development of generally accepted market baskets of goods and services representative of the needs of the poor in various geographic areas, and data limitations precluded satisfactory geographic adjustments. More recently, in a 1992 report, we noted that there were insufficient data on which to base geographic adjustments to the measure of poverty. Some economists contend that adjusting the poverty measure for geographic differences in COL would be inappropriate, irrespective of the methodology used. They say that any such adjustment to reflect regional differences in market baskets would fail to recognize other regional differences that are relevant to a definition of poverty or the needs of the poor.
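The threshold arithmetic described above can be sketched as follows. The food-plan cost used here is hypothetical; only the 1993 three-person threshold of $11,522 and the three-times-food-cost rule come from this report, and the update function is a generic price-index adjustment, not the official computation.

```python
# Illustrative sketch of the original poverty-threshold arithmetic.

FOOD_SHARE = 1 / 3  # 1955 USDA survey: food averaged one-third of income

def poverty_threshold(economy_food_plan_cost: float) -> float:
    """Threshold = food-plan cost divided by the food share of income,
    i.e., three times the cost of USDA's economy food plan."""
    return economy_food_plan_cost / FOOD_SHARE

def update_threshold(threshold: float, index_old: float,
                     index_new: float) -> float:
    """Annual update: scale by the nationwide change in prices.
    No geographic adjustment is applied at any step."""
    return threshold * index_new / index_old

base = poverty_threshold(3840.67)  # hypothetical food-plan cost
print(round(base))                 # roughly the 1993 three-person figure
```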
For example, a COL index probably would not reflect differences among geographic areas in the level of support or assistance available to low-income families. To address our first two objectives, describing the function of a market basket and identifying potential methods for calculating a COL adjustment, we reviewed the relevant literature on measuring poverty and on geographic adjustment for COL and discussed these issues with specialists. These specialists included individuals associated with poverty measurement or COL data at the Bureau of Labor Statistics (BLS) and the Bureau of the Census, as well as private organizations and academic institutions. On the basis of these reviews and discussions, we identified 12 methodologies that might have potential for adjusting poverty measures to reflect geographic differences in COL. We consider these 12 methodologies to be illustrative of a wide range of potential approaches to determine geographic COL differences, but recognize that the list is not, and cannot be, exhaustive. (A more detailed account of our scope and methodology is contained in app. I.) To meet our third objective of obtaining expert opinion on the ability of the methodologies to adjust the poverty measure for geographic differences in COL, we identified experts and asked them to review the methodologies. From our list of more than 40 potential experts compiled during our literature review and initial discussions with specialists, we selected 15 experts to review the methodologies. (See app. II for a list of the selected experts.) We sent a questionnaire to these experts in which we described each methodology briefly. We asked the experts to review each of the 12 methodologies and to categorize the methodology’s potential for use in adjusting the poverty measurement for geographic differences in COL. Additionally, we asked them to discuss the strengths and weaknesses of each methodology. (See app.
III for a copy of the information and questionnaire sent to each expert.) All 15 experts responded and we tabulated their ratings for each methodology to determine the ones the experts considered most and least promising. We also analyzed the written responses on strengths and weaknesses. We did our work in Washington, D.C., between September 1994 and January 1995 in accordance with generally accepted government auditing standards. Because we did not evaluate the policies or operations of any federal agency to develop the information presented in this report, we did not seek comments from any agency. Market baskets of goods and services form the basis for determining a COL index. Of the methodologies we examined that calculate a COL index, none uses a uniform national market basket in which the same quantities of identical goods and services are used in all locations. In fact, these methodologies all used market baskets that have different measures for at least one component—for example, transportation or housing. Several of the experts, in their comments on COL methodologies, said that market baskets for COL indexes should vary to reflect differences in local standards of living. Market baskets of goods and services provide the foundation for determining COL. The composition of the market baskets, such as the items included or the quantity of one item included in relation to other items, affects the dollar values that are determined to represent COL. Conceptually, market baskets for a COL index would accurately reflect differences in tastes, as well as needs, such that an individual would derive equal satisfaction from the various market baskets priced in different geographic locations. For example, food preferences in southeastern states for low-cost cereals, such as rice and corn, lower COL in these areas, while climatic differences necessitate expenditures for home heating and warm clothing and increase the COL in northern states.
Obtaining a consensus on what should go into a COL index’s market baskets and on how to update them would be difficult. The method generally preferred by the experts we contacted to determine the items to include in market baskets is to use expert judgment to specify the requirements for physical health and social well-being. But standards have not been identified for the majority of components of a COL index’s market baskets. Even if consensus were obtained on the specific items and their quantities to include in a COL index’s market baskets, another problem would be how to keep the market baskets up to date to reflect a constant standard of living. Of the methodologies we examined that calculate a COL index, all used market baskets that reflected regional differences in standards of need and/or actual consumption patterns. Most notably, these methodologies varied in how they determined the housing and transportation components of the market baskets by adjusting for regional variation. We received numerous comments about market baskets for a COL index from the experts from whom we solicited assessments of the methodologies. Several experts noted the need to adjust the composition of the market baskets for differences in local standards of living among geographic areas. One expert commented that it is nearly impossible to obtain reliable evidence or credible expert judgments about the composition of market baskets to reflect specific local standards of living. This expert suggested that market baskets should be changed as acceptable standards are developed. The problem of keeping market baskets up to date was noted by other experts in their comments about the use of outdated data and concepts. For example, one expert specifically wanted a child care component to be included in the market baskets.
We identified 12 generic methodologies that, in some part, could contribute to the development of a COL index that potentially could be used to adjust the poverty measurement for geographic differences. Four methodologies identified baseline data, or developed a market basket that could be the basis for constructing a COL index by geographic area. Six methodologies calculated a COL index from existing cost data or a previously defined market basket. Two methodologies developed an original market basket, collected data, and calculated a COL index with those data. Table 1 provides descriptions of the 12 methodologies. (Detailed descriptions of these methodologies are found in app. III.) A few of the methodologies are now used as COL indexes, but most have not been. For example, the norms, local indexes, and economic modeling methodologies are used in the private sector as COL indexes to make geographic COL adjustments for pay and relocation decisions. Until their discontinuance in 1981, estimates from the family budgets methodology had been used by policymakers to set income eligibility criteria for employment programs and to geographically adjust wages and salaries. Several of the methodologies that identify baseline data are used in ways other than to show differences in COL. For example, USDA uses the consumption data methodology to estimate expenditures on a child, which then are used to determine payments for the support of children in foster families. Many of the methodologies were created by researchers to develop indexes to reflect COL differences, such as those categorized under the estimation models, interarea price index, and the consumer price index methodologies; but none of these are used to make geographic COL adjustments. (See app. III for detailed descriptions of how the data and indexes from the 12 methodologies are used.)
We identified two additional methodologies but could not locate research that delineated how the methodologies could be implemented to develop a COL index. For example, administrative data from public assistance programs, such as the food stamp program, have been proposed as baseline data for developing a COL adjustment that would indicate the incidence of need within a geographic location. However, in our review of the relevant literature and discussions with specialists, we did not locate appropriate data that could be translated into an index to demonstrate geographic variation. Another approach to identify baseline data for a COL index would be to use information obtained from grocery stores’ universal product code scanners. As in the case of administrative program data, we could not locate information that indicated how the product code data could be used to develop a geographic index or ratio. During the process of obtaining experts’ ratings of promise for the 12 generic methodologies we identified, some experts indicated that we had not identified and presented all possible methodologies to make such a COL adjustment. A number of the experts suggested using a combination of several attributes from the methodologies that they reviewed. In addition, they identified four other methodologies that could be considered for doing geographic COL adjustments. One was a modification of the local indexes methodology, and another was a modeling technique to develop regional variables to obtain baseline data. The other two focused on ways to revise the current poverty measurement. One methodology included the most basic levels of shelter and food as the basis for measuring poverty. The other methodology, according to an expert, is what the National Academy of Sciences panel is expected to recommend in its forthcoming report. None of these methodologies was identified by more than one of the experts, however. 
We recognize that our list of 12 methodologies is not exhaustive, but consider it to provide a fair overview of the wide range of alternatives. The fact that the experts suggested further methodologies, and that no alternative was proposed by more than one expert, suggests that no agreement now exists among experts as to the best way to adjust the measurement of poverty for geographic differences in COL. This is discussed in the next section. The observation in a 1976 report to Congress that “although there may be geographic differences in the cost of living, there is no known way to make satisfactory geographic adjustments to the poverty cutoffs,” still seems valid. The experts who we asked to assess the methodologies differed about how best to make adjustments because of numerous data and conceptual problems that they identified. Overall, the experts’ ratings of each methodology’s promise for geographically adjusting COL were mixed, and our content analysis of the experts’ comments about each methodology’s strengths and weaknesses yielded diverse and sometimes conflicting perspectives. Although the majority of experts rated certain methodologies as showing little or no promise for adjusting the poverty measurement for geographic differences in COL, no clear consensus was observed overall in the ratings the experts gave regarding the methodologies’ promise for making adjustments. A majority of the experts regarded local indexes, polling, family budgets, consumption data, and the consumer price index methodologies as showing little or no promise for making adjustments. The comparable pay methodology was found by more than two-thirds of the experts to be not promising at all. (See table 2 for experts’ ratings of methodologies.) No methodology was rated by the majority of experts as showing great or very great promise to adjust the poverty measurement for geographic differences in COL.
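The tabulation of ratings described above reduces to counting, for each methodology, how many of the 15 experts rated it at or above a given level. The scores below are hypothetical (the actual ratings are in table 2); they illustrate a methodology with majority moderate-or-better support and one that most experts found not promising.

```python
# Hypothetical expert ratings on a 1-5 scale
# (1 = no promise ... 5 = very great promise); illustrative only.
ratings = {
    "budgets":        [3, 4, 3, 5, 2, 3, 4, 3, 2, 4, 3, 3, 2, 4, 3],
    "comparable pay": [1, 1, 2, 1, 1, 1, 2, 1, 1, 3, 1, 1, 2, 1, 1],
}

def majority_at_least(scores, level):
    """True if more than half of the experts rated at or above level."""
    return sum(score >= level for score in scores) > len(scores) / 2

for name, scores in ratings.items():
    # level 3 = "moderate promise" on this hypothetical scale
    print(name, majority_at_least(scores, 3))
```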
However, three methodologies—budgets, norms, and housing data—received a rating of at least moderate promise by a majority of the experts. The budgets methodology appeared to have the most promise, but less than half of the experts rated it as having great or very great promise. Our content analysis of the experts’ comments on each methodology’s strengths and weaknesses showed that the experts shared few common views on any specific methodology. When three or more experts did express a similar comment, it most often concerned a weakness rather than a strength of the methodology being rated. Some experts identified an attribute but expressed different perspectives as to whether it constituted a strength or weakness. Examples of mixed responses included one expert indicating that a strength of a particular methodology was its adaptability for use by government, while another expert characterized the same methodology as not being adaptable for use by government. In some instances, experts agreed about a methodology’s attribute—e.g., its emphasis on children—but differed as to whether the presence of this attribute should be viewed as a strength or weakness. (See figure 1 for strengths and weaknesses of the methodologies.) Our content analysis of the experts’ comments on the strengths and weaknesses of the three methodologies that received a rating of at least moderate promise by the majority of experts illustrates both the diverse and occasionally contradictory comments of the experts. The strengths of the budgets methodology lie in its representation of low-income families and its use of health and social well-being standards in the determination of the market basket. However, its eclectic approach of using these standards from various sources, which makes it difficult to explain to laypersons, was viewed as a weakness. 
Another weakness of the budgets methodology cited by the experts is that it fails to make adjustments for regional differences in transportation and some of the other market basket components. The experts who commented about its use of expenditure data were evenly split between those who viewed this as a strength and those who said it was a weakness. This methodology was viewed as capturing both contemporary and outdated concepts of consumption needs. For example, one expert cited the use of current standards as a strength, whereas other experts cited the use of 1981-based data to determine the importance given to items in the market basket as a weakness.

The norms methodology was generally rated as promising because the COL index was frequently updated. The experts, however, differed in their comments about the methodology. For example, more than one-half of the experts said that the lowest income level for which the index was provided was well above poverty and was therefore unrepresentative of low-income families. Conversely, one expert, noting the degree of variation in income levels provided in the index, described it as “more relevant to the poor than other available sources.” Mixed responses of both strengths and weaknesses were indicated for the (1) appropriateness of the items in the market basket, (2) degree of geographic variation shown in the index, (3) ability of the methodology to be adapted and implemented by the government, and (4) cost associated with such implementation.

The housing data methodology was regarded as strong in its focus on what the experts considered the major source of variation in COL. The fact that housing was the only cost measured was also cited as this methodology’s major weakness. As shown in table 3, the experts had mixed views about the representation in the baseline data of families living in poverty. The experts also lacked agreement on whether the housing concepts were appropriate.
For example, one expert said the methodology had the “merit of focusing on rents for a specified type of apartment,” while another said that “decent, safe, and sanitary” qualities of housing should be controlled in the measure to prevent downward bias in low-income areas.

A content analysis of the experts’ comments revealed that the local indexes methodology had many weaknesses resulting from its price data collection methods, in which volunteers from chambers of commerce collect and average prices representative of the purchases of middle-management households in their local areas. This methodology was viewed as an unsuitable representation of the consumption needs of the poor. Another weakness of the local indexes methodology was its exclusion of nonmetropolitan and rural areas.

The polling methodology was regarded by several experts as a means to validate the measurement of poverty, rather than as an approach to make geographic COL adjustments. These experts said that this methodology provided insight into the relationship between an absolute measure of poverty, such as the current official measure, and a measure that is relative, that is, one that changes with growth in the economy or according to society’s perception of an adequate level of income. According to the experts’ comments, the main weakness of polling was in the quality of the data obtained through a public opinion survey. It was thought that the respondents would be biased in providing their estimates. For example, one expert wrote: “If respondents knew the survey results would be used to adjust poverty thresholds with implications for program expenditures and income taxes, then some may intentionally deflate or inflate their response, in their own self-interest.” The experts had mixed views about the costs associated with this method; some experts said it would be cost effective, while others said it would be costly.
According to the experts’ comments, the main weakness of the comparable pay methodology was its reliance on employers’ labor costs. Many experts said that such a measure included influences other than COL and that as a consequence it was inappropriate and an unsuitable substitute for COL, especially as a representation of the needs of the poor. For example, one expert said, “Geographic variations in quality of life affect the relationship between wages/salaries and living costs. Use of employer costs as a measure of living costs would introduce significant regional bias.” Many weaknesses, as well as several mixed responses, were noted for the remaining three methodologies: consumption data, family budgets, and consumer price index.

The concept of adjusting the measurement of poverty for geographic differences in COL has been seen as problematic, and remains so. We asked recognized experts to review 12 methodologies that illustrate the range of alternative approaches to adjust poverty measurement for geographic COL differences, and there was no consensus among these experts that any one methodology was the most promising for making such an adjustment. The fact that several of these experts suggested additional methodologies, but that no additional methodology was suggested by more than one of the experts, suggests to us that a consensus on any one approach does not exist. Where there does appear to be agreement, however, is that several of the methodologies offer little or no promise of appropriately adjusting the measurement of poverty for geographic COL differences. Further, obtaining a consensus on what items should go into a COL index’s market baskets to reflect regional differences in consumption would be difficult.

As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 20 days after its issue date.
At that time, we will send copies of the report to the Secretary of Commerce, the Secretary of Labor, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others on request. If you have any questions concerning this report, please call me at (202) 512-8676. Major contributors to the report are listed in appendix IV.

To address our first two objectives, describing the function of a market basket in determining a COL index and identifying potential methods for calculating a COL adjustment, we first reviewed the relevant literature and held discussions with specialists in the field. These specialists included individuals associated with poverty measurement or COL data at the Bureau of Labor Statistics (BLS) and the Bureau of the Census, as well as at private organizations and academic institutions. We also included individuals who did not support geographic adjustment of the poverty measurement, as well as those who have proposed methodologies to achieve this objective.

On the basis of our literature review and preliminary discussions with specialists, we described the function of a market basket and identified an initial set of methodologies that might have potential for adjusting poverty measurement for geographic differences in the COL. We grouped similar methodologies into 12 categories and gave a generic name to each. We excluded potential methodologies if they did not identify existing data that could be turned into a geographically adjusted index. Two methods, one based on the use of data from administrative records and one relying on data scanning of uniform product codes, were eliminated because they did not meet this criterion.
To meet our third objective of obtaining expert opinion on the ability of these methodologies to adjust the poverty measure for geographic differences in COL, we selected a panel of 15 experts and surveyed them using a data collection instrument that contained brief descriptions of each of the 12 generic methodologies we identified. We asked the panel to review each description and rate each methodology in terms of its promise for use in adjusting the poverty measurement for geographic differences in COL. The description of each methodology identified data sources, discussed the cost and time needed to develop an index with the methodology, and provided an example of how the calculations would be made and the index could be used. We asked the developer or someone very familiar with each methodology to review our brief description to ensure that it accurately conveyed the essence of the methodology. We asked the selected experts to rate each methodology on a five-point scale that ranged from “not promising at all” to “shows very great promise,” and then briefly discuss the strengths and weaknesses of the methodology. The experts were also asked to identify any additional methodology we may have overlooked and provide their views on the major challenges and costs associated with developing COL data that could be used to geographically adjust the poverty measure. We randomly chose 15 individuals to serve as experts from a candidate list of more than 40 names. To obtain a diverse candidate pool reflective of the different interests involved, we asked for nominations of potential experts from those specialists in the field and representatives of major statistical agencies that we met with during our initial discussions and literature review. To avoid potential conflicts of interest, we excluded individuals from the list who are currently serving on the National Academy of Sciences’ Panel on Poverty and Family Assistance or who are political appointees. 
We recognize that the responses we received reflect only the views of the experts included. Several of the experts initially selected were unable to participate. We replaced these individuals with alternates from the remaining pool of candidates. (See app. II for a list of the participating experts.) Before contacting our initial selections, we asked congressional staff and officials from Census, BLS, and the Office of Management and Budget to review the list for balance and to identify any additional experts they believed should be included. No additions were suggested. The selected experts received a package containing a letter of introduction, an instruction sheet, descriptions of all the methodologies, and response sheets (see app. III). The package was sent on November 14, 1994. Responses were received from all 15 experts by January 6, 1995. We tabulated the ratings for each methodology to obtain an overall assessment of the experts’ opinions of how promising each methodology was for use in adjusting the poverty threshold for geographic differences in COL. We also did a content analysis of the experts’ responses to the strengths and weaknesses question for each methodology. From an initial reading of the responses, we developed a list of cited strengths and weaknesses. We used this list to code the responses of all experts for each methodology. The coding of the responses was verified by a second coder, and a third person checked coding reliability. As a method of focusing our analysis on the recurring comments made by the experts in their discussions of each methodology’s strengths and weaknesses, we adopted a decision rule to report only those comments made by three or more experts for a particular methodology’s attribute. Experts’ comments on market baskets were identified separately and were used in our description of the function of the market basket. 
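The tabulation and the three-or-more decision rule described above can be sketched in code. This is a minimal sketch with illustrative data: the ratings and coded comments below are hypothetical, not the survey's actual responses, and the labels for the middle points of the five-point scale are assumptions (the text quotes only the endpoints).

```python
from collections import Counter

# Five-point rating scale; only the endpoints are quoted in the report,
# so the middle labels here are assumptions.
SCALE = ["not promising at all", "little promise", "moderate promise",
         "great promise", "very great promise"]

def tabulate_ratings(ratings):
    """Count how many experts gave each rating to a methodology."""
    return Counter(ratings)

def reportable_comments(coded_comments, threshold=3):
    """Decision rule from the report: keep only strengths or weaknesses
    cited by three or more experts."""
    counts = Counter(coded_comments)
    return {attribute: n for attribute, n in counts.items() if n >= threshold}

# Hypothetical responses for one methodology from a 15-expert panel.
ratings = (["moderate promise"] * 6 + ["great promise"] * 4 +
           ["little promise"] * 3 + ["not promising at all"] * 2)
comments = (["weakness: outdated 1981 base data"] * 4 +
            ["strength: uses expenditure data"] * 2)

tally = tabulate_ratings(ratings)
at_least_moderate = sum(n for rating, n in tally.items()
                        if SCALE.index(rating) >= SCALE.index("moderate promise"))
print(at_least_moderate)              # 10 of 15 experts: a majority
print(reportable_comments(comments))  # only the weakness cited 4 times survives
```

Applied to all 12 methodologies, these two steps would produce the kind of majority counts summarized in table 2 and the recurring comments reported in figure 1.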
Additionally, we used experts’ general comments on major challenges and costs associated with geographically adjusting poverty measures to illustrate our results.

Mark C. Berger, University of Kentucky
Dixie Blackley, Le Moyne College
Tom Carlin, Department of Agriculture
Lawrence Gibson, Eric Marder Associates, Inc.

This appendix contains copies of the cover letter, instruction sheet, answer sheets, and brief descriptions of the 12 methodologies that we sent to the 15 experts we selected to review the methodologies.
Pursuant to a congressional request, GAO provided information on the statistical data requirements for constructing a cost-of-living (COL) index that could be used, at the federal level, to adjust for geographic differences in living costs. GAO found that: (1) the current measurement to determine poverty levels does not account for geographic COL differences; (2) market baskets, a measure used to evaluate relative economic standing, would provide the foundation for any measure of living costs; (3) obtaining a consensus on what should go into a market basket for a COL index would be difficult; (4) there are 12 methodologies that could be used to contribute to an index to adjust the poverty measurement to reflect geographic differences; (5) the methodologies include budgeting for representative market baskets, measuring consumer spending norms, examining housing data, family budgets, or consumption data, developing various geographically specific price indexes, polling, calculating the relative amounts of time worked for each of the components of compensation, and estimating or modelling; and (6) experts' opinions about the methodologies' strengths and weaknesses were diverse and sometimes conflicting.
The Resource Conservation and Recovery Act of 1976 (RCRA) authorizes EPA to set minimum operating requirements for hazardous waste facilities to protect the public and the environment. The Occupational Safety and Health Act of 1970 authorizes the Department of Labor, through OSHA, to establish standards to protect workers’ health and safety. Under both statutes, states can be authorized to inspect facilities, take enforcement action against facility owners/operators, and assess penalties at facilities that fail to meet states’ federally approved RCRA or OSHA programs. EPA has authorized 46 states to implement their own RCRA programs, and OSHA has authorized 23 states to implement their own OSHA programs. The federal government is responsible for implementing RCRA and OSHA programs in the remaining states. According to EPA’s general operating requirements for hazardous waste facilities, workers must be trained to know the environmental requirements that apply at their facility, and facilities must have contingency plans and emergency procedures for accidents. To ensure facilities’ compliance with regulatory or permit-related requirements, EPA recommends that its regions or the states inspect facilities annually. Every other year, EPA recommends an in-depth inspection lasting several days, rather than the annual 1-day walk-through. Facilities that accept Superfund waste must be inspected within the 6-month period prior to receiving such waste. During inspections, EPA and the states complete checklists of items to review while observing facilities’ operations and reviewing facilities’ records and files. OSHA’s health and safety regulations are intended to ensure that employees can recognize and avoid unsafe conditions and are instructed in the handling of any special equipment, among other things. At hazardous waste facilities, employees must receive special hazardous material training. 
To ensure compliance with OSHA’s regulatory requirements, federal or state OSHA inspectors conduct either “programmed” (planned) inspections or “unprogrammed” inspections to follow up on complaints, referrals, or accidents. The scheduling of federal programmed inspections is based on industries’ history of health or safety violations. Facilities within the types of industries that have a history of many violations receive programmed inspections for health or safety by OSHA’s field offices. OSHA also reserves some resources to conduct programmed health inspections at randomly selected facilities having a history of few health violations. States may use different methods for scheduling programmed inspections. OSHA and the states conduct unprogrammed inspections in response to complaints, referrals, and accidents resulting in catastrophes or fatalities. According to an OSHA Office of Policy official, during fiscal year 1993, about half of the inspections were programmed, or targeted, as a result of particular industries’ violations. The remainder of the inspections were unprogrammed. OSHA’s inspections rely on inspectors’ observations as well as interviews with employees and reviews of records. As of November 1994, 162 incinerators were operating in the United States. Of these, 141 had their final permits, which impose facility-specific operating requirements. The remaining 21 were considered in interim status. When an existing hazardous waste facility first becomes subject to RCRA’s requirements for permits, it generally assumes “interim status” until its operator completes the permit application process. A facility under interim status is allowed to continue operating under general operating requirements, pending EPA’s or the state’s approval of the facility’s final permits. 
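The rank-based targeting just described reduces to a simple selection rule: an industry receives programmed inspections only if its relative-risk ranking falls within the resources available. The sketch below is illustrative only; the industry names and ranks are hypothetical, and the top-100 cutoff reflects the resource limit that OSHA officials cite elsewhere in this report, not an actual scheduling system.

```python
# Hypothetical (industry, relative-risk rank) pairs; rank 1 = highest risk.
INDUSTRIES = [
    ("construction", 12),
    ("manufacturing", 35),
    ("refuse systems (incl. hazardous waste incinerators)", 220),
]

def programmed_inspection_candidates(industries, cutoff=100):
    """Keep only industries ranked within the resource cutoff;
    the rest receive no programmed inspections."""
    return [name for name, rank in industries if rank <= cutoff]

print(programmed_inspection_candidates(INDUSTRIES))
# Refuse systems, ranked 220th here, falls outside the cutoff, so hazardous
# waste incinerators draw no programmed inspections under this rule.
```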
In May 1990, after local citizens and workers made complaints or allegations about waste handling practices at an incinerator in North Carolina, EPA requested that the Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry (ATSDR) evaluate health threats posed by the incinerator. Although routine RCRA inspections conducted while the facility operated from 1976 through 1989 had not detected or confirmed these allegations, ATSDR concluded that waste handling operations at the facility had posed a significant health threat to employees. In September 1990, at EPA’s request, OSHA and EPA formed a task force to evaluate compliance with health and safety requirements at 29 hazardous waste incinerators, including all commercial facilities with final permits, all facilities under interim status, and all incinerators burning Superfund waste. The task force’s May 1991 report summarized the results of these joint inspections. In total, EPA and OSHA detected 395 violations. The task force’s report made five recommendations to EPA and four recommendations to OSHA to improve the coverage of inspections and educate compliance officials and industry, among other things. Of the task force’s recommendations, EPA and/or OSHA have fully implemented three. However, the agencies have not fully implemented four other recommendations.

EPA’s and OSHA’s follow-up on violations. On the basis of the 75 RCRA violations detected, EPA and the states initiated enforcement actions and collected over $2 million in penalties. The violations detected include the facilities’ failure to provide adequate environmental training and inability to respond fully to emergencies. OSHA and the states also completed enforcement actions for the 320 OSHA violations and collected $44,000 in penalties. The violations detected include the facilities’ failure to provide adequate hazardous material training, conduct medical surveillance, or update contingency plans for emergencies.
EPA’s and OSHA’s education of industry. EPA and OSHA conducted outreach to the combustion industry to ensure the industry’s compliance with their regulations. EPA and OSHA officials said they jointly wrote to combustion industry representatives to emphasize the importance of compliance with health and safety requirements. EPA also met with combustion industry representatives to tell them that the task force had found significant health and safety violations that needed to be addressed.

EPA’s education of compliance officials. Following the recommendation that EPA improve its regional and state officials’ knowledge about incineration, EPA developed a training program for conducting inspections and designated combustion experts in each of the agency’s 10 regional offices. These combustion experts meet regularly to discuss issues concerning hazardous waste incinerators and other combustion facilities.

EPA’s inspection coverage. Although the task force’s report recommended that EPA adopt some of the task force’s inspection procedures so that EPA could better scrutinize industry’s compliance with the agency’s regulations, EPA did not fully implement the recommendation. In particular, the task force’s inspectors used a new checklist that expanded the checklist used during EPA’s routine inspections. This new checklist was designed to evaluate the effectiveness, and not just the presence, of employee training programs, contingency responses, and emergency plans. Furthermore, interviews of employees during the task force’s inspections assessed employees’ knowledge of environmental requirements and employees’ ability to carry out contingency plans and emergency procedures. But in general, during routine inspections, EPA or the states only review employers’ records to ensure that employers have a training program and that plans are on file.
After the task force made its recommendations in December 1990, an EPA Assistant Administrator sent a memo to regional administrators asking that they distribute the task force’s inspection checklist and employee interview guide to their staff and the states. EPA’s Technical Assistance Branch Chief also orally instructed regional enforcement section chiefs to include items from the task force’s checklist in the regions’ routine inspections. In addition, EPA included the new checklist in the agency’s inspection training manual and training courses. However, some of the EPA regions and states did not adopt the task force’s checklist as suggested or directed because, according to regional compliance and enforcement officials, they were not aware of headquarters’ instructions. Furthermore, according to an EPA Technical Assistance Branch official, EPA headquarters did not follow up to ensure that inspection procedures were changed because EPA believed the changes would be made, since it included the checklist in the training manual and training courses. An EPA Technical Assistance Branch Chief said that even if regions and states had adopted the task force’s checklist and interview guide, it would be difficult for inspectors to duplicate the information obtained during the task force’s inspections because the inspections included both EPA’s and OSHA’s interviews and were very focused. However, according to a regional inspector, while time is a factor during inspections, interviews of employees could routinely be included in all inspections, routine or in-depth, or on a case-by-case basis. These interviews would help confirm industry’s compliance with EPA’s requirements and assess employees’ knowledge of required duties. 
As a result of our work, EPA’s Assistant Administrator for Enforcement and Compliance Assurance issued a memorandum, dated September 23, 1994, to Regional Administrators and other RCRA officials requiring them to adopt the task force’s inspection protocol, which includes using the revised checklist and employee guide, for workers’ safety and health in regional RCRA Compliance Evaluation inspections. In addition, the memorandum requires that regional inspectors refer these violations to regional OSHA officials.

EPA’s research on the use of certain operating equipment and review of permits. EPA did not fully implement the recommendation that it conduct research on the cause for and impact of using certain operating equipment (automatic waste feed cutoffs and emergency safety vents, or vent stacks) and that it reopen permits, as necessary, to address the use of this equipment. During the task force’s inspections, EPA observed the frequent use of automatic waste feed cutoffs at about half of the 29 facilities and the frequent use of vent stacks at 9 of these facilities. Automatic waste feed cutoffs prevent waste from entering the combustion chamber of an incinerator when operating conditions fluctuate outside certain parameters, such as those for temperature. Vent stacks protect workers and equipment by releasing gases when equipment malfunctions. While both are considered safety devices, EPA considers their frequent use an indication of poor operating practices. In particular, the frequent use of waste feed cutoffs (1) may be a sign of unsteady operation and (2) may cause the residue to be treated less efficiently. Furthermore, gases released through vent stacks contain more hazardous particles than gases routed through the air pollution control devices. In response to the recommendation, EPA conducted experiments at two of its research incinerators.
However, because of funding and equipment limitations, EPA’s initial tests did not fully answer questions about the effects of using waste feed cutoffs and vent stacks. EPA believed that states had taken steps to place controls on the use of these devices at the facilities that the task force had found to have the greatest number of cutoffs and releases. Because these tests were inconclusive and because EPA believed that states had taken steps to control frequent usage, EPA did not review or revise other permits to place controls on the use of this equipment at the other facilities that the task force had found to have an excessive number of cutoffs and releases. For example, at one facility that the task force found to have an excessive number of waste feed cutoffs, no action has been taken. State officials told us they wanted to place controls on the use of waste feed cutoffs and vent stacks at this facility, but because EPA’s regulations do not specifically address controls over this equipment, the use of any such controls would have to be negotiated when the permit was renewed. In commenting on our report, EPA stated that its concern is not with the use of automatic waste feed cutoffs per se, but with facilities that may frequently use automatic waste feed cutoffs. This is especially true when facilities exceed their permits’ operating limits if the waste feed cutoffs occur while waste remains in the system. EPA drafted a policy memorandum in 1992 to provide guidance to permit writers so they could place proper controls on the use of waste feed cutoffs and vent stacks in new permits and permits for facilities requesting modifications. EPA did not complete the draft memorandum because of other priorities, such as the agency’s need to work with the regions and states on implementing the newly issued boiler and industrial furnace regulations and focusing on site-specific incinerator issues. 
According to an official in the Permits and State Programs Division, EPA did, however, revise its permit writers’ training to include guidance on controlling the use of waste feed cutoffs and vent stacks. However, according to a combustion expert and Alternative Technology Section Chief, a policy memorandum would further support regions’ and states’ efforts to place controls over the use of waste feed cutoffs and vent stacks. State officials expressed a desire for such guidance. By December 1996, EPA plans to revise its 1981 regulations for incinerators to, among other things, clarify that exceeding a permit’s operating parameters or bypassing the air pollution control device violates the permit regardless of whether an automatic waste feed cutoff occurs. In the interim, 21 incinerators currently are awaiting their final RCRA permits. In May 1993, EPA placed a high priority on issuing permits for existing combustion facilities that do not have final permits. While EPA does not anticipate that all of these facilities will be granted permits by December 1996, it hopes to make substantial progress.

OSHA’s education of compliance officials (inspection expertise). OSHA has not implemented the task force’s recommendation that the agency improve its inspection expertise. According to an OSHA Office of Policy official, a memorandum of understanding entered into with EPA’s Office of Enforcement in 1990 might have resulted in improved inspection expertise and knowledge of hazardous waste incinerators’ operations for OSHA. This memorandum provides a framework for exchanging information and technical and professional assistance, conducting joint EPA-OSHA inspections, referring violations to each agency, and coordinating compliance and enforcement information. According to an OSHA Office of Policy official, although the memorandum was implemented, it did not result in improved inspection expertise or increased knowledge.
A Senior Enforcement Counsel with EPA told us that EPA’s Office of Enforcement did not have oversight responsibilities for inspection and enforcement activities at hazardous waste facilities. Furthermore, EPA’s Office of Enforcement did not provide information to EPA headquarters’ compliance staff who are responsible for directing EPA’s regional compliance activities at hazardous waste facilities, including inspection and enforcement—which are conducted primarily at EPA’s regional and state levels. Because EPA headquarters did not direct the regions to coordinate their inspections of combustion facilities with OSHA and the regions did not suggest that states coordinate their inspections of combustion facilities with OSHA, the memorandum was not fully carried out. However, in June 1994, EPA consolidated inspection and enforcement responsibilities in the agency’s new Office of Enforcement and Compliance Assurance. According to an EPA Senior Enforcement Counsel official and an OSHA Office of Policy official, the consolidation of the responsibility to develop policy and guidance for inspections and enforcement actions within the new office will aid in carrying out the purpose of the memorandum and therefore in meeting the intent of the task force’s recommendation. Furthermore, in September 1994, EPA’s new Office of Enforcement and Compliance Assurance directed regions to inform OSHA of any facilities found in violation of RCRA’s health and safety requirements, as required by the memorandum of understanding. In commenting on our report, OSHA stated that it has trained 245 federal and state compliance officers at its Training Institute to increase their knowledge of hazardous waste sites’ operations. We recognize that OSHA does have a training program that disseminates knowledge of hazardous waste site operations for its enforcement officials and that this training program has continually been improved. 
However, our discussions with officials in OSHA’s Training Institute and OSHA’s Directorate of Policy and Office of Field Programs reveal that OSHA has not made any changes to the training given to its enforcement officials as a result of the task force’s recommendations.

OSHA’s inspection coverage. OSHA also has not implemented the recommendation that the agency improve the coverage of its inspections by specifically including hazardous waste incinerators on its lists of programmed inspections. The refuse systems industry, which includes commercial hazardous waste incinerators, had a priority ranking, in terms of relative risk when compared with other industries, of 122 out of 324 in fiscal year 1991 and 150 out of 372 in fiscal year 1992. Following the task force’s report, OSHA instructed that in fiscal years 1991 and 1992, any programmed inspections conducted at facilities included in the refuse systems industry be limited to two sectors of the industry: “Disposal and Collection of Acid Waste” facilities and “Incinerator Operations” facilities. However, even though incinerators were given a higher priority for inspection within the refuse systems industry, the industry itself was not ranked high enough, with respect to relative risk, to result in any programmed inspections at hazardous waste incinerators. According to OSHA’s Director of Data Analysis, OSHA did not inspect incinerators under this initiative because few of OSHA’s federal or state offices have sufficient resources to conduct health inspections at industries that are not ranked in the top 100. Following fiscal year 1992, OSHA no longer restricted inspections of refuse systems industries to facilities that dispose of or collect acid waste or that incinerate. Furthermore, in fiscal year 1993, the refuse systems industry’s relative risk ranking fell to 220. Since the task force made its inspections, EPA and/or OSHA, and states have inspected 22 facilities that have operating incinerators.
However, the types of inspections conducted after the task force’s inspections differed in scope from the task force’s inspections, and EPA, OSHA, and the states have not detected as many or the same pattern of health or safety violations as did the task force. Since 1990, EPA and the states conducted 108 inspections at the 22 facilities and detected 630 violations. These inspections found a wider range and variety of violations than the task force found. However, fewer violations have been detected in the categories that the task force assessed, including personnel training, contingency plans, and emergency response. While EPA said that this may be due, in part, to improvements in industry’s training of its workers as a result of the task force’s inspections, as noted earlier, EPA’s inspections only determined whether training programs existed. On the other hand, the task force’s inspections focused on the effectiveness of training for the workers. Furthermore, EPA’s and the states’ subsequent inspections were broader in scope and looked at all aspects of the facilities’ operations. As a result, violations of a wider array of regulatory requirements were detected, including those for the facilities’ noncompliance with permits, the management of containers, and incinerator operation requirements. These subsequent inspections and enforcement actions resulted in an additional $4 million in collected penalties. According to EPA and state officials, all but one of the incineration facilities have returned to compliance following these inspections. (App. I contains additional information on the number and types of violations detected during the task force’s and subsequent inspections.) OSHA and the states have conducted few health or safety inspections since the task force’s inspections, and those that have been conducted were narrow in scope. 
OSHA and the states have not conducted any programmed health or safety inspections at the 22 operating incineration facilities since 1990 because the industries were ranked as a low priority and were not randomly selected for inspection. For example, in fiscal year 1993, OSHA’s relative risk and priority ranking for commercial incinerators was 220 out of the 381 industries ranked. According to OSHA’s Director of Data Analysis, it is not surprising that OSHA has not scheduled any programmed inspections at hazardous waste incinerators because of their relatively low risk and because of the low probability of their being randomly selected. An OSHA Office of Policy official said that OSHA prefers to target its resources at industries that OSHA views as more dangerous to workers’ health and safety, such as manufacturing and construction industries. OSHA has, however, responded to eight complaints or referrals at five incineration facilities and collected about $22,000 in penalties. According to our analysis of the violations that OSHA found after the task force’s inspections, none were the same as the violations the task force had detected at those five facilities. The violations have since been resolved. (App. II includes a comparison of health and safety violations detected during the task force’s and subsequent inspections.) In addition to those actions recommended by the task force’s report, EPA and OSHA have initiated other actions to protect health and safety at incineration facilities. EPA proposed a draft strategy for issuing permits to remaining incineration, boiler, and industrial furnace facilities under interim status and improving combustion regulations and policies. OSHA is planning to issue a regulation requiring hazardous waste facilities, including incinerators, to have accredited training programs for workers. However, OSHA has no means to ensure that all facilities submit programs and receive accreditation.
Partially in response to public concerns about incinerators and other types of combustion facilities, in May 1993, EPA issued a draft strategy for ensuring the safe and reliable combustion of hazardous waste. As part of that strategy, EPA designated the issuance of new incinerators’ permits a low priority for 18 months so it could focus its resources on issuing permits for existing facilities under interim status, including the 21 discussed previously. In addition, the strategy calls for incorporating dioxin emission standards in new permits and incorporating more stringent controls over metals. EPA has directed regions to use the stricter operating standards as guidance for writing and issuing new permits if permit writers determine that these new standards are necessary to protect human health and the environment. EPA also targeted combustion facilities, including a total of 10 incinerators and other hazardous waste combustion facilities, for two separate enforcement initiatives in September 1993 and February 1994. These initiatives focused primarily on hazardous waste combustion operations and resulted in EPA- and state-assessed fines of over $9 million. As directed by the Superfund Amendments and Reauthorization Act of 1986, OSHA is developing new standards and procedures for accrediting training programs for workers at hazardous waste facilities, including incinerators. OSHA expects this requirement to become final in December 1994. OSHA intends that the proposed regulation will result in workers’ reduced exposure to hazardous substances and thus will help prevent fatalities and illnesses. Under the proposed regulation, all employees working on-site and exposed to hazardous substances and health or safety hazards will receive OSHA’s accredited training. However, OSHA has no method to ensure that (1) all hazardous waste facilities submit training programs for accreditation and (2) all facilities’ programs are accredited.
OSHA and the states plan to rely on inspections to verify that facilities are complying with the requirement. However, since 1990, OSHA and the states have conducted few inspections at hazardous waste incineration facilities, and given the relatively low risk that the agency assigns to incinerators, OSHA and the states would conduct inspections at incinerators only if they were randomly selected or in response to complaints, referrals, or accidents. EPA could assist OSHA in ensuring that facilities have accredited programs by, for example, (1) verifying, during inspections by EPA and the states, whether training programs have received accreditation from OSHA and, if not, informing OSHA and (2) providing OSHA with EPA’s hazardous waste facility identification data, which would give OSHA an inventory of such facilities that OSHA currently does not have. OSHA could use such information to track which facilities have not submitted training programs for accreditation. However, OSHA has not explored with EPA the ways in which EPA could provide this assistance. EPA and OSHA have generally followed up on the task force’s recommendations. However, EPA has not fully implemented two key recommendations that, in our view, could be undertaken relatively easily. In particular, some EPA regions and states have not adopted the revised checklist and employee interview guide as requested by EPA headquarters in December 1990, in part, because EPA did not follow up to ensure that regions and states did so. In response to our work, EPA recently issued another memorandum that specifically directs regions and states to adopt the task force’s inspection protocol, which includes the revised checklist and employee interview guide. If regions and states follow through and implement this requirement, inspectors will be better able to determine not only that employees have received the required training but also the effectiveness of that training.
However, because EPA issued this memorandum only recently, it is too soon to know if the regions and states will follow the agency’s directive. Furthermore, although some states took action to improve the operations of facilities that made frequent use of automatic waste feed cutoffs and vent stacks, EPA and the states did not revise permits at other facilities that the task force also found were frequently using this equipment. However, in 1992, EPA drafted guidance for permit writers to clarify the use of these operating devices in new permits and permits for which modifications were being requested, but it never completed the guidance. While EPA plans to revise regulations for incinerators that will clarify when this operating equipment can be used, these regulations will not be completed until the end of 1996 at the earliest. In the meantime, EPA hopes to make substantial progress in issuing RCRA permits for 21 facilities under interim status. Without guidance to include controls on the use of automatic waste feed cutoffs and vent stacks, some of these permits may not include these stricter operating requirements. OSHA plans to make one substantive improvement, as required by the 1986 Superfund amendments act, to improve workers’ health and safety by accrediting hazardous waste training programs. Under current plans, however, the agency will have no way of knowing whether this requirement is actually being met. On the other hand, by working with EPA, either through the memorandum of understanding or directly with EPA staff, OSHA could explore what assistance EPA could provide OSHA to determine compliance with its accreditation requirement. This assistance could include relying on EPA and the states to identify, through RCRA inspections, facilities failing to have OSHA-accredited training programs and referring them to OSHA.
To ensure that EPA regions and states comply with EPA’s directive to adopt the task force’s inspection protocol to assess the effectiveness of training for workers, contingency plans, and emergency preparedness, we recommend that the Administrator, EPA, follow up, after an appropriate interval, to ensure that federal and state inspectors include revised procedures in their inspections. To ensure that permit writers have the necessary guidance to place controls on automatic waste feed cutoffs and emergency vent stacks prior to EPA’s issuance of revised regulations for incinerators in 1996, we recommend that the Administrator, EPA, complete and issue the agency’s draft guidance relating to waste feed cutoffs and vent stacks. To ensure that all hazardous waste facilities’ training programs receive accreditation, we recommend that the Secretary of Labor direct the Administrator, OSHA, to work with EPA to develop a means to ensure that all hazardous waste facility employers submit their training programs to OSHA and receive required accreditation. EPA and OSHA provided us with written comments on a draft of this report. EPA noted that some EPA regions and some states did not adopt or include the task force’s inspection protocol, which includes the revised checklist and employee interview guide, in their routine inspections. EPA also concurred with our finding that EPA needs to provide guidance to permit writers on the use of automatic waste feed cutoffs and vent stacks. The agency plans to complete guidance and has included it in EPA’s fiscal year 1995 plans. EPA’s comments and our responses are included in appendix III. OSHA generally disagreed that it did not fully respond to the task force’s recommendations that it improve its coverage of inspections by including hazardous waste incinerators on its list of targeted inspections and that it improve the inspection expertise of its compliance officers. 
While we recognize that OSHA took some actions to carry out these recommendations, those actions neither resulted in any programmed inspections of hazardous waste incinerators (which would have improved OSHA’s inspection coverage) nor improved OSHA’s inspection expertise. As discussed earlier, the memorandum of understanding between OSHA and EPA was ineffective in improving the inspection expertise of OSHA’s inspection officers because no joint inspections were conducted at incinerators as a result of the memorandum. Also, while OSHA has made changes to its education curriculum, none resulted from the task force’s report. Furthermore, OSHA stated that its current plans to improve workers’ health and safety by accrediting hazardous waste training programs will be sufficient, along with industry outreach, to ensure that the quality of employers’ safety and health training programs will be enhanced. However, on the basis of our review of OSHA’s methods of selecting facilities for inspections and OSHA’s history of performing few inspections, we continue to believe that OSHA’s current procedures will not ensure the fulfillment of OSHA’s stated intent that all employees working on-site and exposed to hazardous substances will receive OSHA’s accredited training. OSHA’s working with EPA could provide an opportunity for that assurance. OSHA’s comments in their entirety and our responses to them are provided in appendix IV. We conducted our review from October 1993 through December 1994 in accordance with generally accepted government auditing standards. Our scope and methodology for conducting this work are discussed in appendix V. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will make copies available to others on request. Please contact me on (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix VI.
EPA = Environmental Protection Agency
RCRA = Resource Conservation and Recovery Act
Violations were found in several areas including groundwater monitoring, the condition of tanks, and compliance with former enforcement actions.
The following are GAO’s comments on the Environmental Protection Agency’s (EPA) letter dated November 23, 1994.
1. We appreciate EPA’s efforts to follow up on the task force report’s recommendations and believe that the report accurately reflects actions taken by the agency, such as revising permit writers’ training to include improved approaches to control the use of emergency safety vents and automatic waste feed cutoffs, thus increasing permit writers’ consciousness of this issue.
2. We have revised the report to clarify that the report did not call for revising all existing incinerator permits but, rather, only those permits where revisions were viewed as necessary because of the high number of safety vents and automatic waste feed cutoffs.
3. We revised the report to reflect how EPA is addressing the use of automatic waste feed cutoffs in permits, namely, that EPA is placing controls over the use of waste feed cutoffs.
4. We revised the report to reflect this information.
5. We revised the report to include this information.
6. We continue to believe that EPA did not fully implement the recommended research because the recommendation was intended to result in a determination of why waste feed cutoffs and stack vents were used and their impact. We agree that EPA conducted limited tests, but we believe that these initial tests were not sufficient and that limited resources have not allowed the agency to conduct follow-up research to determine the cause and impact of using waste feed cutoffs and stack vents.
7. We revised the report to include this information.
8. The report recognizes EPA’s efforts to designate and train combustion experts in each region under the caption entitled EPA’s Education of Compliance Officials.
9. We revised the report to limit our discussion to EPA’s actions taken after the task force’s inspections.
10. We revised the report to include this information.
11. We revised the report to include this information.
12. We revised the report to include this information.
13. We revised the report to include this information.
14. We revised the report to reflect EPA’s concerns regarding operating conditions for using automatic waste feed cutoffs and stack vents.
15. We revised the report to clarify this information.
16. The report recognizes that the task force’s recommendation was that EPA reopen permits, as necessary, to address the use of automatic waste feed cutoffs and stack vents.
17. We have revised the report to show that EPA’s approach is not to impose numerical limits on using waste feed cutoffs or vent stacks but to write permit operating conditions so that the facility must comply with operating conditions as long as waste is present in the unit.
18. We revised the report to reflect EPA’s priorities in fiscal year 1992.
19. We revised the report to include this information.
20. We revised the report to clarify that the use of automatic waste feed cutoffs is not in itself a violation.
21. We revised the report to include this information.
22. We have revised the report to reflect the activities of both EPA’s Office of Enforcement and OSHA under the memorandum. We continue to believe that the memorandum was not as successful as intended on the basis of information stated in our report.
23. We revised the report to include this information.
24. We revised the report to include this information.
25. We revised the report to include this information.
26. We revised the report to include this information.
27. We revised the report to include this information.
28. We revised the report to include this information.
29. We revised the report to include this information.
The following are GAO’s comments on the Department of Labor’s letter dated November 8, 1994.
1.
We continue to believe that the Occupational Safety and Health Administration (OSHA) has not implemented the task force’s recommendation to improve its coverage of inspections by including hazardous waste incinerators on OSHA’s lists of programmed inspections. The task force’s recommendation was intended to make sure that hazardous waste incineration facilities were targeted for programmed inspections. However, because of the manner in which OSHA targets high-risk industries for programmed inspections, no incinerators are inspected unless OSHA responds to a complaint, a referral, or an accident. We did not assess or evaluate what impact OSHA’s policy for targeting and inspecting high-risk industries has on workers’ health and safety and, as such, do not have a position on this policy. Nevertheless, the fact remains that OSHA’s choice of actions did not result in the implementation of the task force’s recommendation. The only inspections that were performed were in reaction to complaints or referrals. Programmed inspections are broad in scope and are separate from, and in addition to, OSHA’s inspections in response to complaints, referrals, and fatalities/catastrophes, which are narrower in scope.
2. We continue to believe that OSHA has not implemented the task force’s recommendation that OSHA improve its inspection expertise. We have revised the report to point out that we recognize that OSHA does have a training program for its enforcement officials that includes hazardous waste, and while improvements have been made to this training program, none of these improvements were made as a result of the task force’s recommendation. Our discussions with officials in OSHA’s Training Institute and OSHA’s Directorate of Policy and Office of Field Programs reveal that improvements in the training program were not a result of the task force’s recommendation.
3.
While the 1990 memorandum of understanding between OSHA and EPA’s Office of Enforcement may have the potential for enhancing OSHA’s inspection expertise, this memorandum did not result in any such improvement because no joint OSHA-EPA inspections were conducted at incinerators following the task force’s inspections. As discussed in the report, EPA’s Office of Enforcement did not have oversight responsibilities for regional or state compliance activities at hazardous waste incineration facilities. Also, this office did not provide information to EPA’s compliance staff who were responsible for directing EPA’s regional and state compliance activities. Because EPA headquarters did not direct the regions to coordinate with OSHA, the regions did not suggest that states do so when inspecting combustion facilities; because no joint inspections occurred after 1990, the memorandum was not fully carried out. Thus, improvements in OSHA’s inspection expertise have yet to be demonstrated as a result of the task force’s recommendation or this memorandum.
4. We continue to believe that OSHA has no means to ensure that all hazardous waste facilities will have accredited worker training programs. It is the intent of OSHA’s new training program standard that all employees working on-site and exposed to hazardous substances will receive OSHA’s accredited training. However, as we pointed out, OSHA has no means of ensuring compliance, since (1) OSHA and the states have conducted few inspections at hazardous waste incineration facilities, (2) OSHA considers these facilities a low risk in relation to other industries, and (3) OSHA and the states would inspect these facilities only if they are randomly selected or in response to complaints, referrals, or accidents.
Our recommendation that OSHA work with EPA to develop a means of ensuring that all hazardous waste facility employers submit their training programs and receive accreditation could provide OSHA with a more comprehensive means of determining compliance with OSHA’s new accreditation requirement.
To review the status of implementing the task force report’s recommendations, we obtained documentation on EPA’s follow-up actions, education provided to industry, education provided to compliance officials, inspection coverage, research about certain operating equipment, and review of permits from the Resource Conservation and Recovery Act (RCRA) Enforcement and Permits and State Programs Divisions, Office of Research and Development, EPA, and from industry combustion experts. We also obtained documentation on OSHA’s follow-up actions, education provided to industry, education provided to compliance officials, inspection coverage, inspection priorities, and field office guidance from staff in OSHA’s Directorate of Policy, Office of Statistics and Office of Field Programs. To determine the results of subsequent inspections and enforcement actions at the 29 facilities we reviewed, we interviewed and obtained documentation on the inspections conducted, violations detected, enforcement actions, and penalties assessed and collected during January 1, 1991, through December 31, 1993, from headquarters officials in EPA’s RCRA Enforcement Division and OSHA’s Office of Policy and from cognizant regional and area office officials. We also interviewed and obtained data from state environmental officials in Alabama, Arkansas, Connecticut, Idaho, Illinois, Kentucky, Louisiana, Michigan, Montana, New Jersey, New York, Ohio, South Carolina, and Texas and from state OSHA officials in Kentucky, Michigan, and South Carolina.
To determine other actions taken by EPA and OSHA to improve workers’ and the public’s health and safety at hazardous waste incineration facilities, we interviewed and obtained documentation on EPA’s and the states’ enforcement actions and draft waste minimization and combustion strategy, and OSHA’s proposed policies and procedures for Hazardous Waste Training Accreditation from (1) EPA’s Office of Permits and State Programs and RCRA Enforcement Divisions and (2) OSHA’s Directorate of Policy, Office of Health and Safety Standards Program, Office of Field Programs, and Office of Statistics. We conducted our review from September 1993 through December 1994 in accordance with generally accepted government auditing standards.
David W. Bennett, Evaluator
Richard P. Johnson, Attorney
Gerald E. Killian, Assistant Director
Marcia B. McWreath, Evaluator-in-Charge
Rita F. Oliver, Evaluator
James L. Rose, Evaluator
Bernice Steinhardt, Associate Director
Pursuant to a congressional request, GAO reviewed hazardous waste incinerators' compliance with federal health and safety regulations, focusing on the: (1) Environmental Protection Agency's (EPA) and Occupational Safety and Health Administration's (OSHA) efforts to protect workers' health and safety at hazardous waste incinerators; and (2) results of inspections and enforcement actions at 29 facilities. GAO found that: (1) EPA and OSHA have fully implemented three task force recommendations to correct violations, educate the industry, and improve inspections; (2) EPA and the states have initiated enforcement actions and collected over $2 million in penalties for safety violations; (3) EPA and OSHA have conducted education outreach programs on the importance of health and safety compliance; (4) EPA has taken steps to educate its compliance officials, but it has not fully implemented recommendations to improve EPA inspection coverage, conduct research on the use of certain operating equipment, and revise facilities' incineration permits to limit the use of this equipment if necessary; (5) OSHA has not fully implemented recommendations on educating its compliance officials and improving its inspection coverage; (6) EPA and the states reinspected the incinerator facilities, but they did not detect the same pattern of violations that the task force found because the scope of their inspections differed; (7) OSHA did not reinspect the facilities because it believes that the relative risk of working at these incinerators is low; (8) EPA and OSHA have taken additional steps beyond the task force's recommendations to protect workers' health and safety at the incinerators; and (9) OSHA plans to require these facilities to have accredited training programs for workers handling hazardous wastes, but OSHA does not have a good plan to ensure that all facilities submit their programs for accreditation.
MDA’s BMDS is being designed to counter ballistic missiles of all ranges—short, medium, intermediate, and intercontinental. Because ballistic missiles have different ranges, speeds, sizes, and performance characteristics, MDA is developing multiple systems that, when integrated, provide multiple opportunities to destroy ballistic missiles in flight for the strategic defense of the United States and regional defense of its deployed forces and allies. The BMDS architecture includes space-based sensors, ground- and sea-based radars, ground- and sea-based interceptor missiles, and a command and control, battle management, and communications system to provide the warfighter with the necessary communication links to the sensors and interceptor missiles. Table 1 provides a brief description of some of the BMDS systems, which MDA refers to as elements, and programs included in this year’s assessment. More details can be found in our report. When MDA was established in 2002, the Secretary of Defense granted it exceptional flexibility to set requirements and manage the acquisition of the BMDS in order to quickly deliver protection against ballistic missiles. This decision enabled MDA to rapidly deliver assets, but we have reported that it has come at the expense of transparency and accountability. Examples of key problems we have cited in reports in recent years and which continue to affect MDA’s acquisitions are highlighted below. MDA’s highly concurrent acquisition approach has led to significant cost growth, schedule delays, and in some cases, performance shortfalls. Concurrency is broadly defined as the overlap between technology development and product development or between product development and production.
While some concurrency is understandable, committing to product development before requirements are understood and technologies are mature or committing to production and fielding before development is complete is a high-risk strategy that often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. At the very least, a highly concurrent strategy forces decision makers to make key decisions without adequate information about the weapon’s demonstrated operational effectiveness, reliability, and readiness for production. According to MDA officials, they have taken some steps to identify and track concurrency in their programs. However, high levels of concurrency adopted earlier for some programs persist today. Testing disruptions have reduced the knowledge planned to be available to inform acquisition decisions and understand performance. For example, flight test failures disrupted MDA’s acquisitions of several components and forced MDA to suspend or slow production of three out of four interceptors, including the Ground-based Midcourse Defense (GMD) interceptor and the Aegis BMD Standard Missile-3 Block IB (SM-3 Block IB). In the GMD case, because MDA moved forward years ago with CE-I and CE-II interceptor production before completing its flight testing program, test failures have exacerbated disruptions to the program. Specifically, because the program has delivered approximately three-fourths of the interceptors for fielding, it faces difficult and costly decisions on how it will implement corrections from prior test failures. Additionally, after fielding these assets, the program has had to add tests that were previously not planned, in order to assess the extent to which prior issues were resolved. It also had to delay tests that were needed to understand the system’s capabilities and limitations. MDA has been challenged to meet some of its goals for the European Phased Adaptive Approach (EPAA).
During the past several years, MDA has been responding to a mandate from the President to develop and deploy new missile defense systems in Europe. This four-phase effort was designed to rely on increasingly capable missiles, sensors, and command and control systems to defend Europe and the United States. Each successive phase is expected to defend larger areas against more numerous and more capable threat missiles. DOD delivered the first phase, for short- and medium-range defense of Europe, in December 2011, and has been making progress in developing some systems to support future phases. However, in March 2013, the Secretary of Defense canceled two programs planned for the fourth phase, thus eliminating that phase, which was intended to provide an additional layer of defense for the United States against intercontinental ballistic missiles. The cancellations were driven in part by affordability concerns, schedule delays, and technical risks associated with these programs. Our previous work found similar issues with other EPAA efforts. We also found that MDA has lacked a comprehensive management approach to synchronize key EPAA activities. Finally, MDA’s acquisition baseline reporting has provided limited insight into the cost and schedule progress of the BMDS. Due to the acquisition flexibilities it has been granted, BMDS’s entrance into DOD’s acquisition process is deferred, and laws and policies that generally require major defense acquisition programs to take certain steps at certain phases in the acquisition process will not apply until the program enters this process. For example, major defense acquisition programs are generally required to document key performance, cost, and schedule goals in an acquisition baseline at certain phases in the acquisition process; because BMDS has not progressed through threshold phases of the DOD acquisition process, this requirement is not yet applicable.
To improve the transparency and accountability of BMDS development efforts, Congress has enacted legislation requiring MDA to establish some baselines. MDA reported baselines for several BMDS programs to Congress for the first time in its June 2010 BMDS Accountability Report (BAR). Specifically, MDA’s baselines, including resource and schedule baselines, are reported in the BAR and are updated annually. Since 2011, although progress has been made to improve the reporting, we have found issues affecting the usefulness of MDA’s acquisition baselines for oversight due to (1) a lack of clarity, consistency, and completeness; (2) a lack of high-quality supporting cost estimates and schedules; and (3) instability in the content of the baselines. Our work has recommended a number of actions that can be taken to address the problems we identified. Generally, we have recommended that DOD reduce concurrency and more closely follow knowledge-based acquisition practices. We also made recommendations designed to reduce testing risk and to improve schedule and cost reporting. DOD has generally concurred with our recommendations and has undertaken some actions to reduce acquisition risk and improve accountability and transparency. This year, we found that MDA gained important knowledge about BMDS system-level performance and individual elements by successfully executing several flight tests. We also found that MDA further improved some of its acquisition practices for managing the European Phased Adaptive Approach (EPAA) and improved the clarity of its resource and schedule baselines. In April 2014, we reported that MDA made progress in demonstrating the system’s capabilities by conducting the first system-level operational flight test in September 2013. This is a significant achievement because it is the first time that MDA conducted an operational flight test that involved multiple elements working simultaneously. 
The test involved warfighters from several combatant commands and, according to independent testing officials, recreated a potentially realistic scenario. During this test, MDA launched two medium-range ballistic missile targets, including its newly developed air-launched extended-medium range ballistic missile (eMRBM). Both the Aegis SM-3 Block IA and THAAD successfully intercepted their targets, demonstrating progress towards achieving an integrated BMDS. In addition, the Aegis BMD SM-3 Block IB and GMD programs successfully conducted developmental flight tests in 2013 that demonstrated key capabilities and modifications made to resolve prior issues. Specifically, the Aegis BMD SM-3 Block IB intercepted all targets in its last three flight tests. GMD also successfully conducted a non-intercept flight test of its CE-II interceptor, demonstrating the performance of a guidance component that MDA redesigned in response to a December 2010 flight test failure. We also found that DOD improved the acquisition management of EPAA. In our first report on the subject in 2010, we assessed progress of EPAA acquisition planning against six key acquisition principles that synchronize acquisition activities and ensure accountability. We found that DOD had established testing and acquisition plans for technology development and engineering, and had begun work on identifying key stakeholders. This year, we found improvements in these areas. For example, DOD completed identifying EPAA stakeholders and, in 2012, issued a directive updating the warfighter role in testing and capability acceptance. Lastly, in April 2014, we found that MDA continued to improve the clarity of its resource and schedule baselines, which are reported to Congress in its annual acquisition report called the BAR. 
In its 2013 BAR, MDA continued to incorporate useful changes it made last year, and took some additional actions to improve the completeness and clarity of the BAR baselines by: identifying the date of the initial baseline and, if applicable, the date when the initial baseline was most recently revised; explaining most of the significant cost and schedule changes from the current baseline estimates against both the estimates reported in the prior year’s BAR and the latest initial baseline; and making the baselines easier to read by removing cluttered formatting, such as strikethroughs and highlights, that made some of the events listed in past BARs unreadable. Although MDA has taken some steps to improve its acquisitions, the agency continues to face several challenges that we have found in previous reviews. Specifically, it faces challenges stemming from high-risk acquisition practices, as well as challenges in BMDS testing, managing the development of EPAA capabilities, and reporting resource and schedule baselines that support oversight. Until MDA addresses these challenges, the agency and decision makers may not obtain the information needed to assess the capabilities of the BMDS or make informed acquisition and investment decisions. While MDA has gained important insights through testing and taken some steps to improve management and increase transparency, it still faces challenges stemming from higher-risk acquisition strategies that overlap production activities with development activities. While some concurrency is understandable, committing to production and fielding before development is complete often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. It can also create pressure to keep producing to avoid work stoppages. 
Our April 2014 report found that Aegis BMD SM-3 Block IB and GMD, which have already produced some of their assets before completing testing, discovered issues during testing that could affect or have affected production. Although both programs demonstrated progress in resolving previous issues, some of which stemmed from their concurrent acquisition strategies, testing revealed new issues. Specifically: An interceptor failure during a September 2013 test of Aegis BMD SM-3 Block IB means that a key component, common to the deployed SM-3 Block IA, may need to be redesigned and flight tested. While the failure review is not yet complete, if a redesign is necessary, interceptors that were already produced may require retrofits. MDA continues to procure new SM-3 Block IBs while it investigates the cause of the failure. A GMD CE-I interceptor failure in a July 2013 flight means that MDA did not demonstrate that the interceptor could perform under more challenging conditions than previously tested, further delaying knowledge of the interceptor’s performance capability. Additionally, the failure precluded confirmation that previous design changes improved performance, and delayed the upcoming test needed to resume production of CE-II interceptors. According to program officials, the failure review is not complete, but the failure could have been caused by a component common to both the CE-I and CE-II interceptors. It is still unclear what, if any, corrective action will be needed. The GMD program has had many years of significant and costly disruptions caused by production getting well ahead of testing and then discovering issues during testing. Consequently, even though some assets have already been produced, MDA has had to add tests that were previously not planned and delay tests that are necessary to understand the system’s capabilities and limitations. 
Additionally, since it has delivered approximately three-fourths of its interceptors, MDA faces difficult and costly decisions on how it will implement corrections from prior test failures. As a result of these development challenges, the GMD program will likely continue to experience delays, disruptions, and cost growth. We made recommendations to address the ongoing issues with both systems in our April 2014 report. First, we recommended that the Secretary of Defense direct MDA’s Director to flight test any modifications that may be required to the Aegis SM-3 Block IB before the Under Secretary of Defense for Acquisition, Technology, and Logistics approves full production, allowing the program to manufacture the remaining interceptors. Second, we recommended testing the fielded GMD CE-I interceptor in order to complete the original purpose of the failed test: to (1) demonstrate the CE-I’s effectiveness against a longer-range threat in more challenging conditions, (2) confirm the effectiveness of previous upgrades, and (3) confirm that any new modifications to address the failure work as intended. DOD partially concurred with the recommendation on the Aegis SM-3 Block IB, stating that MDA will verify the efficacy of any modifications by testing and that the full production decision will be vetted through the DOD process. DOD did not agree with the recommendation on GMD, stating that the decision to flight test the interceptor will be made by the Director, MDA, based on the judgment of other stakeholders. In this year’s reports, we found that testing has provided less knowledge than initially planned. While MDA accomplished some testing goals, it experienced testing shortfalls, including the failures of Aegis and GMD interceptors I mentioned above. The agency also combined, delayed, and deleted some tests, and eliminated test objectives in others. These changes reduced the knowledge expected to be available to understand the capabilities and limitations of the BMDS. 
Examples of key testing problems we cited in this year’s reports are: Operational Integration—Although the September 2013 operational flight test demonstrated layered defense between Aegis BMD and THAAD, the Director, Operational Test and Evaluation concluded that the test did not achieve true integration. Specifically, there were system network issues, interoperability limitations, and component failures. For example, the test uncovered several issues with communication networks that are needed for interoperability between the elements. Interoperability is important because it can improve missile defense effectiveness and mitigate some limitations of the systems working alone. Test plan revisions continue to reduce the knowledge planned to be available to understand BMDS performance and inform acquisition decisions. In our March 2014 and April 2014 reports, we found that MDA combined, delayed, and deleted some tests, and eliminated test objectives in others. For example, MDA had to make some adjustments to its September 2013 operational flight test, reducing the number of targets from five to two and removing the participation of more mature elements. The agency also reduced the number of ground tests, which are used to assess performance and interoperability. While MDA added other ground tests to mitigate some effects of this reduction, they are smaller in scope and may not provide the same amount of data about how the systems work together. Previously, GAO has made recommendations to improve MDA’s ability to gather expected knowledge from testing. For example, we recommended that MDA add non-intercept tests for new targets and ensure that its test plan can absorb unforeseen events, like failures, in order to minimize disruptions to the test schedule. 
We also recommended that MDA synchronize its testing with development and delivery schedules for its assets. MDA generally concurred with our recommendations, but has not fully implemented them. In March 2014, we found that while MDA made further improvements to the way it manages EPAA, it has yet to develop or implement a complete management strategy for synchronizing these efforts. Specifically, MDA has not established an integrated schedule and has yet to completely define EPAA requirements. As a result, it remains unclear how different EPAA efforts are aligned together and what constitutes success in delivering EPAA capabilities. Considering that the defensive capability planned for EPAA increasingly depends on the integrated performance of the participating systems, an acquisition approach that identifies and synchronizes all needed activities becomes increasingly important. While flexibility is a hallmark of the EPAA policy, it also increases the risk of delivering less capability than expected without demonstrating the actual performance of what is delivered. In fact, our March 2014 report found concurrency, fragmentation of development activities, and delays for some originally planned capabilities. For example, we found that some systems may be delivered later than originally anticipated for integration activities. This reduces the time to discover and correct issues. We also found schedule delays that reduced both the capability MDA plans to deliver and the understanding of how that capability will perform. For example, although MDA delivered the first set of capability in December 2011, an upgrade originally planned for 2014 is now expected in 2015. Additionally, we found that MDA split the delivery of capability it initially planned to deliver in 2015 into two segments. It now plans to deliver what it calls “basic” or “core” capability in 2015 and the remainder in 2017. 
Similarly, MDA also realigned its plans for the capability it initially planned for 2018 into two segments—designating a subset of the originally planned capability to be delivered in 2018, with the remainder in 2020 or later. Finally, MDA postponed its plans to conduct a formal system-level end-to-end assessment of EPAA capabilities because of concerns with data reliability associated with such tests. MDA is currently making investments to develop the tools it needs to improve the reliability of its system-level assessments, but these tools are expected to be ready only after two-thirds of EPAA capabilities have been delivered. We have previously made recommendations to improve management of EPAA, which are highlighted in this year’s report. Although DOD generally concurred with these recommendations, it has not yet fully implemented them. Although we found in March 2014 that MDA took some additional steps to improve the clarity of its resource and schedule baselines, this was the fourth year that we have found MDA’s resource baselines are not sufficiently reliable to support oversight. Additionally, issues with the content and presentation of the schedule baselines continue to limit the usefulness of the information for decision makers. According to agency officials, MDA is taking steps to improve the reliability of its resource baselines; however, until MDA completes these efforts, its baselines will not be useful for decision makers to gauge progress. Since MDA first reported baselines in June 2010, we have found that the underlying information supporting its resource baselines does not meet best practice standards for high-quality cost estimates. The baselines reported in its 2013 BAR remain unreliable because the agency is still in the process of improving the quality of the cost estimates that support its baselines. For example, MDA has not fully implemented its cost estimating handbook. 
In April 2013, we reported that, in June 2012, MDA completed an internal Cost Estimating Handbook, largely based on GAO’s Cost Estimating and Assessment Guide (GAO-09-3SP, Washington, D.C.: March 2009), which, if implemented, could help address nearly all the shortfalls we identified. According to MDA officials, the agency is still in the process of applying that handbook to its cost estimates, and therefore revised estimates for BMDS elements included in the 2013 BAR were not ready for our review. MDA also has not obtained independent cost estimates of the reported baselines. Officials from DOD’s Office of the Director for Cost Assessment and Program Evaluation told us that although they examined costs for some BMDS elements over the last two years, they have not completed a formal independent cost estimate for a BMDS element since 2010. In addition, MDA’s cost estimates reported in the 2013 BAR do not include operation and support costs funded by individual military services. In April 2013, we found that MDA was not reporting the operation and support costs borne by other military services and concluded that, as a result, MDA’s reported costs may significantly understate the full costs for some BMDS elements. We recommended MDA include these costs in its resource baselines reported in the BAR. DOD agreed that decision makers should have insight into the full costs of DOD programs, but the department stated that the BAR should only include content for which MDA is responsible. As a result, DOD does not currently report the full costs for MDA’s missile defense acquisitions. However, limiting the baseline reporting to only MDA costs precludes decision makers from having insight into all the costs associated with MDA’s weapons systems. We continue to believe that reporting these costs would aid both departmental and congressional decision makers as they make difficult choices of where to invest limited resources. 
In the National Defense Authorization Act for Fiscal Year 2014, Congress took steps to address concerns over MDA’s cost estimates. As a result, we did not make any new recommendations regarding cost this year. However, we plan to continue to monitor MDA’s progress because establishing high-quality cost estimates that are accurate, credible, and complete is fundamental to creating realistic resource baselines. In April 2014, we also found that assessing MDA’s progress in achieving its schedule goals is difficult because MDA’s 2013 schedule baselines are not presented in a way that allows decision makers to understand or easily monitor progress. For instance, MDA’s schedule baselines identify numerous events, but provide little information on the events and why they are important. In addition, MDA’s schedule baselines do not present any comparisons of event dates. Because MDA’s schedule baselines only present current event dates, decision makers do not have the ability to see if and how these dates have changed. We recommended that the Secretary of Defense direct the MDA Director to improve the content of the schedule baselines by highlighting critical events, explaining what these events entail and why they are important, and by presenting information in a format that allows identification of changes from the previous BAR as well as from the initial baseline. DOD concurred with our recommendation. This concludes my statement; I am happy to answer any questions you have. For future questions about this statement, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to the work this statement is based on include David B. Best and Patricia Lentini, Assistant Directors; Susan C. Ditto; Aryn Ehlow; Wiktor Niewiadomski; John H. 
Pendleton; Karen Richey; Brian T. Smith; Jennifer Spence; Steven Stern; Robert Swierczek; Jay Tallon; Brian Tittle; Hai V. Tran; Alyssa Weir; and Gwyneth B. Woolwine. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In order to meet its mission, MDA is developing a diverse group of BMDS components including (1) land-, sea-, and space-based sensors; (2) interceptors; and (3) a battle management system. These systems can be integrated in different ways to provide protection in various regions of the world. Since its inception in 2002, MDA has been given flexibility in executing the development and fielding of the ballistic missile defense system. This statement addresses recent MDA progress and the challenges it faces with its acquisition management. It is based on GAO's March and April 2014 reports and prior reports on missile defense. The Department of Defense's (DOD) Missile Defense Agency (MDA) made progress toward its goals of improving acquisition management, accountability, and transparency. The agency gained important knowledge for its Ballistic Missile Defense System (BMDS) by successfully conducting several important tests, including the first missile defense system-level operational flight test. Additionally, key programs successfully conducted developmental flight tests that demonstrated key capabilities and modifications made to resolve prior issues. MDA also made some improvements to transparency and accountability. For example, MDA improved the management of its acquisition-related efforts to deploy a missile defense system in Europe, and continued to improve the clarity of its resource and schedule baselines, which are reported to Congress for oversight. Although some progress has been made, MDA acquisitions are still high risk, due to inherent technical and integration challenges, tight timeframes, strategies that overlap development and production activities, and incomplete management tools. More specifically: MDA faces challenges stemming from higher-risk acquisition strategies that overlap production activities with development activities. 
While some concurrency is understandable, committing to production and fielding before development is complete often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. GAO found that the Aegis Ballistic Missile Defense SM-3 Block IB and Ground-based Midcourse Defense programs, which have already produced some of their assets before completing testing, discovered issues during testing that have affected or continue to affect production. Testing continues to fall short of goals. For example, the first-ever system-level operational flight test failed to demonstrate true integration. MDA also combined, delayed, and deleted some tests, and eliminated test objectives in other tests. These challenges reduced the knowledge MDA had planned to obtain in order to understand the capabilities and limitations of the BMDS. MDA has not yet fully developed or implemented a complete management strategy for synchronizing its efforts to deploy missile defense in Europe. As a result, it remains unclear how different European Phased Adaptive Approach (EPAA) efforts are aligned together and what constitutes success in delivering capabilities in Europe. Issues with the content and presentation of resource and schedule baselines continue to limit their usefulness as management tools. For the fourth year, GAO has found that MDA's cost estimates are unreliable for some BMDS elements and do not include certain costs borne by the military services, which may significantly understate total costs. Recently, Congress took steps to require that improvements be made to MDA's cost estimates, so GAO did not make any new cost recommendations. MDA's schedule baselines continue to be presented in a way that makes it difficult to assess progress. For instance, MDA's schedule baselines identify numerous events, but provide little information on the events and why they are important. 
In April 2014, GAO recommended that MDA verify any changes needed for the SM-3 Block IB missile through flight testing before approving full production; retest the fielded GMD interceptor to demonstrate performance and the effectiveness of changes; and take actions to improve the clarity of its schedule baselines. DOD partially concurred with the recommendation on the SM-3, stating that MDA will verify the efficacy of any modifications by testing and that the production decision will be vetted through the DOD process. DOD did not agree with the recommendation on GMD, stating that the decision to flight test the interceptor will be made by the Director, MDA, based on the judgment of other stakeholders. GAO previously made recommendations on EPAA and testing. DOD generally concurred with them. GAO continues to believe all recommendations are valid.
This section provides background information on (1) asset management for water utilities, (2) federal funding for asset management, (3) water utilities’ structures, and (4) EPA’s infrastructure needs assessments. To assist water utilities in adopting asset management, in 2003, EPA developed an asset management framework for water utilities. In 2008, EPA incorporated this framework into a best practices guide for water utilities based on similar frameworks used by water utilities in Australia and New Zealand. EPA’s asset management framework instructs water utilities to (1) assess the current state of their assets, (2) determine the level of service they need to provide to customers, (3) identify those assets that are most critical to their operations, (4) incorporate life-cycle costs, and (5) develop a strategy for the long-term funding of the repair and replacement of their assets. As shown in figure 1, EPA’s 2008 best practices guide describes the five components in EPA’s asset management framework, which are characterized by a range of practices. According to EPA’s best practices guide, these practices can be implemented at varying levels of sophistication, depending on the size and needs of the utility. For example, a small water utility with few assets can document its inventory of assets on paper, while a large water utility with many assets may use a software program. Together, according to EPA’s 2008 best practices guide, these practices make up a water utility’s asset management program and are to be documented in the water utility’s asset management plan. The asset management plan serves as a written record that the water utility can use, much like a budget or strategic planning document, to communicate plans, progress, and future goals, as well as user rate adjustments and recommended infrastructure investments. 
According to an EPA fact sheet on building an asset management team, asset management requires water utility staff who can promote and articulate the benefits of asset management. The fact sheet further states that a successful asset management program requires resources, including time and money, to implement, as well as the support of political leaders who have the authority and willingness to commit public resources and personnel. We and others have cited examples of cost savings resulting from asset management. In our March 2004 report, in addition to the water utility in California that saved $12 million, we found that a water utility in Massachusetts used asset management and saved $20,000 in oil purchase and disposal costs for its pumps and decreased the hours spent on preventive maintenance by 25 percent from the hours recommended by the equipment manufacturer. In addition, a 2007 study of asset management by the U.S. Conference of Mayors also found that public water utilities in cities had experienced savings in capital costs and operations and maintenance as a result of asset management. Further, a 2008 EPA fact sheet about asset management for local officials stated that implementing asset management may require some up-front costs but could result in cost savings for water utilities. In their 2011 Memorandum of Agreement, EPA and USDA agreed to collaborate in promoting ways that small water utilities could better manage their infrastructure needs and highlighted the use of asset management to ensure long-term technical, managerial, and financial capacity. The agencies also agreed to coordinate agency activities and financial assistance in areas that would increase the technical, managerial, and financial capacity for small water utilities. 
The memorandum stated that EPA and USDA would encourage communities to implement system-wide planning, including asset management, and that the two agencies would share and distribute resources to water utilities and provide training and information. In this same memorandum, EPA and USDA stated that both agencies supported increasing the technical, managerial, and financial capacity of water utilities nationwide. EPA and USDA funding for asset management activities falls under various larger programmatic budget categories. EPA funds asset management in the following three categories: (1) grants to provide training and technical assistance to water utilities to improve financial and managerial capacity; (2) grants to selected public or private universities and colleges, and nonprofit organizations, to provide technical assistance to communities on a range of EPA priorities, including improving financial capacity; and (3) drinking water SRF grants to states, a portion of which may be used for increasing water utilities’ technical, managerial, and financial capacity. USDA primarily funds asset management activities through two programs: (1) the Water & Waste Disposal Technical Assistance & Training Grants program, which provides grants to nonprofit organizations in the 50 states for managerial technical assistance, and (2) the Circuit Rider program, which provides training and technical assistance to small water utilities in each of the 50 states on day-to-day operational, managerial, and financial issues through contracted staff called circuit riders. Appendix III provides more details about EPA and USDA funding for asset management. Small communities share some common characteristics in how they manage (govern and staff) their water utilities, according to EPA’s 2011 report on the characteristics of small water utilities. 
The report and EPA have made several observations about small water utilities and the small communities they serve. Namely, publicly-owned water utilities are typically municipalities, townships, counties, or other public entities. These entities can be governed by boards, mayors, managers, or city or town councils. Privately-owned water utilities are typically governed by corporate entities, homeowner associations, or sole proprietors. For both publicly- and privately-owned water utilities, the governing bodies are responsible for ensuring the water utility complies with state and federal laws and regulations; setting and approving annual budgets; hiring staff; and, in many cases, setting and adjusting the rates that users pay. EPA’s 2011 report also states that small water utilities are typically staffed with an operator (or superintendent), managers, and administrative staff who may work part-time. In some cases, a publicly-owned water utility may hire a private company to operate and maintain its facility. EPA estimates the nation’s drinking water and wastewater utilities’ capital infrastructure needs over the next 20 years by administering two needs assessment surveys to states every 4 years: the Drinking Water Infrastructure Needs Survey and Assessment and the Clean Watersheds Needs Survey. In completing the questionnaire for the drinking water needs assessment survey, utilities report infrastructure needs to EPA; for the clean water needs assessment survey, states report these infrastructure needs to EPA. EPA then uses the data from the drinking water needs assessment survey to determine each state’s grant allocation for its Drinking Water SRF program. According to EPA officials, the agency does not use the clean water needs assessment survey to determine each state’s allocation of Clean Water SRF program funds, but it reports the data to Congress and the public. EPA works with states and the Office of Management and Budget (OMB) to produce the surveys. 
The questionnaires for both needs assessment surveys ask about water utilities’ infrastructure needs, including those assets that are in need of replacement or rehabilitation. EPA officials said that they accept certain documents as support for the states’ cost information, including SRF loan applications, capital improvement plans, and asset management plans. To support their reported infrastructure needs, some water utilities submitted documentation that showed the use of asset management practices, according to the results of the 2011 drinking water needs assessment survey, and other utilities’ supporting documentation illustrated continuing gaps in knowledge about the condition and remaining useful life of their infrastructure. The results of the 2008 clean water needs assessment survey also highlighted water utilities’ use of asset management and featured examples of states that used asset management practices to determine the costs of projects submitted to EPA. The small water utilities we interviewed in our sample of 10 states are implementing some asset management practices, and the state SRF officials we interviewed in these states said that large water utilities are more likely to implement asset management practices than small water utilities. EPA, state SRF, and USDA officials in our review identified benefits that could result from water utilities’ use of asset management practices, as well as challenges water utilities face in implementing them. Officials we interviewed from small water utilities in the selected states said that they are implementing some asset management practices, and state SRF program officials in these selected states indicated that large utilities are generally more likely to implement asset management. 
Officials we interviewed at the 25 small water utilities we selected for our review generally told us they were implementing some of the asset management practices EPA identified in its 2008 asset management best practices guide, but we found differences in the extent to which these small water utility officials were implementing these practices. We discuss what we found using EPA’s framework, which consists of the five components and a written asset management plan. Current state of assets. EPA’s 2008 best practices guide states that water utilities, in assessing the current state of their assets, should know what assets they own, what condition they are in, and their remaining useful life—that is, how much longer the water utility expects their assets to last. EPA recommends that water utilities (1) compile this information into an asset inventory that lists each asset’s age, condition, service history, and remaining useful life and (2) develop maps that identify the location of these assets. Officials we interviewed from 8 of the 25 small water utilities we reviewed in the selected states told us they had an inventory listing all of their assets, and 19 of 25 told us they had an inventory that listed at least some of their assets. Of the 8 small water utilities that had complete asset inventories, 2 included information on each asset’s physical condition, and 3 included an estimate of each asset’s remaining useful life, according to the officials we interviewed at these utilities. These officials described various types of inventories, ranging from a list of assets included on insurance documents to a software program that included information about the assets’ age, condition, service history, and remaining useful life. Officials at almost all (23 of 25) of the small water utilities in the selected states told us they had maps that identify the location of at least some of their assets.
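The inventory EPA recommends (each asset’s age, condition, service history, and remaining useful life) can be illustrated with a minimal sketch; the asset, fields, and dates below are hypothetical, not drawn from any utility in the report:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One entry in a water utility's asset inventory (illustrative fields only)."""
    name: str
    install_year: int
    condition: str            # e.g., "good", "fair", "poor"
    expected_life_years: int  # engineer's or manufacturer's estimate
    service_history: list = field(default_factory=list)

    def remaining_useful_life(self, current_year: int) -> int:
        """Years of expected life left; zero once the asset is past its expected life."""
        return max(0, self.install_year + self.expected_life_years - current_year)

# Hypothetical example: a distribution pipe installed in 1985 with a 75-year expected life
pipe = Asset("Main St. distribution pipe", 1985, "fair", 75)
pipe.service_history.append("2010: leak repaired at hydrant tee")
print(pipe.remaining_useful_life(2016))  # 1985 + 75 - 2016 = 44 years
```

A spreadsheet, insurance schedule, or commercial software can hold the same fields; the point is simply that each asset carries its age, condition, history, and an estimate of remaining life.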
These officials described a range in the types of maps used by their water utilities: one official described using maps dating back to the 1980s, while others described using maps generated with a geographic information system (GIS) to locate a water utility’s assets. Level of service. EPA’s 2008 best practices guide states that water utility operators need to know the level of service they will provide—that is, (1) what customers and stakeholders demand, (2) what regulators require, and (3) what operators need to know about the actual performance and capabilities of the water utility itself. According to EPA’s 2008 guide, water utilities should also set performance goals related to these three facets of service. Officials at 11 of the 25 small water utilities in the selected states told us they had performance goals related to customer demand, officials at 19 of 25 said they had performance goals related to meeting EPA and state regulations, and officials at 17 of 25 said they had performance goals related to the actual performance of the system. For example, an official responsible for managing one community’s water utilities described setting goals to control the loss of treated drinking water from leaky distribution pipes and the loss of untreated wastewater through leaks in the sewer system. According to EPA’s website, leaks in sewer systems can result in sewage overflows, increasing the quantity of water requiring treatment—which, in turn, can increase a wastewater utility’s costs and present public health and environmental risks. The official told us that the water utility compares the amount of water that the drinking water utility produces to the amount of water used by customers, as indicated by their individual water meters, to ensure that no more than 20 percent of the water is lost through leaky distribution pipes.
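The water-loss comparison the official described (treated water produced versus water metered at customers) amounts to a simple percentage check. A minimal sketch, using the 20 percent threshold from the example above; the volumes are hypothetical:

```python
def water_loss_percent(produced_gallons: float, metered_gallons: float) -> float:
    """Share of treated water lost between the treatment plant and customers' meters."""
    return 100.0 * (produced_gallons - metered_gallons) / produced_gallons

# Hypothetical month: 1,000,000 gallons produced, 850,000 gallons metered at customers
loss = water_loss_percent(1_000_000, 850_000)
print(f"{loss:.0f}% of treated water lost")  # 15% of treated water lost
print(loss <= 20.0)  # True: within the 20 percent goal described above
```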
Additionally, this official described comparing drinking water production to flows from the town’s sanitary sewer and wastewater treatment plant to ensure that these overall flows are neither too high nor too low and that no less than 90 percent of the community’s drinking water eventually makes it to the wastewater utility for treatment. Other officials we interviewed in the selected states described a range of goals, some of which did not relate to asset management. For example, some officials told us that their goals related to customer demand were simply to keep the water utility operating or to meet peak customer demand. Critical assets. EPA’s 2008 best practices guide states that water utilities need to know which assets are the most critical to sustaining the water utility’s operations, their risk of failing, and the consequences if they do fail. Officials from 18 of the 25 small water utilities in our selected states told us they had identified their water utility’s critical assets, but officials from only 11 of those 25 utilities said they had assessed the probability of failure for every critical asset. Officials we interviewed at 15 of the 25 utilities in the selected states told us that, generally, the likelihood and consequences of failure for assets informed their decisions about which infrastructure projects to fund. Officials we interviewed in the selected states described taking a range of approaches to identify and assess their critical assets. For example, an official from one small water utility described the process of identifying and assigning a score (i.e., minor, major, or catastrophic) to each critical asset based on the impact that asset’s failure would have on the environment and customer needs.
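A scoring approach like the one this official described, rating each critical asset’s consequence of failure and combining it with a likelihood of failure, could be sketched as follows; the numeric weights, likelihood scale, and assets are illustrative assumptions, not from the report:

```python
# Hypothetical consequence weights for the minor/major/catastrophic scale
CONSEQUENCE = {"minor": 1, "major": 2, "catastrophic": 3}

def risk_score(likelihood: int, consequence: str) -> int:
    """likelihood: 1 (unlikely) to 5 (imminent); consequence: minor/major/catastrophic."""
    return likelihood * CONSEQUENCE[consequence]

# Hypothetical critical assets: name -> (likelihood of failure, consequence of failure)
assets = {
    "well pump": (4, "catastrophic"),
    "storage tank": (2, "major"),
    "fire hydrant #12": (3, "minor"),
}

# Rank assets so the highest-risk projects are considered for funding first
ranked = sorted(assets, key=lambda name: risk_score(*assets[name]), reverse=True)
print(ranked)  # ['well pump', 'storage tank', 'fire hydrant #12']
```

Multiplying likelihood by consequence is one common convention for a risk matrix; a utility could equally use a lookup table or a qualitative ranking.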
An official with another small water utility described having enough experience with the water utility to keep mental notes about which assets were critical to the water utility’s operations, and another official described using an online, computer-based system to operate critical assets remotely and monitor their probability of failure. Minimum life-cycle cost. EPA’s 2008 best practices guide states that asset management enables a water utility to determine the minimum life-cycle cost—that is, the lowest-cost options for providing the highest level of service over the lifetime of an asset. According to the guide, water utilities can achieve this by scheduling operations and maintenance based on the condition of assets; knowing the costs to repair, rehabilitate, and replace assets; and having specific response plans in case assets fail. Officials from 19 of the 25 small water utilities we reviewed in the selected states told us they conduct regular maintenance, but officials from only 9 of 25 said they knew the cost of rehabilitation versus the cost of replacement for all of the water utility’s assets. For example, one official said that the water utility had not determined the costs of rehabilitation versus replacement because the assets were too old to be considered for rehabilitation. Additionally, officials from 15 of the 25 small water utilities in our selected states had written plans describing their water utility’s response in the case of asset failure. Concerning written plans to address asset failure, one official described a plan outlining discrete protocols the water utility should follow to address asset failures or emergencies, while another official described a list of individuals or repair companies the water utility should notify when an asset fails. Long-term funding plan.
EPA’s 2008 best practices guide states that asset management activities related to developing long-term funding plans involve determining whether the water utility has enough funding to maintain its assets based on the required level of service (i.e., customer demands, regulatory requirements, and the capability of the utility’s assets) and whether the user rates are sufficient for the water utility’s long-term needs. EPA’s 2011 report on the characteristics of small water utilities described communities and water utilities as generally separating funds for routine operations and maintenance from funds for capital improvements and noted that they may also have an emergency fund or reserve fund earmarked for a specific purpose. Officials at 19 of the 25 small water utilities in the selected states told us they had established a reserve fund to cover the cost of short-lived assets, but officials at only 11 of the 25 small water utilities told us they had enough funds to cover their water utility’s long-term capital infrastructure needs. For example, an official from one small water utility described using a two-tiered rate structure consisting of a monthly water usage rate and a depreciation fee. This official said that the water utility uses the monthly rate to cover operations and maintenance and short-term capital infrastructure costs and sets the depreciation fee aside to fund long-term capital infrastructure costs. Other officials from small utilities described a range of approaches to planning for the long term. For example, some small water utility officials told us that the water utility established separate reserves for short- and long-term capital investment needs, while an official from one small water utility described establishing a general surplus account into which the water utility put any surplus funds available at the end of the year for repairs and replacement. Asset management programs and plans.
According to EPA’s 2008 best practices guide, asset management is implemented through an asset management program and typically includes a written asset management plan. Officials at the small water utilities we interviewed said that they are implementing asset management practices as a routine course of business rather than as a concerted effort to implement a formal asset management program or plan. Accordingly, officials at only 5 of the 25 small water utilities in the selected states said that they had a written asset management plan. The small utilities in our selected states were implementing some asset management practices, but officials we interviewed with 9 of the 10 state SRF programs in our selected states told us that, generally, the large water utilities in their states were more likely than small water utilities to implement asset management. Similarly, a 2013 market research study by McGraw-Hill Construction found that larger water utilities were implementing asset management practices more frequently than smaller water utilities. Officials from the large utility we interviewed in Maine and the large utility we interviewed in New Mexico said they were implementing what they considered to be comprehensive asset management practices, that is, practices outlined in all five components of EPA’s framework. For example, officials from a large water utility in Maine said that the utility had a performance goal for the district’s fire hydrants related to the level of service provided to customers—that is, all fire hydrants would be in working order and no hydrant would spend more than 3 days out of service. These officials said that the inspections of the fire hydrants were electronically tied to the asset management software, which allows the water utility’s managers to monitor the status of the inspections and track progress related to the performance goal.
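A level-of-service goal like the Maine district’s (no fire hydrant out of service for more than 3 days) lends itself to straightforward tracking. A minimal sketch with hypothetical hydrant records and dates:

```python
from datetime import date
from typing import Optional

MAX_DAYS_OUT = 3  # the goal: no hydrant out of service for more than 3 days

def days_out_of_service(taken_out: date, returned: Optional[date], today: date) -> int:
    """Days a hydrant was (or, if not yet returned, has been) out of service."""
    end = returned if returned is not None else today
    return (end - taken_out).days

# Hypothetical records: hydrant id -> (date taken out of service, date returned or None)
records = {
    "H-101": (date(2016, 5, 1), date(2016, 5, 3)),  # repaired within the goal
    "H-214": (date(2016, 5, 2), None),              # still out of service
}

today = date(2016, 5, 9)
overdue = [hydrant for hydrant, (out, back) in records.items()
           if days_out_of_service(out, back, today) > MAX_DAYS_OUT]
print(overdue)  # ['H-214'] has been out 7 days, past the 3-day goal
```

Commercial asset management software ties such checks to inspection records automatically, as the Maine officials described; the logic behind the progress tracking is no more complicated than this.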
EPA and USDA headquarters officials and state SRF and USDA officials cited benefits for both water utilities and federal agencies resulting from water utilities’ use of asset management practices. They also cited challenges for water utilities—particularly small water utilities—in implementing asset management practices, especially the costs of doing so. EPA and USDA headquarters officials and state SRF and USDA officials cited benefits for water utilities that implement asset management, including (1) cost savings for water utilities that prolong the useful life of their assets and avoid costly emergency repairs, (2) more efficient, focused long-term planning of management and operations, and (3) improved financial health for water utilities. They also cited benefits for federal agencies. Cost savings. EPA headquarters and state SRF and USDA officials told us that water utilities implementing asset management can experience cost savings by prolonging the useful life of the assets they already own through preventive maintenance, including pipe lining and repair, and by deferring replacement costs. EPA’s guidance states that preventive maintenance can help water utilities avoid unnecessary additional costs. Officials in our review provided the following examples: An official from one small water utility in Maine told us that the process of creating an asset inventory helped the water utility identify assets they did not know they owned and therefore had not maintained. This official also told us that the utility’s use of asset management helped utility staff assess the condition of the utility’s assets and implement a regular preventive maintenance schedule to maintain those assets. According to the official, this helped the utility avoid larger replacement costs, although he could not estimate the amount of savings.
Another official with a small water utility in Idaho told us that the utility used asset management to plan the maintenance and repair of its drinking water reservoir and fire hydrants, which extended their useful life and resulted in cost savings. Another official with a small water utility in Maine told us that the utility assessed the condition of its sewer lines and realized that hydrogen sulfide—a result of the type of materials used to construct the pipes—had built up and put several lines at risk of collapsing. The official said the utility spent $12,000 to remove the hydrogen sulfide, preventing the collapse. More efficient and focused long-term planning of operations and management. State SRF and USDA officials said that water utilities implementing asset management can plan more efficiently for the long term, such as planning for capital investments, identifying infrastructure changes needed as a result of population change, planning for hiring or succession, improving emergency planning, and making decisions about repairs and replacements. The officials we interviewed highlighted the following examples: Officials at a small water utility in New Mexico told us that having an asset management plan allowed the utility to prioritize its capital investment needs, identify the associated costs, and determine what resources it would need going forward. An official with a small water utility in Arkansas told us that the utility used its asset management plan to assess the effect of a new housing development on its drinking water and wastewater infrastructure over the next 5 years. As a result of this assessment, the water utility was able to set connection and new user fees to recover the costs of adding the housing development without increasing water utility rates for existing residents.
An official with a small water utility in Maine told us that creating an asset management plan (because it identifies assets, maintenance schedules, and replacement schedules) was the best way for water utilities to capture the decades of knowledge that retiring operators and maintenance staff had about the system. The official said that an asset management plan provides some long-term planning by helping ensure the continuity of operations and service to the community after employees retire. Improved financial health. State SRF and USDA officials told us that water utilities implementing asset management can improve their financial health. EPA headquarters officials said that asset management can help water utilities better budget for capital investments and justify increases in user rates. Asset management also enables water utilities to better account for the value of their capital assets and asset depreciation, which can improve financial transparency and help the utilities with the documentation needed for financial audits. The officials we interviewed highlighted the following examples: An official at a small water utility in Maine told us that the utility used its asset management plan to determine its financial needs, calculate a new user rate to meet these needs, and successfully justify raising rates to the water utility board and its customers. Another official at a small water utility in New Mexico told us that the water utility uses its asset management program to track its finances, including the depreciation of its assets—information that is typically reviewed as part of its financial audit. Benefits to federal agencies.
According to state SRF and USDA officials we interviewed, the federal agencies with programs that provide loans and grants to small water utilities to help fund capital infrastructure can also benefit from water utilities’ use of asset management, as follows: State SRF and USDA officials said that a benefit of asset management for lenders is knowing that federal funds are better targeted toward infrastructure projects that address a community’s greatest needs and are paying for projects that the community could not afford on its own. EPA officials stated that increased use of asset management by small water utilities would improve the utilities’ assessments of their capital needs, thereby improving the quality of the data collected for EPA’s needs assessments. In addition, EPA officials we interviewed said that water utilities’ use of asset management can result in more accurate information about infrastructure needs, such as costs, and better management of the funds spent on infrastructure repairs and replacement. In addition to benefits, the state SRF and USDA officials we interviewed generally identified the following key challenges small water utilities face in implementing asset management: Costs. According to EPA’s 2008 guidance on asset management for local officials, implementing an asset management program may include start-up costs. For example, SRF officials in one state told us that start-up costs are the largest costs that water utilities, which are often challenged with limited resources, face in implementing asset management. According to state SRF and USDA officials, start-up costs can include (1) purchasing asset management tools, such as software, or creating GIS maps, or (2) hiring an engineer or consultant to create an asset management program or plan on the water utility’s behalf.
For example, officials with two separate small water utilities in New Mexico told us that they spent $34,000 and $50,000, respectively, to hire a company to create GIS maps of the water utility’s assets, and officials with another small water utility in New Mexico told us that they paid an engineer $12,000 to develop an asset management plan. Funding. State SRF and USDA officials we interviewed said that small water utilities have difficulty obtaining funds, or anticipate they will have difficulty obtaining funds, to cover the start-up and maintenance costs associated with asset management. In describing challenges with funding asset management, for example, officials with a small water utility in New Mexico told us that the utility did not have the funds to pay an engineering firm to develop the needed additional GIS maps with the locations of their assets and would have to apply to a state infrastructure grant program for an additional $50,000. Human resources. According to the state SRF and USDA officials we interviewed, small water utilities often do not have the human resources to dedicate to asset management. For example, officials with a small water utility we visited in Maine said that, at the time of our review, the Maine Department of Transportation was completing a major road project in the state that affected the buried pipes for multiple communities, including the one in which this utility operated. These utility officials said that work on these pipes, in addition to the routine day-to-day responsibilities of operating the utility, left the small staff little time to work on asset management. Similarly, an official with another small water utility in Maine told us that one staff person was assigned to develop an inventory of the water utility’s assets, and that finding the time to complete the inventory, coordinate with operations and maintenance staff, and implement additional asset management practices was the greatest constraint. Information.
Acquiring information about how to start or maintain an asset management program was another challenge for small utilities that state SRF and USDA officials cited. For example, officials with a small water utility in New Mexico said that the town leadership was unaware of asset management prior to applying for an infrastructure loan through a state program. As a result, it took some time for the water utility operator and the utility’s board to understand the asset management concept and implement the activities required as part of the state’s loan program. Political support. According to some of the state SRF and USDA officials we interviewed, small water utilities are challenged with garnering and maintaining the political support of elected officials and the local community to begin or maintain an asset management program, or to increase user rates or expend funds on repairs as a result of implementing one. For example, an official with a small water utility said that the town’s council was supportive of the recommendations the utility operator made regarding the likelihood of failure for assets and the need to address those assets before they failed. However, the town council did not always implement the recommendations because, among other things, council members said they wanted to avoid having to raise user rates to cover the costs. An official with another water utility said that the water utility would benefit from raising user rates incrementally each year, but that elected officials do not want to raise rates, even minimally, because the community would not support such increases. EPA and USDA are taking steps to help small utilities implement asset management and address identified challenges that water utilities face.
EPA and USDA recognize the benefits to water utilities and their loan programs and the need for water utilities, particularly small water utilities, to increase their use of asset management, but the agencies do not collect information on asset management that would enable them to track their efforts or compile information on costs and benefits that could be used to encourage wider use. EPA and USDA officials told us that they would like as many water utilities as possible to increase their managerial and financial capacity, including by implementing asset management. The officials said they are aware that small water utilities face challenges in implementing asset management and are taking steps to help them. To help small water utilities implement asset management, EPA and USDA provide funding for the development of asset management plans; free or low-cost tools, such as software, to develop asset management programs and plans; classroom training; and one-on-one technical assistance or coaching. Both agencies provide funding for the development of asset management plans, helping to address the challenges of costs and funding. EPA provides funds that can be used for the development of asset management plans through grants to state drinking water SRF programs. According to state SRF officials in some of our selected states, these funds can help water utilities address challenges in finding the funds to cover the start-up costs of asset management activities. For example, officials with the Maine Drinking Water SRF program told us that the state SRF program uses its drinking water SRF funds to pay up to 75 percent of the cost of developing an asset management plan for water utilities serving populations of fewer than 3,300 people and up to 50 percent for water utilities serving populations of more than 3,300 people. According to these officials, about 15 water utilities applied for this funding between 2013 and 2015.
State officials with the Delaware Drinking Water and Clean Water SRF programs, for example, told us that they recently started a new program providing grants to water utilities to fund activities leading to the development of an asset management plan. As of June 2015, the state had provided a grant ranging from $60,000 to $100,000 to each of the 4 publicly-owned water utilities that participated in the program. State SRF officials in some of the 10 states in our review told us that, in meeting state requirements for SRF loans, small water utilities in their states were engaging in some asset management practices. State officials we interviewed provided examples, such as requiring (1) a report of the inventory and condition of the utility (or preliminary engineering report) to show technical capacity; (2) a community to raise user rates to pay back the loan; or (3) a community to set up a reserve fund to pay for short-lived assets. Officials we interviewed at 10 of the 25 small water utilities in the selected states said that they currently had an SRF loan. USDA officials in the agency’s headquarters and all 10 state offices we interviewed also told us that, as a result of their loan requirements, small water utilities with USDA loans were engaging in some asset management practices. USDA headquarters officials told us that asset management is incorporated throughout their loan conditions. Specifically, USDA state officials said that they consider the following loan conditions to equate to asset management practices: requiring (1) a review of financial audits, (2) a preliminary engineering report, (3) a community to create a reserve to fund debt payments and cover the repair and replacement of short-lived assets, (4) development of an operations and maintenance manual, and (5) the restructuring of user rates to cover the cost of the loan and the repair and replacement of short-lived assets.
According to some USDA headquarters and state officials, USDA’s state offices also conduct periodic (every 3 years) inspections of the condition of the facilities they fund once they are built. Officials we interviewed at 6 of the 25 small water utilities in the selected states said that they currently had a USDA loan. USDA officials said that their use of the preliminary engineering report is the key way in which the agency introduces its loan applicants to asset management. In 2013, USDA, in conjunction with EPA and other federal agencies and states, issued a preliminary engineering report template, a planning document that, in general, includes an inventory of the categories of assets and an assessment of the assets in the entire facility (e.g., the assets involved in the project being funded and a map of the assets in the water utility); information about the need for the project (including the most critical aging infrastructure and future growth needs); and the costs for the repair, rehabilitation, and replacement of some assets. USDA regulations require loan applicants to submit a preliminary engineering report and encourage applicants to consult agency guidelines in preparing the report. In a 2013 bulletin to state officials, USDA encouraged its state offices to use the preliminary engineering report template. EPA does not require SRF loan applicants to submit a preliminary engineering report, but like USDA, it encourages its use; specifically, it encourages state SRF programs to require its use. EPA officials told us that as of October 2015, 10 state SRF programs had adopted the preliminary engineering report template and 10 other state SRF programs had adopted it and modified it by including additional requirements. Both agencies provide free or low-cost tools for developing asset management programs and plans to help address the challenges of cost and providing information.
EPA provides a free asset management software program, and both EPA and USDA provide free tools such as guidebooks, case studies, and other written materials for small water utilities on the agencies’ websites. EPA’s free software program, Check Up Program for Small Systems (CUPSS), allows water utilities to develop asset management programs and plans. EPA officials told us that the original development of CUPSS was funded with SRF funds. Users of CUPSS can enter data into the system to develop an inventory of assets, record information to track the scheduling of maintenance tasks, and produce a written asset management plan. EPA officials told us that, with CUPSS, water utility managers can produce a report specifically communicating the condition of the water utility’s assets to elected officials. EPA also provides free training on how to use CUPSS. The availability of CUPSS also allows utilities to avoid some of the costs they would incur if they were to hire a professional engineering firm to do the same work. For example, an official with a small water utility we interviewed said that he did not incur any monetary costs to implement asset management because he used CUPSS to develop his asset management plan and program. EPA, USDA, and state SRF programs provide classroom training on asset management to help provide information to operators and other staff about how to implement asset management. EPA’s Environmental Finance Center (EFC) at the University of North Carolina at Chapel Hill leads the Smart Management for Small Water Systems project that provides 1-day workshops for operators of water utilities on various aspects of managing a water utility. According to an official with the EFC at the University of North Carolina at Chapel Hill, the EFC partners with other EFCs (including the University of New Mexico and Wichita State University) to conduct the workshops, which include a discussion of asset management. 
As stated on the website for the EFC at the University of North Carolina at Chapel Hill, from 2012 to 2014, these EFCs held more than 100 workshops, with 2,000 participants, in all 50 states and four U.S. territories. According to USDA headquarters officials, operators of small water utilities and their elected officials participate in financial and managerial training courses provided by organizations such as the National Rural Water Association, the Rural Community Assistance Partnership, and others. According to these officials, these training sessions can include asset management. An official with the National Rural Water Association told us that the organization’s training sessions generally include a component of asset management. Officials with the Rural Community Assistance Partnership told us that their organization provides workshops specifically on implementing asset management, including workshops for elected officials. EPA and USDA officials told us that the agencies’ key collaborative effort is a workshop on water utility management, with the goal of helping to increase water utilities’ managerial and financial capacity. The workshop is based on a 2013 EPA and USDA document entitled Rural and Small Systems Guidebook to Sustainable Utility Management, which describes 10 steps in effectively managing a water utility. EPA and USDA officials said that asset management is discussed as part of 1 of the 10 steps. EPA and USDA’s guidebook defines the steps, describes challenges water utilities may face related to the steps and the effects of those challenges, and describes the types of actions taken by high-performing water utilities to address those challenges. The workshop materials focus primarily on the logistics of implementing a workshop. EPA and USDA train technical assistance providers to conduct the workshop for water utilities. Both agencies also provide free materials for the workshop on their websites.
A 2015 EPA and USDA report stated that the agencies had trained 1,600 persons in workshops across the United States since 2013. EPA agreed with our estimate that the two agencies, together, train about 250 water utilities per year. USDA officials told us that for fiscal year 2015, the agency provided a grant to a technical assistance provider to conduct two workshops in each of the 50 states. EPA funds this effort through its Small Systems Training and Technical Assistance Grants program, and USDA funds this effort through its Water & Waste Disposal Technical Assistance & Training Grants program. State SRF programs also use some of their federal funds to provide classroom training for small water utilities on a variety of topics related to building small water utilities’ managerial and financial capacity, including asset management. For example, according to Maine Drinking Water SRF officials, Maine’s Drinking Water SRF program provided four $25,000 grants to a local technical assistance provider to train operators of small water utilities and their elected officials on asset management. This classroom training provides water utilities with education and information about asset management and how to implement an asset management program. Both agencies also provide one-on-one technical assistance or coaching on asset management, which helps address the challenges of costs, funding, and providing information. EPA, USDA, and state SRF programs work with many of the same organizations to provide technical assistance services in their states. EPA also reaches small water utilities through its EFCs. The two primary organizations with which EPA and USDA work to reach small water utilities are the National Rural Water Association and the Rural Community Assistance Partnership.
EPA and USDA officials have said that their contracts with these two providers are not exclusively for asset management but that technical assistance providers are trained to help water utilities implement asset management and develop asset management plans, and frequently do so. EPA and USDA officials told us that, in conjunction with their workshop on sustainable utility management, technical assistance providers also conduct follow-up calls to workshop participants and, if necessary, provide one-on-one assistance. The availability of one-on-one technical assistance also allows water utilities to avoid some of the costs they would incur if they were to hire a professional engineering firm to do the same work. For example, an operator with a small water utility in Maine told us that the utility developed its asset management program and plan through CUPSS with the free help of an organization under contract with EPA and USDA to provide technical assistance. This operator said the water utility's asset management program would not have been developed without this technical assistance. EPA and USDA recognize the benefits of asset management to water utilities and their loan programs and the need for water utilities, particularly small water utilities, to increase their use of asset management. Both agencies (EPA since 2003 and USDA since 2011) have identified asset management as a tool that water utilities can use to increase their ability to address infrastructure needs. In their 2011 memorandum of agreement, EPA and USDA agreed to collaborate in promoting ways that small water utilities can better manage their infrastructure needs and highlighted the use of asset management to ensure long-term technical, managerial, and financial capacity. EPA and USDA agreed to coordinate agency activities and financial assistance in areas that would increase the technical, managerial, and financial capacity of small water utilities.
The memorandum also stated that EPA and USDA would encourage communities to implement system-wide planning, including asset management, and that the two agencies would share and distribute resources to water utilities and provide training and information. EPA and USDA officials told us that they want their efforts to result in as many water utilities as possible increasing their managerial and financial capacity, including through the use of asset management. However, even though EPA and USDA promote sustainable water infrastructure and encourage water utilities to better manage their resources to address the long-term challenges posed by deteriorating infrastructure, limited funds, and declining populations, the agencies do not collect, and are not required to collect, information on utilities' use of asset management. Specifically, they do not collect information that tracks the results of their training efforts on utilities' use of asset management practices or compile information on the benefits and costs of implementing asset management. First, EPA and USDA do not collect information that tracks the results of the agencies' training efforts (e.g., whether participating utilities use asset management practices). EPA stated in a 2011 policy document describing its plans to promote sustainable water infrastructure that the agency has an interest in tracking the results of its training. This is consistent with our January 2004 report on selected agencies' experiences and lessons learned in designing training and development programs and our March 2004 guide on assessing strategic training and development efforts for human capital, in which we reported that evaluating training programs is key to ensuring that training is effective in contributing to the accomplishment of agency goals and objectives.
It is also consistent with our September 2005 report on enhancing performance management, which states that such information can be used to make decisions about future strategies, planning and budgeting, priorities, and resource allocation. Both EPA and USDA collect some information from the water utilities that participate in classroom training or receive one-on-one technical assistance; however, the agencies do not collect information that would allow them to better measure the results of their efforts to assist utilities. EPA collects information on the number of utilities that have taken training each year and reports this as part of its major performance goals. For example, as stated above, since 2011, about 250 small utilities per year (of the 68,000 utilities nationwide) have taken EPA and USDA training. In addition, both EPA and USDA collect feedback from water utilities on their experience in training; for example, the agencies collect feedback forms from attendees to determine how to improve the training. However, the information EPA collects does not show whether the water utilities that receive training from the agencies went on to incorporate asset management practices into their work processes or whether these water utilities have improved their managerial or financial capacity. EPA and USDA officials said that information on water utilities' use and incorporation of asset management would help the agencies understand how the training and technical assistance they provide are affecting utilities' use of asset management. EPA officials said that they would like to collect data on water utilities' incorporation of asset management and determine whether these water utilities have improved their managerial or financial capacity, but they do not have the resources to do so.
An EPA official said that when the agency was considering a nationwide study in 2006, the agency wanted to study the incorporation, costs, and benefits of water utilities' use of asset management. However, EPA found that the costs of such a data collection effort would be in the hundreds of thousands of dollars. In particular, according to an EPA official, in addition to the costs of collecting the data from water utilities, the agency would also face costs in submitting the required Information Collection Request proposal to OMB. USDA officials similarly stated that they would be interested in collecting data on how the water utilities that participate in their sustainability workshops incorporated asset management or other management practices into their work processes. However, a USDA official told us that the agency would have to explore whether it could conduct such a study. Leveraging existing data collection methods, such as adding questions to existing information collection requests, could be a cost-effective option for EPA and USDA to obtain information on water utilities' use of asset management. In particular, EPA's drinking water and clean water infrastructure needs assessment surveys provide national data on water utilities' infrastructure repair, rehabilitation, and replacement needs. EPA officials said that EPA works in partnership with states to obtain the data, and it works with state officials to convey the importance of the surveys. This work includes discussing the types of questions in the surveys, including any additional policy areas the questions will cover.
EPA officials told us that the agency can add questions to the needs assessment surveys and has done so in the past on such policy issues as climate change and energy efficiency; the officials said, however, that few states have responded to these questions because participation by the states and water utilities is voluntary, and states are not required to answer all of the questions in the surveys. EPA officials said that the agency included questions about asset management in the clean water needs assessment survey in 2008 and 2012. Specifically, these officials said, the questions asked water utilities to identify the status of their implementation of asset management and related costs. According to these officials, the agency did not receive enough responses to analyze and include the data in the final reports. EPA officials responsible for the clean water needs assessment survey said that they would be open to more discussions with states about asset management, given states' increased awareness of it. An EPA official who works with the drinking water needs assessment survey told us that EPA has not systematically asked questions about asset management in that survey because the agency has determined that its efforts are best focused on asking questions that are required to determine infrastructure needs and that are likely to receive a large response from the states. In addition, this official said that EPA encourages water utilities to implement asset management practices through its guidance to states about the types of information the agency will accept as support for responses to the survey. An EPA official said that the agency has not considered or determined what asset management questions it might ask on the drinking water needs assessment survey.
By continuing to include questions on the clean water needs assessment survey and considering questions about water utilities' use of asset management to include in the drinking water needs assessment survey, EPA may have better assurance that it is collecting information in a cost-effective way to assess the effectiveness of its asset management training efforts with USDA. The EPA official representing the drinking water needs assessment survey said that there would be value for the agency in asking water utilities about asset management. Second, EPA and USDA do not compile, and are not required to compile, information on the benefits and costs of implementing asset management that could encourage small utilities to use it. EPA and USDA officials stated that increasing water and wastewater utilities' use of asset management increases the utilities' managerial and financial capacity and, for this reason, EPA and USDA (through technical assistance providers) share anecdotal data to encourage water utilities to adopt asset management. In particular, these officials said that they promote managerial and financial capacity-building training, which includes asset management training, by presenting to water utility officials at conferences, providing one-on-one technical assistance outreach, and engaging in conversations with state SRF and USDA officials. EPA also provides some information about the potential benefits of asset management in documents available on its website. For example, EPA's 2008 best practices guide provides a list of the benefits of using asset management. EPA and USDA's training materials for their workshops based on the Rural and Small Systems Guidebook to Sustainable Utility Management include examples of management challenges and best practices, many of which are asset management practices, to address the challenges.
The training materials do not, however, include cases showing communities' use of asset management and the resulting benefits and costs of implementing these best practices. According to an EPA official, most of the information on benefits (including cost savings) and costs comes from specific anecdotes and materials that technical assistance providers have developed and conveyed through their individual training and interactions with water utilities' staff or governing bodies. Some of EPA's technical assistance providers, such as the EFCs, have used information on the benefits of asset management to encourage water utility board members and city councils to adopt asset management. However, the agencies have not compiled information about the benefits and costs of asset management into a single document that is more broadly available to water utilities. EPA and USDA officials told us that they had not considered compiling information about the benefits of asset management into one source, and they are not required to do so. However, providing information on benefits and costs to those who have not attended the agencies' trainings could help encourage them to adopt asset management practices. We noted in a September 2002 report that agencies use information dissemination as one of several tools to achieve goals for programs in which agencies do not act directly but instead inform and persuade others to act to achieve a desired outcome. Additionally, in a September 2005 report, we stated that agencies can use fact-based understanding of how their activities contribute to the mission and broader results to evaluate their efforts and to identify and increase the use of program approaches that are working well. In the same report, we stated that agencies can adopt a number of practices to enhance the usefulness of information.
One of these best practices for improving the use of information is to ensure that it is, among other things, relevant, accessible, and easy to use. For example, in 2006, EPA and the Federal Highway Administration collaboratively conducted a case study review of communities' experiences with, and the benefits of, implementing asset management across multiple infrastructure sectors such as water, wastewater, and transportation. According to the study, the purpose of the review was to provide, in one resource, relevant examples of how communities were responding to their infrastructure needs by using asset management practices. An EPA official told us that, in 2006, the agency considered compiling information on cost savings through a potential nationwide study of water utilities' use of asset management but did not pursue the study because, among other things, the agency did not have the resources to pay for it at that time. This official told us that EPA has instead engaged in activities that cost less than a nationwide study, such as developing case studies or other small-scale information collection efforts. A USDA official said that the agency would be open to exploring ways to collaborate on a study of the benefits and costs of asset management for small water utilities. As shown by the EPA and Federal Highway Administration study, compiling and making broadly available information about the benefits and costs of asset management provided useful examples for entities considering asset management. Consistent with best practices for using performance information, compiling the information that EPA and USDA technical assistance providers share with water and wastewater utilities to document the benefits and costs of asset management could provide a resource for a broader audience of small water utilities that are considering using this approach.
Both EPA and USDA officials said that they had developed several materials for small water utilities through their coordinated efforts and that a compilation of existing cases and examples of communities' use of asset management and its benefits and costs, including cost savings, could be useful in persuading some water utilities to use asset management. EPA and USDA have provided millions of dollars of federal funding to help small water utilities increase their technical, managerial, and financial capacity to better meet the challenge of repairing and replacing the nation's aging water infrastructure and to provide safe and clean water to communities. Both agencies have identified asset management as a tool that water utilities can use to increase their ability to address current and future infrastructure needs. EPA and USDA have played a significant role in encouraging and helping water utilities to implement asset management through funding conditions, training, and other resources. With 68,000 water utilities across the country, it is important to know which are using asset management and which are not. However, EPA and USDA do not collect information on water utilities' use of asset management, particularly from utilities that have taken part in agency training sessions on asset management. Existing data collection efforts, such as EPA's needs assessment surveys, may be a cost-effective means of doing so. By continuing to include questions in the clean water needs assessment survey and considering questions about water utilities' use of asset management to include in the drinking water needs assessment survey, EPA may have better assurance that it is collecting information in a cost-effective way to assess the results of its asset management training efforts with USDA. In addition, persuading elected officials and communities of the need for infrastructure investment is important, as is the need to use asset management to make investment decisions.
EPA and USDA share the benefits and costs of asset management in various documents provided on their websites and through technical assistance providers. However, the agencies have not compiled the information on the benefits and costs of asset management, particularly the cost savings, into one source. Consistent with performance management best practices, a documented compilation of the benefits and costs of asset management, including cost savings, that is widely available to water utilities may help EPA and USDA encourage a broader audience of small water utilities to consider adopting asset management. As EPA and USDA continue to consider ways to track and promote water utilities' implementation of asset management, we recommend the following: First, that the Administrator of EPA direct the Office of Groundwater and Drinking Water and Office of Wastewater Management to continue to include questions on water utilities' use of asset management in the clean water needs assessment and consider including questions about water utilities' use of asset management in future drinking water infrastructure needs assessment surveys. Second, that the Administrator of EPA and the Secretary of USDA, through the Rural Development Agency, consider compiling into one document the existing cases and examples of the benefits and costs of asset management and widely share this information with water utilities. We provided the Administrator of EPA and the Secretary of USDA with a draft of this report for review and comment. In written comments (reproduced in app. IV), EPA generally agreed with our findings and recommendations. In addition, in an e-mail received from the GAO and OIG Liaison Officer within USDA's Rural Development Agency, USDA agreed with our report.
In response to our recommendation that EPA continue to include questions on water utilities' use of asset management in the clean water needs assessment and consider including such questions in future drinking water infrastructure needs assessment surveys, EPA included "a significant caveat" to its agreement. Specifically, EPA's comments stated that the agency generally agrees with the recommendation, with the significant caveat that the method for continuing to assess the effectiveness of the agency's asset management training and technical assistance must be both effective and efficient. It further stated that although the agency has included asset management questions as part of the needs assessment surveys in the past, this mechanism has led to limited information regarding the level of implementation of asset management at utilities. Further, the comments stated that the needs assessment survey may not be the most efficient and effective way to collect these data since the survey's primary focus and design is to assess and quantify the nation's infrastructure need and not the adequate implementation of asset management. EPA stated that it would be willing to explore other means of obtaining data that would provide an indication of how utilities are benefitting from the agency's asset management training and technical assistance. We continue to consider the needs assessment survey to be a cost-effective and efficient method for collecting data from water utilities. If EPA explores other approaches and finds that the information can be systematically collected from the nation's water utilities for comparison over time, we agree other approaches could be appropriate.
In response to our recommendation that EPA and USDA consider compiling, and widely share, existing cases and examples of the benefits and costs of asset management, EPA noted that it agrees that it is important to educate utilities on the benefits of asset management in protecting the nation's infrastructure investment and described steps it has taken to do so. EPA stated that, as funding and resources allow, the agency would most likely consider developing a case study compilation document focused on local decision makers, who are key to ensuring that asset management is a priority and is implemented appropriately. We agree that focusing on local decision makers is important and believe that a document compiling case studies could be useful and should be made available to water utilities as well as local decision makers. USDA did not comment specifically on this recommendation and stated in its e-mail message that the agency will continue to emphasize asset management through the technical assistance providers funded through the agency's Technical Assistance and Grant program and the joint EPA/USDA Workshop-in-a-Box initiative. We are sending copies of this report to the appropriate congressional committees; the Administrator of EPA; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Federal law does not require water utilities to implement asset management.
However, in 2014, federal law began requiring all recipients of Clean Water State Revolving Fund (SRF) loans for the repair, replacement, or expansion of a water utility to develop fiscal sustainability plans. According to the law, fiscal sustainability plans should include (1) an inventory of critical assets; (2) an evaluation of the condition and performance of such assets or groups of assets; and (3) a plan for maintaining, repairing, and, as necessary, replacing the utility, and a plan for funding such activities. According to Environmental Protection Agency (EPA) officials, some of the activities required as part of the fiscal sustainability plan are asset management practices. Officials in one state told us their state requires water utilities to develop asset management plans as a condition of SRF loans for water infrastructure, and officials in another state told us their state requires asset management plans for SRF loan forgiveness. Other states may provide incentives during the application process; specifically, states may award additional points, known as "priority points," in the application scoring process. Table 1 provides information about the requirements and incentives reported by officials in the states we reviewed. This report examines selected water utilities' use of asset management. Our objectives were to examine (1) what is known about the use of asset management among the nation's water utilities, particularly small water utilities, including benefits and challenges, if any, for water utilities implementing asset management and (2) steps, if any, that EPA and the U.S. Department of Agriculture (USDA) are taking to help small water utilities implement asset management. To examine what is known about water utilities' use of asset management, including benefits and challenges, if any, we used EPA's framework for asset management.
The framework, from EPA's 2008 Asset Management: A Best Practices Guide, describes five components and the practices that make up asset management. The components and practices described in this document formed the basis of our interview questions. We also used other EPA documents describing the agency's asset management framework: the 2003 Asset Management: A Handbook for Small Water Systems: One of the Simple Tools for Effective Performance (STEP) Guide Series and the 2008 Building an Asset Management Team. This criterion remains relevant today because it is a federally developed asset management framework for water utilities. To determine whether there were existing sources of information on the extent to which utilities use asset management, we interviewed EPA and USDA Rural Development staff about data each agency collects on water utilities' use of asset management, including any data on asset management collected by EPA in its national needs assessment surveys. We also interviewed representatives of national water and wastewater associations to identify potential data sources. Through this process, we identified one national study, a market research study conducted by McGraw-Hill Construction, a company that provides analytics, news, and intelligence for the North American construction industry, and CH2M, a company that, among other things, provides consulting services related to asset management. The report described the results of a survey of 451 persons representing water utilities in the United States and Canada and the extent to which they had adopted 14 asset management practices. Thirty percent of the 451 persons in the survey sample represented utilities providing only drinking water services, and 70 percent represented utilities providing drinking water and wastewater services.
The 14 practices included such actions as (1) the use of a computerized maintenance management system, (2) the use of an asset register to facilitate analysis and planning, (3) the development of customer service and asset service-level performance measures, and (4) consideration of risks and consequences of alternative investment/ budget decisions, but the report did not describe how the 14 practices were selected. The report also included information from confidential interviews with water utilities on their insights in implementing asset management. We reviewed the authors’ description of the study’s methodology and determined that the data were sufficient for the purpose of describing qualitative information to corroborate information we obtained from our interviews about large utilities’ use of asset management because the study authors included asset management practices that were similar to those identified by EPA. To understand the extent to which small utilities are using asset management, we conducted semistructured interviews with officials in a nonprobability sample of 10 states: Arizona, Arkansas, Delaware, Idaho, Iowa, Minnesota, Mississippi, New York, Vermont, and Wyoming. To select these 10 states, we identified the state in each of EPA’s 10 regions with the highest percentage of small water utility needs, using EPA’s most recent needs assessment data from the 2011 Drinking Water Infrastructure Needs Survey and Assessment and the 2008 Clean Watersheds Needs Survey. EPA’s 2011 Drinking Water Infrastructure Needs Survey and Assessment calculated the need among small water utilities serving fewer than 10,000 people for 35 of the 50 states. For the remaining 15 states, EPA provided data on small water utilities serving fewer than 3,300 people. We calculated the percentages of small water utilities’ share of statewide need for these 15 states using this information. 
The 2008 Clean Watersheds Needs Survey calculated the need for small water utilities serving fewer than 10,000 people for 47 states, the District of Columbia, and U.S. territories. It did not report data for 3 states: Alaska, North Dakota, and Rhode Island. As a result, we did not include these states in the data we used for our selection. In our sample of 10 states, we used a standard set of questions for conducting interviews, by telephone, with state drinking water and clean water State Revolving Fund (SRF) program officials and USDA state office staff. Our standard set of questions consisted of closed- and open-ended questions to ensure we consistently captured officials' responses. During these interviews, we asked officials to estimate the use of asset management practices by water utilities in their state, the benefits for utilities and lenders of using asset management, the challenges small utilities experience in implementing asset management, funding and technical assistance for asset management available to water utilities in their state, and the asset management practices for which small utilities are most in need of technical assistance. In addition, we asked USDA officials about the loan conditions they consider to be asset management practices. We also specifically asked SRF officials about requirements for asset management practices or plans as a condition of SRF loans. In these interviews with officials representing state SRF programs, USDA state offices, and small water utilities, we did not receive answers to every closed-ended question we asked; we note in the report the number of answers provided for each question. Because our sample of states was a nonprobability sample, responses from the officials we interviewed cannot be generalized to other states and their water utilities, but they illustrate some of the uses of asset management practices among small water utilities in states with the greatest infrastructure needs.
In addition to interviewing state SRF and USDA officials in the 10 selected states, we interviewed, by telephone, officials in a nongeneralizable, random sample of small drinking water and wastewater utilities serving populations of 10,000 or fewer. To select these small water utilities, we used two EPA databases of water utilities. To identify drinking water utilities, we used EPA's Safe Drinking Water Information System, a database of information about drinking water utilities and their regulatory violations. To identify wastewater utilities, we used a publicly available database of the water utilities included in the 2008 Clean Watersheds Needs Survey. We assessed the data reliability of both databases by, among other things, reviewing published documents and data regarding EPA's quality assurance and quality control procedures for the needs survey assessment tools, contacting EPA officials to ensure we used the correct search fields and parameters, and reviewing past GAO reports and other documentation on the reliability of the data. Through these steps, we determined that the data were sufficiently reliable for our purposes of sampling drinking water and wastewater utilities in each of our 10 states. We selected a sample of 40 utilities (two drinking water and two wastewater utilities in each of the 10 selected states) and conducted interviews with at least one drinking water and one wastewater utility in 9 of the 10 states, for a total of 25 water utility interviews. (All of the water utilities we contacted in Mississippi declined to participate or did not respond to our interview requests.) To ensure our sample represented a range of small water utilities, we selected one drinking water utility serving a population of more than 500 but no more than 10,000 and one drinking water utility serving a population of 500 or fewer people in each state.
Similarly, we selected one wastewater utility serving a population between 1,000 and 10,000 and one serving a population of less than 1,000 in each of the 10 selected states. In addition to the 40 utilities, we also generated a back-up list of randomly selected water utilities from which to choose if the utilities in the original sample declined to participate in our review or did not respond to our requests for an interview after three or more attempts. In total, we contacted 68 water utilities in all 10 of the states and conducted interviews with officials representing 25 water utilities in 9 states. Of these 25, 12 were drinking water utilities, and 13 were wastewater utilities. Table 2 provides a summary of the population served and ownership for the water utilities we interviewed in the 10 states. Our interviews with small water utilities consisted of a standard set of closed- and open-ended questions. Officials participating in these interviews were, for example, water utility operators or superintendents, maintenance staff, public works directors, elected city officials, water utility board members, and engineers. We asked about officials' familiarity with asset management as defined by EPA; the extent to which they were implementing asset management practices and, if so, the costs and cost savings they had identified; whether water utility staff or governing officials had received technical assistance on asset management; and contextual background on the communities they served. Because our sample of water utilities in the 10 selected states was a nongeneralizable sample, we do not use the data collected from these states to generalize about the use of asset management in other states and by other water utilities. To analyze the open-ended questions in our interviews, we conducted several content analyses.
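The stratified utility selection described above (one randomly drawn utility per size stratum and service type in each state, with back-ups for nonrespondents) can be illustrated with a short sketch. This is not GAO's actual selection procedure; the record structure and field names are our own assumptions for illustration:

```python
import random

def select_sample(utilities, states, seed=0):
    """Illustrative sketch of the stratified random draw. `utilities` is a
    list of dicts with hypothetical 'state', 'type', and 'population' keys.
    For each state, draw one utility from each of the four population
    strata described in the report, keeping the next candidates as
    back-ups in case the primary selection declines to participate."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = [
        ("drinking", lambda p: p <= 500),
        ("drinking", lambda p: 500 < p <= 10_000),
        ("wastewater", lambda p: p < 1_000),
        ("wastewater", lambda p: 1_000 <= p <= 10_000),
    ]
    sample, backups = [], []
    for state in states:
        for utype, in_stratum in strata:
            pool = [u for u in utilities
                    if u["state"] == state and u["type"] == utype
                    and in_stratum(u["population"])]
            rng.shuffle(pool)
            if pool:
                sample.append(pool[0])     # primary selection
                backups.extend(pool[1:3])  # back-ups for nonresponse
    return sample, backups
```

With 10 states and four strata, the primary sample is at most 40 utilities, matching the sample size described above.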
Specifically, we conducted a content analysis to categorize the benefits (to water utilities and federal agencies) of asset management and the challenges small utilities face in implementing asset management practices. To identify categories in which to classify the open-ended responses, we examined the responses and used content analysis software to count the words officials used most frequently and to identify broad groupings of concepts. We classified the responses to the open-ended questions on benefits to water utilities into the following categories: (1) planning, (2) financial, (3) awareness (of the system and assets), (4) management, (5) technical, (6) other benefits, and (7) unaware of benefits. We classified the responses to the open-ended questions on benefits to federal agencies into the following categories: (1) system and (2) lender. We classified the responses to the open-ended questions on challenges into the following categories: (1) financial, (2) human resources, (3) support, (4) education, and (5) other challenges. Where appropriate, we also identified subcategories to classify responses. To conduct the content analysis of responses, two analysts independently assigned officials’ responses to one or more categories and compared their analyses. All initial disagreements regarding the categorizations of officials’ responses were discussed and reconciled. The analysts then tallied the number of responses in each category. We tabulated the counts of responses to the closed-ended questions. To characterize the officials’ views reported throughout this report, we defined modifiers quantifying how many states’ officials expressed a given view. For example, “most” represents instances in which at least one state official in more than five states provided a response. 
The modifiers are as follows: For state SRF and USDA state offices in the 10 selected states, “most” represents state SRF and USDA officials in more than 5 states, “half” represents state SRF and USDA officials in 5 states, and “some” represents state SRF and USDA officials in fewer than 5 states. We also conducted two in-person visits, to Maine and New Mexico. We selected Maine and New Mexico based on recommendations from EPA and USDA officials in the agencies’ headquarters, national water and wastewater associations, and technical assistance providers. In addition, we selected New Mexico because of the state’s requirement that water utilities have an asset management plan as a condition of Clean Water SRF infrastructure funding. The officials who recommended Maine generally told us they did so because the state has long encouraged utilities to adopt asset management. During our visits to Maine and New Mexico, we interviewed a total of 12 small and large, public and private water utilities. We selected these water utilities based on recommendations from state SRF officials and technical assistance providers in each state and, in the case of New Mexico, EPA regional office staff. Table 3 provides descriptive information about the water utilities we interviewed in Maine and New Mexico. To examine the steps, if any, that EPA and USDA have taken to help small water utilities implement asset management, we reviewed EPA and USDA guidance, reports, training materials, and software tools available on asset management. One key document we used was the 2011 Memorandum of Agreement on Promoting Sustainable Rural Water and Wastewater Systems, which describes EPA’s and USDA’s joint efforts to promote the technical, managerial, and financial capacity of small utilities and includes an emphasis on promoting asset management. 
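The reporting modifiers defined at the start of this section (“most,” “half,” and “some”) reduce to a simple threshold rule over the 10-state sample. The following is a minimal sketch in Python; the function name and the example tallies are illustrative, not part of GAO’s actual analysis tooling:

```python
def modifier(state_count, total_states=10):
    """Map the number of states whose officials gave a particular
    response to the reporting modifier used in this report."""
    half = total_states // 2          # 5 states for the 10-state sample
    if state_count > half:
        return "most"
    if state_count == half:
        return "half"
    return "some"

# Illustrative tallies by response category (not actual survey data).
tallies = {"planning": 7, "financial": 5, "awareness": 3}
characterized = {category: modifier(count) for category, count in tallies.items()}
```

Under this rule, the illustrative tallies above would be characterized as “most,” “half,” and “some,” respectively.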
We then interviewed EPA and USDA officials to understand the actions they have taken, the funds they have spent, and the efforts they have made to coordinate on asset management activities. We also interviewed technical assistance providers funded by EPA and USDA to conduct trainings and one-on-one technical assistance on asset management. These technical assistance providers included the EPA-funded Environmental Finance Centers at the University of New Mexico, Wichita State University, Cleveland State University, and University of North Carolina at Chapel Hill and the National Rural Water Association and Rural Community Assistance Partnership’s national offices and local affiliates in Maine and New Mexico. We compared the information we collected about the steps EPA and USDA have taken to key practices related to federal agencies’ training efforts and collection and dissemination of information that we identified in previous reports. We conducted this performance audit from January 2015 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Environmental Protection Agency (EPA) and the U.S. Department of Agriculture (USDA) have funded asset management activities through their existing programs and, more recently, used some of this funding for collaborative efforts. Specifically, EPA officials told us that they first began funding asset management activities through the Drinking Water State Revolving Fund (SRF) program when the 1996 amendments to the Safe Drinking Water Act authorized states to use a certain percentage of their grants for such programs. 
According to USDA officials, USDA has contracted with national organizations that incorporated asset management training as part of their work in assisting small water utilities in managing and operating their facilities. With the 2011 issuance of a memorandum of agreement on sustainable infrastructure, EPA and USDA agreed to collaborate on training and to coordinate agency activities and financial assistance in areas that would increase technical, managerial, and financial capacity, including through the use of asset management. As a result, EPA and USDA spending on asset management activities falls under various larger programmatic areas. EPA’s spending falls into the following three categories: Small Systems Training and Technical Assistance Grants. EPA funds asset management activities through this grant program to improve technical, managerial, and financial capacity of small drinking water and wastewater utilities. In fiscal year 2014, EPA provided an estimated $13.1 million in grants to support training and technical assistance for small utilities. Of the estimated $13.1 million, about $3 million was used to provide training and technical assistance to water utilities to improve financial and managerial capacity, which included asset management. In December 2015, legislation was enacted to reauthorize the small systems training and technical assistance grants program for $15 million per year for fiscal years 2015 through 2020. Drinking Water SRF. EPA also funds asset management training and technical assistance through the Drinking Water SRF. Subject to certain limitations, states may reserve a portion of these grants to fund various activities including training and technical assistance. 
States can spend 2 percent of their SRF grants to provide small water utilities with technical assistance; up to 4 percent for state program administration and technical assistance to water utilities of any size; up to 10 percent for the development of technical, managerial, and financial capacity, operator certification programs, and other activities; and up to another 15 percent for a variety of activities that can also include programs to develop technical, managerial, and financial capacity. Examples of assistance that states can provide with these funds include such activities as written guidance, one-on-one coaching, and online and classroom training that can include asset management. Environmental Finance Center Grant Program. EPA also funds asset management activities through its Environmental Finance Center Grant Program. EPA officials said that in fiscal year 2014 they provided a total of $1 million in grants to selected public and private universities, colleges, and nonprofit organizations to provide technical assistance to communities on a range of EPA priorities, including improving financial capacity. This assistance included one-on-one technical assistance, workshops and other classroom trainings, and written guidance. EPA’s Environmental Finance Center at the University of New Mexico has been working on asset management since 2003 and has provided training and technical assistance on the use of asset management since 2006. USDA funds asset management activities through two programs: Water & Waste Disposal Technical Assistance & Training Grants. This program provides grants to various nonprofit organizations for technical assistance on managerial topics, assistance with preparing loan applications, and helping water utilities to find solutions to problems in operating their facilities. USDA provides the grant funds for 1 year. In fiscal year 2014, USDA provided an estimated $19 million to nonprofit organizations. 
Nonprofit organizations can apply to provide services to one state or multiple states. USDA gives priority to certain types of applicants, including those that serve communities with populations of fewer than 5,500 or fewer than 2,500, and those that will primarily provide “hands-on” technical assistance and training to water utility managers and operators experiencing problems with operations and maintenance or management. Circuit Rider Program—Technical Assistance for Rural Water Systems. Under this program, the National Rural Water Association—a training and technical assistance organization serving small communities—is contracted to provide staff in each of the 50 states who deliver technical assistance on day-to-day operational, managerial, and financial issues. Specifically, according to the information the National Rural Water Association publishes on its website, staff known as “circuit riders” work on site with water utility personnel to troubleshoot problems, evaluate alternative technological solutions, recommend operational improvements, assist with leak detection, respond to natural disasters and other emergencies, provide hands-on training, participate in board and council meetings, and conduct user rate analyses. In fiscal year 2014, USDA provided about $15 million for the Circuit Rider Program. In addition to the individual named above, Susan Iott, Assistant Director; Mark Braza; Antoinette Capaccio; Bruce Crise; Tahra Nichols; and Alison O’Neill made key contributions to this report. In addition, Jon Melhus and Kiki Theodoropoulos made important contributions to this report. Water Infrastructure: Approaches and Issues for Financing Drinking Water and Wastewater Infrastructure. GAO-13-451T. Washington, D.C.: March 13, 2013. Water Infrastructure: Comprehensive Asset Management Has Potential to Help Utilities Better Identify Needs and Plan Future Investments. GAO-04-461. Washington, D.C.: March 19, 2004. 
Water Infrastructure: Information on Financing, Capital Planning, and Privatization. GAO-02-764. Washington, D.C.: August 16, 2002.
Recent catastrophic breaks in water mains and sewer discharges during storms are indicators of the nation's old and deteriorating water and wastewater infrastructure. EPA estimates that small water utilities—those serving fewer than 10,000 people—may need about $143 billion for drinking water and wastewater infrastructure repairs and replacement over 20 years. EPA and USDA provide the three largest sources of federal funding for water infrastructure. In a March 2004 report, GAO found that water utilities may benefit from implementing asset management—a tool used across a variety of sectors to manage physical assets, such as roads and buildings. GAO was asked to review water utilities' use of asset management. This report examines (1) what is known about the use of asset management among the nation's water utilities—particularly small water utilities—including benefits and challenges and (2) steps EPA and USDA are taking to help small water utilities implement asset management. GAO selected a nongeneralizable sample of 25 water utilities in 10 states, selected based on the largest infrastructure needs, and interviewed EPA, USDA, state, and water utility officials. The small water utilities GAO reviewed in 10 selected states are implementing some asset management practices, although state officials said that large water utilities are more likely to implement asset management than small utilities. The asset management practices these small utilities used include identifying key assets, such as pipelines, treatment plants, and other facilities, and assessing their life-cycle costs. For example, officials from 23 of the 25 small water utilities GAO reviewed said they had maps that identify the location of at least some of their assets. However, officials from only 9 of the 25 small water utilities said they knew the cost of rehabilitation versus replacement for all of their assets. Officials from the Environmental Protection Agency (EPA), U.S. 
Department of Agriculture (USDA), and the 10 selected states identified benefits and challenges for small water utilities using asset management. The benefits that EPA, USDA, and state officials identified include cost savings and more efficient long-term planning. The key challenges these officials identified include the availability of funding to cover start-up and maintenance costs, the availability of human resources, information on how to implement asset management practices, and political support from elected officials to begin an asset management program or increase user rates. EPA and USDA are taking steps to help water utilities implement asset management by providing funding, free or low-cost tools such as software, one-on-one technical assistance, and classroom training for small water utilities that plan to implement asset management practices. EPA and USDA collect feedback from training participants, but do not collect information that will help track the results of the agencies' training efforts (e.g., whether utilities participating in such training implemented asset management practices). GAO identified in a March 2004 guide that evaluating training programs is key to ensuring training is effective in contributing to the accomplishment of agency goals and objectives. EPA officials told GAO that they had considered collecting nationwide data on water utilities' use of asset management but did not have the resources to pursue it. Leveraging existing data collection methods may be a cost-effective way for the agencies to collect this information. EPA conducts periodic needs assessment surveys of water utilities and has included questions about asset management use in the wastewater survey, but not in the drinking water survey. EPA officials said they did not receive enough responses to questions in the wastewater survey, and they have not considered including them in the drinking water survey. 
By continuing to include questions about wastewater utilities' use of asset management in its surveys, and by considering similar questions for drinking water utilities, EPA could have better assurance that it has information on the effectiveness of its training efforts with USDA. In addition, EPA and USDA officials told GAO that the agencies share anecdotal data on the benefits of asset management through technical assistance, but had not considered compiling such information into one document to encourage water utilities to adopt asset management. EPA and USDA are not required to compile such information, but doing so could provide information on benefits, including cost savings, and costs to water utilities that have not received training and could help encourage them to adopt asset management practices. GAO recommends that EPA consider collecting information about utilities' use of asset management through its needs assessment surveys, and that EPA and USDA compile the benefits of asset management into one document. EPA and USDA generally agreed with GAO's findings and recommendations.
SBIRS High is designed to contribute to four defense mission areas: missile warning, missile defense, technical intelligence, and battle-space characterization. (See app. II for a description of the program’s contribution to each.) SBIRS High is intended to replace the DSP satellite constellation, which has provided early missile warning information for more than 30 years, and to provide better and more timely data to the Unified Combatant Commanders, U.S. deployed forces, U.S. military strategists, and U.S. allies. As currently planned, SBIRS High will comprise four satellites in geosynchronous earth orbit (GEO), two infrared sensors that are to be placed on separate host satellites in highly elliptical orbit (HEO)—known as “HEO sensors”—and a ground segment for mission processing and control. These elements are illustrated in figure 1. The Air Force plans to acquire a fifth GEO satellite to serve as a spare that would be launched when needed. SBIRS High is intended to provide taskable sensors with improved sensitivity and revisit rate, allowing them to see dimmer objects and provide more accurate estimates of missile launch and impact points than the sensors in the existing satellite constellation. SBIRS High sensors are also expected to view particular areas of interest and to revisit multiple areas of interest as directed by ground controllers. In addition to covering the shortwave infrared spectrum like their predecessor, SBIRS High sensors are also expected to cover midwave infrared bands and see-to-the-ground bands, allowing them to perform a broad set of missions. SBIRS High is being developed in two increments. Increment 1, which achieved initial operational capability in December 2001, consolidated DSP and Attack and Launch Early Reporting to Theater ground stations into a single mission control station, which is currently operating using DSP data. 
Through spiral development, Increment 2 (now in the system design and development phase) will develop the HEO sensors and first two GEO satellites and will upgrade Increment 1 hardware and software to operate and process data from the HEO and GEO elements. The remaining three GEO satellites are to be procured at some future date. Since the SBIRS program’s inception in 1996, it has been burdened by immature technologies, unclear requirements, unstable funding, underestimated software complexity, and other problems that have resulted in mounting cost overruns and delays. In addition, the program has been restructured several times. Most notably, in 1998, the SBIRS High Program Office had to restructure the program around an Air Force directive to delay the GEO satellite launches by 2 years in order to fund other DOD priorities. This contributed to program instability since the contractor had to stop and restart activities and devise interim solutions that would not otherwise have been required. In early 2001, there were growing cost and schedule variances and a related decrease in contractor management reserve funding. Primary drivers of these problems were technical issues with the HEO sensors and associated test failures. In November 2001, the Assistant Secretary of the Air Force (Acquisition) and the Executive Vice President of Lockheed Martin Space Systems Company formed the IRT—composed of various specialists in acquisition, operations, engineering, and business management from industry and the federal government—to conduct a comprehensive, independent review of the SBIRS High program. In February 2002, the IRT issued a candid and critical report identifying three primary causes that led to the significant cost growth: The program was too immature to enter the system design and development phase. 
Program activation was based on faulty and overly optimistic assumptions about software reuse and productivity levels, the benefits of commercial practices, management stability, and the level of understanding of requirements. The complexity of developing engineering solutions to meet system requirements was not well understood by program and contracting officials. The systems integration effort was significantly underestimated in terms of complexity and the associated impacts. In addition, the requirements refinement process was ad hoc, creating uncertainty about the status of program priorities and affecting cost and schedule. There was also a breakdown in execution and management: overly optimistic assumptions and unclear requirements eventually overwhelmed government and contractor management. The 2-year delay of the GEO satellite launches, which occurred in 1998, contributed to management instability and was a factor in the Program Office and the contractor having to spend 25 of the first 60 months of the contract on replanning activities. The IRT also made a number of recommendations to address these problems. These included establishing accurate baselines for cost, schedule, and technology; revising the contract fee structure; and redefining Program Office and contractor management roles and responsibilities. A preliminary effort to capture a realistic estimate of total program costs, conducted in the fall of 2001, suggested potential cost growth in excess of $2 billion, or a 70-percent program acquisition unit cost increase. A major defense acquisition program that incurs unit cost growth of at least 25 percent against the acquisition program baseline triggers a statutory requirement that the Secretary of Defense certify to the Congress that four criteria have been met in order to continue the program—a process known as Nunn-McCurdy. See table 1 for a list of the criteria and the information DOD used to support certification for the SBIRS High program. 
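The Nunn-McCurdy trigger described above is an arithmetic test on program acquisition unit cost growth against the acquisition program baseline. The following sketch illustrates that test; the dollar figures are hypothetical, chosen only to reproduce roughly the 70-percent growth cited, not the program’s actual baseline values:

```python
def unit_cost_growth_pct(baseline_cost, current_cost, quantity):
    """Percentage growth in program acquisition unit cost
    (total acquisition cost divided by quantity) against the baseline."""
    baseline_unit = baseline_cost / quantity
    current_unit = current_cost / quantity
    return (current_unit - baseline_unit) / baseline_unit * 100

def breaches_nunn_mccurdy(growth_pct, threshold_pct=25.0):
    """Unit cost growth of at least 25 percent triggers the
    statutory certification requirement described in the text."""
    return growth_pct >= threshold_pct

# Hypothetical figures: a ~$2.9 billion baseline growing by ~$2 billion
# yields roughly the 70-percent unit cost increase cited in the text.
growth = unit_cost_growth_pct(baseline_cost=2.9e9, current_cost=4.9e9, quantity=5)
```

Note that the quantity cancels in the growth calculation; it appears only because the statutory test is framed in terms of unit cost rather than total cost.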
Based on the information submitted to the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD (AT&L)), the SBIRS High program was officially certified on May 2, 2002, with the contingencies that the Air Force fully fund the program to the cost estimate developed by the Office of the Secretary of Defense (OSD) and reestablish a baseline consistent with OSD’s schedule for the GEO satellites. USD (AT&L) also directed that a revised acquisition strategy and program baseline be approved by the end of August 2002. These revisions and the new contract with Lockheed Martin Space Systems Company represent the most recent program restructuring. (App. III provides a chronology of key events in the development of SBIRS High.) In August 2002, the SBIRS High program was restructured to address a number of the problems that led to the Nunn-McCurdy breach. In implementing changes, the Air Force relied heavily on the findings and recommendations of the IRT. The restructuring increased program oversight and provided additional resources as well as incentives intended to improve contractor performance. As part of the program’s recertification after the Nunn-McCurdy breach, USD (AT&L) directed the Air Force to reestablish a baseline for the program’s cost and schedule estimates. The value of the restructured development contract increased by $2 billion to $4.4 billion. The first GEO satellite (GEO 1) launch was replanned from September 2004 to October 2006 and the GEO 2 launch from September 2005 to October 2007. The procurement start of GEO satellites 3 through 5 was replanned from fiscal year 2004 to fiscal year 2006. The SBIRS High budget for fiscal years 2006 and 2007 has identified funding for GEO satellites 3 through 5 totaling $1.3 billion—these satellites are not yet on contract. 
In addition to increased funding, the restructuring added 656 staff to the program—including increased staff for software development—bringing the total number of personnel to 2,305 by June 2003. Under the restructuring, DOD’s contract with Lockheed Martin was modified from a cost-plus-award-fee structure to a cost-plus-award-and-incentive-fee structure. The objective of this change was to encourage timely delivery of accepted capabilities by tying the contractor’s full potential profit or fee to that delivery. At the time of the restructuring, the Air Force believed the modified contract established an executable schedule, a realistic set of requirements, and adequate funding, and addressed the underlying factors that led to the Nunn-McCurdy breach. The restructured contract was planned around 10 “effectivities”—milestones at which an incremental system capability is delivered by the developer and accepted by the operator—as shown in table 2. Delivery of these effectivities is tied to the contractor’s award and incentive fees. Lockheed Martin met the first effectivity and was awarded 100 percent of its fee (about $1.4 million). The restructured contract also prescribed tighter management controls, improved reporting of contractor information, and added formal review processes. For example, the modified contract removed Total System Performance Responsibility (TSPR) from the contractor, transferring more oversight back to the government because, according to the IRT, this concept was not properly understood or implemented within the SBIRS High program. This was evidenced by the numerous instances where the contractor was asked by program participants to accomplish work under TSPR guidelines without going through the appropriate management processes. 
In addition, since requirements were not prioritized or well-defined below the Operational Requirements Document (ORD) level, the contractor’s refinement of requirements was ad hoc, creating uncertainty about the status of program priorities and impacting cost and schedule. The restructuring also modified the program’s use of DOD’s Earned Value Management System (EVMS). Specifically, Lockheed Martin and its subcontractors standardized EVMS procedures in an effort to provide more accurate and up-to-date reporting on the status of the program. In addition, an EVMS oversight team was established to focus on process improvements, and Lockheed Martin and its subcontractors developed a surveillance plan to review the EVMS data. The contractor is now monitoring EVMS data more closely through monthly meetings and reviews of specific cost accounts. Changes to the reporting of EVMS data also help identify risks more effectively. The contractor and SBIRS High Program Office have also increased oversight and established a more formal risk management process as part of the restructuring. For example, the prime contractor placed three vice presidents in charge of the program as program director, deputy for ground segment development, and deputy for systems integration. In addition, the Air Force established a program management board consisting of high-level Air Force officials to prevent uncontrolled changes in the SBIRS High program. Risks are now monitored and reported during weekly risk management meetings. On a monthly basis, these risks are also discussed with government and contractor senior management. Finally, program officials reported that Lockheed Martin has employed a more structured software development process that focuses on building the software in increments, thereby helping to spread out risks. A vice president is now overseeing the ground segment development, including software development. 
Further, Lockheed Martin has reorganized the ground software development group under its Management and Data Systems, which is known for its software expertise. This component of Lockheed Martin achieved a Capability Maturity Model Integration (CMMI) level 5—the highest rating—for its software management and procedures. The ground software group does not have a formal CMMI rating—Lockheed Martin Management and Data Systems was brought in to help improve this group’s processes. While the new oversight processes under the restructured program should help managers identify and address problems as they arise, the restructuring does not fully account for earlier program decisions made without sufficient systems engineering and design knowledge. As a result, the program continues to experience problems and risks related to changing requirements, design instability, and software development concerns. In particular, design problems have delayed the delivery of the first HEO sensor (HEO 1). Because development of the GEO satellites and possible additional HEO sensors are tied to the completion of HEO 1, the schedules for the subsequent components could slip, continuing to put the program at significant risk of cost and schedule overruns. As we reported in June 2003, the majority of DOD satellite programs that GAO has reviewed over the past 2 decades, including SBIRS, have cost more than expected and have taken longer to develop and launch than planned because performance requirements were not adequately defined at the beginning of the program or were changed significantly once the program had already begun. The numerous changes to the SBIRS High requirements contributed to the cost and schedule overruns early in the program. Although a more defined requirements management process is now in place, changes to both the operational requirements and the contract are being proposed that could impact the program’s cost and schedule. 
Before the restructuring, a total of 94 requirements changes were made to the SBIRS High program—16 of which were added after the critical design review in August 2001. The effect that these changes may continue to have on the program was not addressed in the August 2002 restructuring efforts. Since restructuring, an Air Force program management board—which was established to oversee requirements changes and help ensure appropriate use of funds—has approved 34 actions that will require contract modifications. If funded, these changes, identified as “urgent and compelling,” would total $203.8 million and come from the Program Manager’s discretionary funds (also known as management reserve) or be paid by the user who needs the new capability. The majority of these dollars would be used to cover the following four changes: earlier implementation of HEO mission processing in the mission control station, at an estimated cost of $15 million; full implementation of the mission management component of HEO for the technical intelligence community, at an estimated cost of $33 million; implementation and fielding of an operational mission control station backup to meet Increment 1 ITW/AA requirements in fiscal year 2006, at an estimated cost of $97 million; and the Army’s implementation of a capability for DSP M3Ps to receive and process HEO tracking data, at an estimated cost of $27 million. In addition to these pending changes, the Air Force is considering acquiring a third and possibly a fourth HEO sensor and accelerating the procurement schedule for GEO satellites 3 through 5. If procured together, the estimated cost (including integration and testing) is $283 million for the third HEO sensor and $238 million for the fourth HEO sensor. The funding for these sensors has yet to be determined. The potential acceleration of the acquisition of GEO satellites 3 through 5 is similarly placing added pressures on the program. 
Plans to accelerate the acquisition of these GEO satellites are in response to a recent concern by the Senate Armed Services Committee that an Air Force decision to delay the acquisition of satellites 3 through 5 would create a 3-year gap between the launch of the second and third satellites. As a result, the committee directed the Air Force to develop a plan to reduce the production gap in the SBIRS High program from 2 years to 1. The committee also directed the Air Force to assess the program’s technical, schedule, and cost risks associated with a 2-year delay, compare the operational risk of a 1-year delay with a 2-year delay, and describe steps to mitigate the impact of a 1-year production gap. In April 2002, a group composed of DOD subject matter experts reviewed the SBIRS High requirements and concluded that four operational requirements would not be fully met by the current design under certain scenarios. While these requirements are only 4 of 140, they are important to the system’s overall missile defense and warning capability: threat typing—the ability to identify a certain type of missile launched under certain scenarios; impact point prediction—the ability to predict where a particularly stressing theater-class missile will hit the earth; theater state vector velocity—the ability to track the path of a particularly stressing theater-class missile; and strategic raid count—the ability to count and discriminate the number of true incoming missiles for a certain scenario. Program officials said that these four requirements were poorly written, defined, or described in the ORD and that efforts are underway to rewrite, seek waivers for, or clarify them and negotiate deviations with users. Achieving a stable design before entering product demonstration is critical to maintaining cost and schedule goals. 
However, at the SBIRS High critical design review—1 year before the restructuring—only 50 percent of design drawings were complete, compared to 90 percent as recommended by best practices. In addition, the IRT report found that the program did not invest enough time and resources in basic systems engineering analysis. Despite these problems, the program passed the critical design review. As a result, persistent problems with and changes to the design—especially of HEO 1—continue to impact the program’s cost and schedule. The HEO 1 sensor is the first major deliverable for Increment 2 and the only near-term deliverable to measure the program’s progress. As a part of the restructuring, the delivery of this sensor to the host satellite was delayed from its original date in February 2002 to February 2003. At that time, program officials were confident of meeting the new delivery date. However, significant deficiencies were revealed during systems tests in November 2002, making it apparent that the February 2003 date would not be met, and delivery was postponed another 2 months. At this writing, the first HEO sensor has yet to be delivered. In May 2003, the Program Director reported that the delays were due to a series of design deficiencies. For example, the design to control the sensor’s electromagnetic interference (EMI) was inadequate. Specifically, Lockheed Martin identified 148 offending EMI frequencies that exceeded the tolerances established by the host satellite. These excessive frequencies could interfere with the operations of the host satellite and jeopardize its mission. Thirty-nine design modifications to the HEO sensor were made, which eliminated 80 percent of these noise conditions. However, the final EMI test, completed in early July 2003, identified seven remaining EMI frequencies that were not within tolerance—two of which appear to be attributable to the HEO sensor. 
Since the problems could not be resolved and no impact on performance was expected, the Program Director requested waivers for the offending frequencies to allow the sensor to be integrated onto the host satellite. According to a program official, the waivers have been approved and the first HEO sensor is now expected to be delivered on December 6, 2003, provided no additional testing is needed. The Program Director reported that the HEO 1 design problems were attributable to weaknesses in earlier program management processes. Under these processes, the program tried to achieve efficiencies by cutting back on detailed design analyses and component testing. The exact costs associated with these weaknesses are unclear. Our independent estimate—using data from the contractor’s June 2003 cost performance report—indicates that the development of HEO 1 will overrun the contract amount at completion by about $25 million to $54 million, and that additional costs associated with HEO 2 rework would be between $20 million and $80 million. The Program Office is currently assessing estimates of total cost impact. Since the critical design review in August 2001, the Air Force also determined that two late design changes to the GEO satellites were necessary to improve the program’s chances of success. In January 2003, the Air Force directed the contractor to replace the 80 ampere-hour battery with a 100 ampere-hour battery to improve the satellites’ operational reliability. Program officials estimate that the new battery will cost about $15 million, but the June 2003 cost performance report shows that the contractor is having difficulty assessing and establishing specifications for the battery, which has resulted in schedule delays and could result in even greater costs. The second design change to the GEO satellites is to resolve a power deficiency by modifying the solar cell panel. The expected cost of this change has not yet been determined. 
In April 2002, 4 months before the restructuring, a report prepared by subject matter experts determined that while there were no significant technical barriers to eventually meeting the key requirements for SBIRS High, technology integration was a high risk owing to insufficient time. In restructuring the program, the Air Force implemented earlier integration and testing activities to mitigate this risk. However, we found that these mitigation measures may not be sufficient to avoid delays. For example, as of June 2003, the contractor has completed about 58 percent of the GEO sensor integration, assembly, test and checkout work, but it is still behind schedule with about $2 million of the planned work not yet accomplished. The development of software for the HEO sensors and GEO satellites (known as “flight” software) and the ground facility was a major factor that led to the Nunn-McCurdy breach. Despite the restructuring, the contractor and Program Office continue to report that software development underlies most of the top 10 program risks. Flight and ground software have already experienced difficulties, delaying delivery and putting program accomplishments at further risk. Most of the software for SBIRS High is for the ground stations to operate and command the satellites, process and display missile warning data, and perform mission management functions. Additional flight software is being developed for the HEO sensors and GEO satellites to control the infrared sensors and optical telescope and to process infrared data onboard the satellite. Another set of software elements will be used to test and simulate the performance of the SBIRS High system before it is put into operation. According to Lockheed Martin officials, the risks associated with the development of these software elements would be minimal because the majority of the software would be reused and modified. 
However, the risk associated with software development and reuse in Increment 1 was underestimated, which led to significant delays and cost overruns. This problem was not fully addressed by the restructuring and the time needed to develop the software continues to be underestimated. For example, in the current phase (Increment 2), delivery of the HEO flight software has been delayed because software item qualification testing—which was completed in May 2003 after a 3-month delay—revealed three deficiencies. One deficiency involved the HEO sensor’s ability to maintain earth coverage and track missiles while orbiting the earth. Delivery of the HEO ground software has also been delayed, and according to a program official, did not meet a revised delivery date of August 2003 because several ground software issues must still be resolved. While the problems encountered with the development of the flight and ground software have only resulted in delays of a few months, the delays signal weaknesses that could put the program at further risk of cost and schedule overruns. The remaining computer memory margin on the onboard satellites is also a concern. The SBIRS High program requirements mandate that the memory margin be at least 50 percent. This is to ensure there is sufficient remaining memory to accommodate future software code growth. However, inefficient coding for onboard satellite operations has resulted in an estimated current memory margin of 35 percent. Since rewriting the code would be too costly to the program, Lockheed Martin is requesting a waiver from this requirement to allow the 35-percent margin. According to DCMA officials, the HEO software delays are the result of an overly aggressive software development schedule and a lack of management’s understanding of the complexity of the software task. 
Program officials stated that the contractor’s software productivity and efficiency metrics have recently begun to reflect a negative trend in the program due to the delays in software development and increases in software defects. These officials stated that the program suffered from a lack of skilled computer personnel with infrared space systems knowledge. After the August 2002 restructuring, DCMA officials stated that Lockheed Martin committed more personnel and approved overtime when necessary to achieve schedules and has been cooperative in making changes recommended by DCMA and the SBIRS High Program Office. Although these actions should improve the schedule status, they will have a negative cost impact because of the additional resources that will need to be committed to recover and meet the program’s future schedule. Delays in the development and delivery of the HEO 1 sensor will likely have long-term consequences for the remainder of the program. According to DOD officials, until tasks leading to HEO message certification are complete, the program will not have “turned the corner” to achieving its objectives. However, some schedule milestones for these tasks have begun to slip due to problems in developing the HEO 1 sensor. As a result, the HEO message certification milestone, scheduled for November 2004, will slip 5 months or more. Program officials stated that they are coordinating the delivery of HEO 1 and the host satellite to mitigate any schedule impacts, but they agreed that these delays put the remaining SBIRS High schedule at risk. For example, the continuing HEO 1 sensor and software work is now competing for staff and other resources dedicated to HEO 2 and GEO tasks. As a result, the HEO 2 sensor and the first GEO satellite are unlikely to maintain their current development and launch schedules already revised under the restructuring. 
Program officials now estimate the HEO 2 sensor delivery will be delayed from February 2004 to June 2004—or as much as a year later—to implement more in-depth modifications to correct EMI problems, as recommended by a technical review team. According to program officials, the development schedule for the first GEO satellite has sufficient margin—approximately 300 days—to avoid delays in the first GEO launch. However, delivery and integration of the GEO flight software—a high-risk effort—did not begin in August 2003 as scheduled. While DCMA officials report that they are monitoring Lockheed Martin’s progress to maintain the software development schedule, any delays will affect the entire GEO schedule and could jeopardize the delivery and launch of the first GEO satellite. In an attempt to avoid delays, the program has compressed schedules and implemented work-around plans. However, in compressing original schedules, the program creates other risks because the time allotted to test and analyze the software and to train personnel to operate the SBIRS High ground processing system has been significantly reduced. In addition, work-around plans to overcome delays, even if feasible, would be difficult and costly to accomplish. At the same time, valuable on-orbit information of the HEO sensor’s performance may not be available in a timely manner for the GEO development efforts. Since HEO and GEO have common components, including the infrared sensor subsystem, HEO on-orbit data would improve the knowledge base for GEO development. Increased cost is also a risk. Although the contractor forecasts that the contract will be within cost at completion, significant cost overruns are likely. 
In analyzing data from the contractor’s cost performance reports from February 2003 through June 2003, we found that the cumulative cost overrun increased by more than 800 percent, from approximately $3 million to approximately $31.7 million, due to the significant overtime worked over a number of months. Moreover, as the program works to accomplish the almost $40 million worth of planned work that is behind schedule, the negative cumulative cost variance of approximately $31.7 million will continue to grow. Specifically, we predict that at contract completion, the program will have a cost overrun ranging from roughly $80 million to $432 million. DCMA similarly predicts significant cost overruns—officials reported an estimated overrun ranging from $34 million to $210 million at completion and gave an overall assessment of “red” for the SBIRS High earned value management status. Finally, as the program works to remedy problems—particularly those associated with the HEO sensors—management reserves are diminishing. For fiscal year 2003, reserves have been depleted, and Air Force and program officials are concerned that fiscal year 2004 reserves are insufficient to address contingencies. As a result, some planned development tasks may be delayed to fiscal year 2005. The Program Director stated that the program is applying lessons learned from HEO 1 to the HEO 2 sensor, the first GEO satellite, and other parts of the program. The knowledge gained from correcting problems on HEO 1 will be necessary if the Air Force decides to procure additional HEO sensors and accelerate procurement of the third, fourth, and fifth GEO satellites. The Program Office is also assessing the overall program impacts from the HEO 1 delay but has yet to complete the analysis. DOD has invested billions of dollars in an effort to develop a system that will provide greater long-range detection capabilities than DSP, its current missile tracking system. 
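The overrun projections above follow standard earned value management (EVM) arithmetic: a negative cumulative cost variance signals an overrun, and the cumulative cost performance index (CPI) scales the budget into an estimate at completion. A minimal sketch of those formulas follows; the function names and dollar figures are illustrative only, not the program's actual cost performance report data.

```python
# Hedged sketch of standard earned value management (EVM) formulas.
# All figures used below are illustrative, not actual SBIRS High data.

def cost_variance(bcwp, acwp):
    """Budgeted cost of work performed minus actual cost of work performed.
    A negative result indicates a cost overrun."""
    return bcwp - acwp

def cost_performance_index(bcwp, acwp):
    """CPI below 1.0 means the program earns less work per dollar spent."""
    return bcwp / acwp

def estimate_at_completion(bac, bcwp, acwp):
    """Projects total cost at completion by scaling the budget at
    completion (BAC) by the cumulative CPI."""
    return bac / cost_performance_index(bcwp, acwp)

# Example: $500M of work earned at an actual cost of $531.7M,
# against a hypothetical $4,000M budget at completion.
cv = cost_variance(500.0, 531.7)                      # about -31.7 (overrun)
eac = estimate_at_completion(4000.0, 500.0, 531.7)    # about 4,253.6
```

In this illustration, a CPI of about 0.94 inflates the hypothetical $4 billion budget to an estimate at completion of roughly $4.25 billion, which is how a seemingly small negative cost variance compounds into a large projected overrun.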
Yet more than a year after the most recent restructuring, the SBIRS High program continues to experience problems that have existed since its inception: cost overruns, schedule delays, and performance limitations. While the Air Force has taken a number of actions as recommended by the IRT to improve program oversight, it has become increasingly evident that the underlying factors that led to the Nunn-McCurdy breach—particularly the lack of critical knowledge—continue to cause problems, and additional cost and schedule slips beyond the revised acquisition program baseline appear inevitable. Without sufficient knowledge to ensure that the product design is stable and meets performance requirements and that adequate resources are available, there is no assurance that technical problems—such as those experienced with the HEO 1 sensor—will not surface on other major program components once they go through systems integration and testing. Moreover, the inability of the Air Force and its contractor to deliver HEO 1 as scheduled has put into question whether the restructuring has provided the right mechanisms to achieve program objectives. If the Air Force continues to add new requirements and program content while prolonging efforts to resolve requirements that cannot be met, the program will remain at risk of not achieving within schedule its intended purpose—to provide an early warning and tracking system superior to that of DSP. Given the considerable investment yet to come, the Congress and the Secretary of Defense would benefit from an assessment of whether the Program Office and contractor are doing everything necessary and feasible to achieve program objectives, minimize future cost and schedule growth, and address the underlying factors that are causing these problems. 
Therefore, we recommend that the Secretary of Defense reconvene the IRT or similar independent task force with substantial program knowledge to provide an assessment of the restructured program and concrete guidance for addressing the program’s underlying problems. Such a review should include determining whether the SBIRS High development schedule is executable within current cost and schedule estimates in light of the recent HEO 1 delays and other risks (such as software development); whether the program design is stable and sufficient to meet performance requirements; whether the contractor’s software development procedures and practices have reached at least CMMI level 3 in relation to the Software Engineering Institute’s standards; whether appropriate management mechanisms are in place to achieve intended outcomes; and whether pending requirements changes should be funded. We further recommend that the Secretary of Defense put in place a mechanism for ensuring that the knowledge gained from the assessment is used to determine whether further programmatic changes are needed to strengthen oversight, adjust current cost and schedule estimates, modify contract mechanisms, and address requirements changes. In commenting on a draft of this report, DOD agreed that another thorough review of the SBIRS High program is warranted, and that the results of this review should be used to bring about needed program changes. However, DOD only partially agreed with our recommendations because it would like the option to consider other approaches for assigning responsibility for conducting a review. Given the complexity of this program, we agree that the Secretary of Defense should have this flexibility. We have modified our recommendations accordingly. DOD also provided technical comments, which we have incorporated as appropriate. 
DOD’s written comments—provided by the Deputy Under Secretary of Defense for Policy, Requirements, and Resources within the Office of the Under Secretary of Defense for Intelligence—are reprinted in appendix I. To identify the key elements of the restructured SBIRS High program, we reviewed the program’s operational requirements document, acquisition program baseline, single acquisition management plan, cost analysis requirements description, technical reports, and status documents; the restructured contract with Lockheed Martin Space Systems Company; and Nunn-McCurdy certification documents. We discussed the restructured program with representatives from the SBIRS High Program Office, Space and Missile Systems Center, Los Angeles Air Force Base, El Segundo, California; Secretary of the Air Force, Space Force Enhancement, Washington, D.C.; Office of the Assistant Secretary of Defense, Networks and Information Integration, Washington, D.C.; Office of the Secretary of Defense, Director of Program Analysis and Evaluation, Washington, D.C.; Lockheed Martin Space Systems Company, Missile and Space Operations, Sunnyvale, California; and Lockheed Martin Management and Data Systems, Boulder, Colorado. We also discussed requirements and mission needs with officials from Air Force Space Command and U.S. Strategic Command (West), Peterson Air Force Base, Colorado Springs, Colorado and Air Force Headquarters, Directorate of Operational Capability Requirements, Space Capability, Arlington, Virginia. To determine the problems and potential risks relating to cost, schedule, and performance that are still facing the SBIRS High program, we reviewed technical reports and program briefings and held discussions with program and contractor officials regarding ongoing challenges. 
To gain an understanding of these challenges, we reviewed monthly acquisition reports, Air Force Space Command’s urgent and compelling needs lists, the contractor’s top program risks lists, and recent congressional language concerning delivery schedules. To determine the program’s ability to meet cost and schedule projections, we examined schedule and funding information for developing hardware and software. We compared information from the SBIRS High Program Office to other independent reports, including those from the IRT, a commissioned technology review, and DCMA. We also reviewed the report from the Baseline Update-1, a formal program review, and other program assessment reports. In addition, we performed our own analysis of cost and schedule projections using Lockheed Martin’s 2003 cost performance report data. We discussed all of these issues with representatives from the SBIRS High Program Office; Lockheed Martin Space Systems Company, Missile and Space Operations; Lockheed Martin Management and Data Systems; Office of the Secretary of Defense, Director of Operational Test and Evaluation, Alexandria, Virginia; and the Defense Contract Management Agency, Sunnyvale, California. We performed our work from October 2002 through September 2003 in accordance with generally accepted government auditing standards. We plan to provide copies of this report to the Secretary of Defense, the Secretary of the Air Force, and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or John Oppenheim at (202) 512-3111. Key contributors to this report are listed in appendix IV. 
Missile Warning: SBIRS High is expected to provide reliable, unambiguous, timely, and accurate missile warning information to the President of the United States, the Secretary of Defense, Unified Combatant Commanders, and other users. This mission includes both global and theater requirements to provide strategic and theater ballistic missile warning in support of passive defense and force posturing. Missile Defense: SBIRS High is expected to provide reliable, accurate, and timely information to defensive systems. This mission includes both strategic and theater functional requirements to enable active missile defense and attack operations against hostile forces. Technical Intelligence: SBIRS High is expected to provide reliable, accurate, and timely infrared target signature and threat performance data to warfighters, the intelligence community, weapon system developers, and other users. This data may be used for target classification and identification templates and algorithm development for SBIRS High operational missions. SBIRS High also monitors activities and provides information to policy makers and other users on observed military tactics, new foreign technology development, arms control compliance, and proliferation activities. Battle-space Characterization: SBIRS High provides reliable, accurate, and timely data to enhance situational awareness, non-ballistic missile threat warning, decision support, battle damage assessment and intelligence information (for land, sea, air, and space) for the Unified Combatant Commanders, Joint Task Force Commanders, and other users. Battle-space characterization applies the SBIRS High product to the immediate need of the warfighters. OSD issues the Space-Based Warning Summer Study. SBIRS is named an Air Force lead program for acquisition reform. U.S. Space Command SBIRS Capstone Requirements Document is validated by the Joint Requirements Oversight Council. SBIRS Single Acquisition Management Plan is approved. 
Air Force awards two pre-engineering and manufacturing development contracts to Hughes and Lockheed Martin teams. Changes to the SBIRS Capstone Requirements Document are validated by the Joint Requirements Oversight Council. SBIRS System Threat Assessment Report is validated. SBIRS is authorized to proceed to milestone II. Air Force awards one engineering and manufacturing development contract to Lockheed Martin. Construction begins on the Mission Control Station at Buckley Air Force Base, Colorado. SBIRS High preliminary design review is held. SBIRS System Threat Assessment Report is revalidated. DOD removes $150 million from the SBIRS High program to fund other DOD priorities and directs the delay of the GEO launches by 2 years. Based on the DOD directive, a joint estimate team reviews the program to determine an attainable and affordable program restructure. SBIRS System Threat Assessment Report is revalidated. SBIRS critical design review is held. SBIRS ground Increment 1 is certified. Secretary of the Air Force notifies Congress of the Nunn-McCurdy breach. SBIRS Low is transferred to Missile Defense Agency. SBIRS ORD is revalidated by the Joint Requirements Oversight Council for the Nunn-McCurdy review. IRT report is issued identifying the underlying causes for the cost growth that led to the Nunn- McCurdy breach. SBIRS High Acquisition Decision Memorandum is signed, certifying the program after the Nunn- McCurdy breach. Revised SBIRS High Single Acquisition Management Plan is approved. Construction begins on the Mission Control Station Backup at Schriever Air Force Base, Colorado. Revised SBIRS High contract with Lockheed Martin goes into effect. SBIRS High Acquisition Program Baseline (restructuring) is approved. Interim Mission Control Station Backup in Boulder, Colorado, is certified. Air Force Space Command identifies need for HEO 3 and possibly HEO 4. DCMA reports HEO 1 schedule slip. Air Force provides USD (AT&L) with SBIRS High program assessment. 
Assistant Secretary of Defense for Command, Control, Communications, and Intelligence issues memorandum to Air Force calling for another review in November 2003. In addition to those listed above, Maricela Cherveny, Steve Martinez, Karen A. Richey, Nancy Rothlisberger, Karen M. Sloan, Hai V. Tran, Dale M. Yuge, and Randolph S. Zounes made key contributions to this report.
In 1996, the Department of Defense (DOD) initiated the Space-Based Infrared System (SBIRS) to provide greater long-range ballistic missile detection capabilities than its current system. The initial SBIRS architecture included "High" and "Low" orbiting space-based components and ground processing segments. SBIRS has been technically challenging, and in October 2001, SBIRS Low was transferred from the Air Force to the Missile Defense Agency. The Air Force expected to field SBIRS High by 2004, but numerous problems have led to schedule overruns. In the fall of 2001, DOD identified potential cost growth of $2 billion. To determine the causes of the significant cost growth, DOD convened an Independent Review Team. In August 2002, the Air Force restructured the program to address the findings of the team's assessment. Our report (1) describes the key elements of the restructured program and (2) identifies problems and potential risks still facing the program. In an effort to get the SBIRS High program on track, the most recent program restructuring provided contractor incentives and oversight measures, as recommended by the Independent Review Team. Under the current contract, the prime contractor's award fees are now tied to the incremental delivery of specific system capabilities. DOD also modified the contract to prescribe tighter management controls, improve reporting of contractor information, and add formal review processes by DOD management. This increased oversight is intended, in part, to minimize further changes in requirements and improve management of software development, both of which have been particularly problematic. The restructuring also added funding and other resources to the program and extended the scheduled delivery of certain components. At the time of the restructuring, the Air Force believed the modified contract established an executable schedule, a realistic set of requirements, and adequate funding. 
However, the restructuring did not fully address some long-standing problems identified by the Independent Review Team. As a result, the program continues to be at substantial risk of cost and schedule increases. Key among the problems is the program's history of moving forward without sufficient knowledge to ensure that the product design is stable and meets performance requirements and that adequate resources are available. For example, a year before the restructuring, the program passed its critical design review with only 50 percent of its design drawings completed, compared to 90 percent as recommended by best practices. Consequently, several design modifications were necessary, including 39 to the first of two infrared sensors to reduce excessive noise created by electromagnetic interference--a threat to the host satellite's functionality--delaying delivery of the sensor by 10 months or more. Software development underlies most of the top 10 program risks, according to the contractor and the SBIRS High Program Office. For example, testing of the first infrared sensor revealed several deficiencies in the flight software involving the sensor's ability to maintain earth coverage and track missiles while orbiting the earth. Program officials stated that they are coordinating the delivery of the first sensor with the delivery of the host satellite to mitigate any schedule impacts, but they agreed that these delays put the remaining SBIRS High schedule at risk.
TSA is responsible for securing all modes of transportation while facilitating commerce and the freedom of movement for the traveling public. Passenger prescreening is one program among many that TSA uses to secure the domestic aviation sector. The process of prescreening passengers—that is, determining whether airline passengers might pose a security risk before they reach the passenger-screening checkpoint—is used to focus security efforts on those passengers that represent the greatest potential threat. Currently, U.S. air carriers conduct passenger prescreening by comparing passenger names against government-supplied terrorist watch lists and applying the Computer-Assisted Passenger Prescreening System rules, known as CAPPS rules. Following the events of September 11, and in accordance with the requirement set forth in the Aviation and Transportation Security Act that a computer-assisted passenger prescreening system be used to evaluate all passengers before they board an aircraft, TSA established the Office of National Risk Assessment to develop and maintain a capability to prescreen passengers in an effort to protect U.S. transportation systems and the public against potential terrorists. In March 2003, this office began developing the second-generation computer-assisted passenger prescreening system, known as CAPPS II, to provide improvements over the current prescreening process, and to screen all passengers flying into, out of, and within the United States. Based in part on concerns about privacy and other issues expressed by us and others, the Department of Homeland Security (DHS) canceled the development of CAPPS II in August 2004. Shortly thereafter, it announced that it planned to develop a new passenger prescreening program called Secure Flight. In contrast to CAPPS II, Secure Flight, among other changes, will only prescreen passengers flying domestically within the United States, rather than passengers flying into and out of the United States. 
Also, the CAPPS rules will not be implemented as part of Secure Flight, but rather the rules will continue to be applied by commercial air carriers. As of February 2006, TSA planned to operate Secure Flight on the Transportation Vetting Platform (TVP)—the underlying infrastructure (hardware and software) developed to support the Secure Flight application, including security, communications, and data management. The Secure Flight application was to perform the functions associated with receiving, vetting, and returning requests related to the determination of whether passengers are on government watch lists. This application was also to be configurable—meaning that it could be quickly adjusted to reflect changes to workflow parameters. In May 2006, TSA officials stated that the agency was considering other approaches for integrating the Secure Flight TVP and application functions in a different configuration as part of rebaselining the program. In its rebaselining effort, this and other aspects of Secure Flight are currently being reviewed, and policy decisions regarding the operations of the program have not been finalized. As envisioned under Secure Flight, when a passenger made flight arrangements, the organization accepting the reservation, such as the air carrier’s reservation office or a travel agent, would enter passenger name record (PNR) information obtained from the passenger, which would then be stored in the air carrier’s reservation system. While the government would be asking for only portions of the PNR, the PNR data could include the passenger’s name, phone number, number of bags, seat number, and form of payment, among other information. Approximately 72 hours prior to the flight, portions of the passenger data contained in the PNR would be sent to Secure Flight through a secure network connection provided by DHS’s CBP. 
Reservations or changes to reservations made less than 72 hours prior to flight time would be sent immediately to TSA through CBP. Upon receipt of passenger data, TSA planned to process the data through the Secure Flight application running on the TVP. During this process, Secure Flight would determine whether the passenger data matched the data extracted daily from the Terrorist Screening Center's (TSC) Terrorist Screening Database (TSDB)—the information consolidated by TSC from terrorist watch lists to provide government screeners with a unified set of terrorist-related information. In addition, TSA would screen against its own watch list, composed of individuals who do not have a nexus to terrorism but who may pose a threat to aviation security. In order to match passenger data to information contained in the TSDB, TSC planned to provide TSA with an extract of the TSDB for use in Secure Flight and to provide updates as they occurred. This TSDB subset would include all individuals classified as either selectees (individuals who are selected for additional security measures prior to boarding an aircraft) or no-flys (individuals who would be denied boarding unless they are cleared by law enforcement personnel). To perform the match, Secure Flight was to compare the passenger data, TSDB data, and other watch list data using automated name-matching technologies. When a possible match was generated, TSA and potentially TSC analysts would conduct a manual review, comparing additional law enforcement and other government information with passenger data to determine if the person could be ruled out as a possible match. TSA was to return the matching results to the air carriers through CBP. Figure 1 illustrates how Secure Flight was intended to operate as of February 2006. As shown in figure 1, when the passenger checked in for the flight at the airport, the passenger was to receive a level of screening based on his or her designated category.
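The match-then-review flow described above can be sketched as follows. The watch-list entries, status labels, and matching logic are assumptions for illustration—not TSA's or TSC's actual design—but the control flow mirrors the description: an automated possible match triggers a manual analyst review, and the watch-list status applies only if the analyst cannot rule the match out.

```python
from typing import Callable, Optional

# Illustrative watch-list subset with the two categories named above.
WATCH_LIST = {
    "john doe": "selectee",  # selected for additional security measures
    "jane roe": "no-fly",    # denied boarding unless cleared by law enforcement
}

def automated_match(passenger_name: str) -> Optional[str]:
    """Stand-in for the automated name-matching technologies."""
    return WATCH_LIST.get(passenger_name.strip().lower())

def vet_passenger(passenger_name: str,
                  analyst_rules_out: Callable[[str], bool]) -> str:
    """Return 'cleared', 'selectee', or 'no-fly' for one passenger.

    A possible automated match goes to manual analyst review; the
    watch-list status stands only if the analyst cannot rule it out.
    """
    status = automated_match(passenger_name)
    if status is None or analyst_rules_out(passenger_name):
        return "cleared"
    return status
```

The returned category then drives the screening outcome at check-in, as the testimony goes on to describe.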
A cleared passenger was to be provided a boarding pass and allowed to proceed to the screening checkpoint in the normal manner. A selectee passenger was to receive additional security scrutiny at the screening checkpoint. A no-fly passenger would not be issued a boarding pass. Instead, appropriate law enforcement agencies would be notified. Law enforcement officials would determine whether the individual would be allowed to proceed through the screening checkpoint or whether other actions were warranted, such as additional questioning of the passenger or taking the passenger into custody. Based on its rebaselining effort, TSA may modify this concept of operations for Secure Flight. As we testified in February 2006, TSA had not conducted critical activities in accordance with best practices for large-scale information technology programs. Further, TSA had not followed a disciplined life cycle approach in developing Secure Flight, in which a project progresses through a series of orderly phases accompanied by the development of related documentation. Program officials stated that they had instead used a rapid development method intended to enable them to develop the program more quickly. However, as a result of this approach, the development process had been ad hoc, with project activities conducted out of sequence. For example, program officials declared the design phase complete before requirements for designing Secure Flight had been detailed. Our evaluations of major federal information technology programs, and research by others, have shown that following a disciplined life cycle management process decreases the risks associated with acquiring systems. As part of the life cycle process, TSA must define and document Secure Flight's requirements—including how Secure Flight is to function and perform, the data needed for the system to function, how various systems interconnect, and how system security is achieved.
We found that Secure Flight's requirements documentation contained contradictory and missing information. TSA officials acknowledged that they had not followed a disciplined life cycle approach in developing Secure Flight, but stated that in moving forward, they would follow TSA's standard development process. We also found that while TSA had taken steps to implement an information security management program for protecting Secure Flight information and assets, its efforts were incomplete when measured against federal standards and industry best practices. We reported that without a completed system security program, Secure Flight may not be adequately protected against unauthorized access and use or disruption once the program becomes operational. Further, TSA had proceeded with Secure Flight development without an effective program management plan containing up-to-date program schedules and cost estimates. TSA officials stated that they had not maintained an updated schedule in part because the agency had not promulgated a necessary regulation requiring commercial air carriers to submit certain passenger data needed to operate Secure Flight, and air carrier responses to this regulation would affect when Secure Flight would be operational and at what cost. While we recognized that program unknowns introduce uncertainty into the program-planning process, uncertainty is a practical reality in planning all programs and does not justify forgoing plans, including cost and schedule estimates that reflect both the known and unknown aspects of the program. Prior to TSA's rebaselining effort, several oversight reviews of the program had been conducted that raised questions about program management, including the lack of fully defined requirements.
DHS and TSA had executive and advisory oversight mechanisms in place to oversee Secure Flight, including the DHS Investment Review Board—designed to review certain programs at key phases of development to help ensure they met mission needs at expected levels of costs and risks. However, the DHS Investment Review Board and other oversight groups had identified problems with Secure Flight's development. Specifically, in January 2005, the Investment Review Board withheld approval for the TVP, which supported Secure Flight operations, to proceed from development and testing into production and deployment until a formal acquisition plan, a plan for integrating and coordinating Secure Flight with other DHS people-screening programs, and a revised acquisition program baseline had been completed. In addition, an independent working group within the Aviation Security Advisory Committee, composed of government, privacy, and security experts, reported in September 2005 that TSA had not produced a comprehensive policy document for Secure Flight that could define oversight or governance responsibilities, nor had it provided an accountability structure for the program. TSA has taken actions that recognize the need to instill more rigor and discipline into the development and management of Secure Flight, and has suspended its development efforts while it rebaselines the program. This rebaselining effort includes reassessing program goals and capabilities and developing a new schedule and cost estimates. Although TSA officials stated that they will use a disciplined life cycle approach when moving forward with the rebaselined program, officials have not identified when their rebaselining effort will be completed.
As we testified in February 2006, TSA had taken steps to collaborate with Secure Flight stakeholders—CBP, TSC, and domestic air carriers—whose participation is essential to ensuring that passenger and terrorist watch list data are collected and transmitted for Secure Flight operations, but additional information and testing are needed to enable stakeholders to provide the necessary support for the program. TSA had, for example, drafted policy and technical guidance to help inform air carriers of their Secure Flight responsibilities, and had begun receiving feedback from the air carriers on this information. TSA was also in the early stages of coordinating with CBP and TSC on broader issues of integration and interoperability related to other people-screening programs used by the government to combat terrorism. Prior to its rebaselining effort, TSA had conducted preliminary network connectivity testing between TSA and federal stakeholders to determine, for example, how information would be transmitted from CBP to TSA and back. However, these tests used only dummy data and were conducted in a controlled environment, rather than in a real-world operational environment. According to CBP, without real data, it was not possible to conduct stress testing to determine if the system could handle the volume of data traffic that would be required by Secure Flight. TSA acknowledged it had not determined what the real data volume requirements would be, and could not do so until the regulation for air carriers was issued and their data management role had been finalized. All key program stakeholders we interviewed stated that additional information was needed before they could finalize their plans to support Secure Flight operations. Although CBP, TSC, and air carrier officials we interviewed through January 2006 acknowledged TSA’s outreach efforts, they cited several areas where additional information was needed from TSA before they could fully support Secure Flight. 
Several CBP officials stated, for example, that they could not proceed with establishing connectivity with all air carriers until DHS published the rule—the regulation that would specify what type of information was to be provided for Secure Flight—and the air carriers submitted their plans for providing this information. In addition, a TSC official stated that until TSA provided estimates of the volume of potential name matches that TSC would be required to screen, TSC could not make decisions about required resources. TSA’s ongoing coordination of prescreening and name-matching initiatives with CBP and TSC could impact how Secure Flight is implemented and require stakeholders to alter their plans made to support the program. In January 2006, TSA officials stated that they are coordinating more closely with CBP’s international prescreening initiatives for passengers on flights bound for the United States. The Air Transport Association and the Association of European Airlines—organizations representing air carriers—had requested, among other things, that both domestic and international passenger prescreening function through coordinated information connections and avoid unnecessary duplication of communications, programming, and information requirements. In addition, TSC has an initiative under way to, among other things, better safeguard watch list data. At present, TSC exports watch list data to other federal agencies for use in their screening efforts or processes for examining documents and records related to terrorism. However, TSC is currently developing a new system, Query, whereby watch list data would not be exported, but rather would be maintained by TSC. Query would serve as a common shared service that would allow agencies to directly search the TSDB using TSC’s name-matching technology for their own purposes. 
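The architectural difference between today's export model and the planned Query service can be sketched as follows. The class names and dictionary-based lookups are illustrative assumptions, not TSC's design; the point of the sketch is only the structural contrast: an exported extract is a copy that can go stale between updates, while a shared query service always consults the single authoritative database.

```python
class ExportModel:
    """Each agency screens against its own periodically refreshed extract."""
    def __init__(self, tsdb: dict):
        self.extract = dict(tsdb)  # snapshot copy; stale until next refresh

    def screen(self, name: str) -> bool:
        return name in self.extract

class QueryModel:
    """Agencies search TSC's database directly; no copies are exported."""
    def __init__(self, tsdb: dict):
        self._tsdb = tsdb          # shared reference to the live database

    def screen(self, name: str) -> bool:
        return name in self._tsdb
```

Under the query model, an update TSC makes to the database is visible to every screening agency immediately, which is one way a shared service can better safeguard and synchronize watch list data.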
If TSC chooses to implement Query, TSA may be required to modify the system architecture for Secure Flight in order to accommodate the new system. Due to delays in Secure Flight's development and uncertainty about its future, officials from two air carriers told us after our February 2006 testimony that they were enhancing their respective name-matching systems because they were unsure when and whether TSA would take over the name-matching function through Secure Flight. While these efforts may improve the accuracy of each air carrier's individual name-matching system, the improvements will apply only to their respective systems and could further exacerbate differences that currently exist among the various air carriers' systems. These differences may result in varying levels of effectiveness in the matching of passenger names against terrorist watch lists, which was a primary factor that led to the government's effort to take over the name-matching function through Secure Flight. As of February 2006, several activities were under way, or were about to be decided, that would affect Secure Flight's effectiveness. For example, TSA had tested name-matching technologies to determine what type of passenger data would be needed to match against terrorist watch list data. These tests had been conducted using historical data in a controlled, rather than real-world, environment, and additional testing was needed to learn more about how these technologies would perform in an operational environment. TSA also had not yet conducted stress testing to determine how the system would handle peak data volumes. Further, due to program delays and the program rebaselining, TSA had not conducted comprehensive end-to-end testing to verify that the entire system would function as intended, although it had planned to do so by the middle of 2005.
Prior to its rebaselining effort, we further reported that TSA had not made key policy decisions for determining the passenger information that air carriers would be required to collect, the name-matching technologies that would be used to vet passenger names against terrorist watch list data, and the thresholds that would be set to determine the relative volume of passengers who are to be identified as potential matches against the database. For example, TSA will need to decide which data attributes air carriers will be required to provide in passenger data to be matched against data contained in the TSDB, such as full first, middle, and last name, plus other discrete identifiers, such as date of birth. Using too many data attributes can increase the difficulty of conducting matching, while using too few attributes can create an unnecessarily high number of incorrect matches due to, among other things, the difficulty in differentiating among similar common names without further information. In addition, TSA must determine what type or combination of name-matching technologies to acquire and implement for Secure Flight, as different technologies have different capabilities. For example, earlier TSA PNR testing showed that some name-matching technologies are more capable than others at detecting significant name modifications, allowing two names that contain some variation to be matched. Detecting variation is important because passengers may intentionally make alterations to their names in an attempt to conceal their identities. In addition, unintentional variations can result from different translations of non-native names or from data entry errors. TSA had planned to finalize decisions on these factors as system development progressed. However, until TSA completes its program rebaselining, data requirements for the program will remain unknown.
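To make the threshold trade-off described above concrete, here is a minimal sketch of threshold-based fuzzy name matching using a generic string-similarity measure from the Python standard library. The similarity function, the 0.85 threshold, and the function names are illustrative assumptions—not the specific technologies TSA tested.

```python
import difflib

def name_similarity(a: str, b: str) -> float:
    """Similarity score in [0, 1] between two normalized names."""
    return difflib.SequenceMatcher(None, a.lower().strip(),
                                   b.lower().strip()).ratio()

def possible_matches(passenger_name, watch_list_names, threshold=0.85):
    """Watch-list names whose similarity meets the threshold.

    A lower threshold catches more spelling and translation variants
    but inflates the number of possible matches analysts must review;
    a higher threshold misses intentional or accidental alterations.
    """
    return [w for w in watch_list_names
            if name_similarity(passenger_name, w) >= threshold]
```

Tuning the threshold is exactly the policy decision the testimony describes: it directly sets the relative volume of passengers flagged as potential matches.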
As we reported in February 2006, two additional factors will play an important role in the effectiveness of Secure Flight. These factors include (1) the accuracy and completeness of data contained in TSC's TSDB and in passenger data submitted by air carriers, and (2) the ability of TSA and TSC to identify false positives and resolve possible mistakes during the data-matching process to minimize inconveniencing passengers. Regarding data quality and accuracy, in a review of TSC's role in Secure Flight, the Department of Justice Office of Inspector General found that TSC could not ensure that the information contained in its TSDB was complete or accurate. To address accuracy, TSA and TSC had planned to work together to identify false positives—passengers inappropriately matched against data contained in the terrorist screening database—by using intelligence analysts to monitor the accuracy of data matches. Related to the accuracy of PNR data, we reported that TSA had planned to describe the required data attributes that must be contained in passenger data provided to TSA in a forthcoming rule. However, the accuracy and completeness of the information contained in the passenger data record will still be dependent on the air carriers' reservations systems, the passengers themselves, and the air carriers' modifications of their systems for transmitting the data in the proper format. Prior TSA testing found that many passenger data records submitted by air carriers were inaccurate or incomplete, creating problems during the automated name-matching process. Prior to its rebaselining effort, TSA had also reported that it planned to work with TSC to identify false positives as passenger data are matched against data in the TSDB, and to resolve mistakes to the extent possible before inconveniencing passengers.
The agencies were to use intelligence analysts during the actual matching of passenger data to data contained in the TSDB to increase the accuracy of data matches. When TSA's name-matching technologies indicated a possible match, TSA analysts were to manually review all of the passenger data and other information to determine if the passenger could be ruled out as a match to the TSDB. If a TSA analyst could not rule out a possible match, the record would be forwarded to a TSC analyst to conduct a further review using additional information. Until TSA completes its rebaselining effort, it is uncertain whether this or another process will be used to help mitigate the misidentification of passengers. An additional factor that could impact the effectiveness of Secure Flight in identifying known or suspected terrorists is the system's inability to identify passengers who assume the identity of another individual by committing identity theft, or who use false identifying information. Secure Flight was neither intended nor designed to address these vulnerabilities. TSA is aware of, and plans to address, the potential for Secure Flight to adversely affect travelers' privacy and their rights. However, as we testified in February 2006, TSA, as part of its requirements development process, had not clearly identified the privacy impacts of the envisioned system or the full actions it planned to take to mitigate them. Because Secure Flight's system development documentation did not fully address how passenger privacy protections were to be met, it was not possible to assess potential system impacts on individual privacy protections, as of February 2006. Further, such an assessment will not be possible until TSA determines what passenger data will be required and how privacy protections will be addressed in the rebaselined program.
The Privacy Act and the Fair Information Practices—a set of internationally recognized privacy principles that underlie the Privacy Act—limit the collection, use, and disclosure of personal information by federal agencies. TSA officials have stated that they are committed to meeting the requirements of the Privacy Act and the Fair Information Practices. However, it is not evident how this will be accomplished because TSA has not decided what passenger data elements it plans to collect, how such data will be provided by stakeholders, or how a restructuring that may result from its program rebaselining will affect its requirements for passenger data. Prior to the rebaselining effort, TSA was in the process of developing, but had not issued, the system-of-records notice required by the Privacy Act and the privacy impact assessment required by the E-Government Act that would describe how TSA will protect passenger data once Secure Flight becomes operational. Moreover, privacy requirements had not been incorporated into the Secure Flight system development process to explain how personal information would be collected and maintained in the system in a manner that complies with privacy and security requirements. In our review of Secure Flight's system requirements prior to TSA announcing its rebaselining, we found that privacy concerns were broadly defined in functional requirements documentation, which states that the Privacy Act must be considered in developing the system. However, these broad functional requirements had not been translated into specific system requirements. Until TSA determines the relevancy of these requirements and notices, privacy protections and impacts cannot be assessed. Further, Congress mandated that Secure Flight include a process whereby aviation passengers determined to pose a threat to aviation security may appeal that determination and correct erroneous information contained within the prescreening system.
While TSA has not yet determined how it will meet this congressional mandate, it currently has a process in place that allows passengers who experience delays under the current prescreening conducted by air carriers to submit a passenger identity verification form to TSA and request that the agency place their names on a cleared list. If, upon review, TSA determines that the passenger’s identity is distinct from the person on a watch list, TSA will add the passenger’s name to its cleared list, and will forward the updated list to the air carriers. TSA will also notify the passenger of his or her cleared status and explain that in the future the passenger may still experience delays. Recently, TSA has automated the cleared list process, enabling the agency to further mitigate inconvenience to travelers on the cleared list. GAO has an ongoing review examining TSA’s redress process for assisting passengers misidentified under the screening program. According to TSA officials, no final decisions have been made regarding how TSA will address redress requirements, but information on the process will be contained within the privacy notices released in conjunction with the forthcoming regulation. In May 2006, Secure Flight officials stated that concerns for privacy and redress were being addressed as part of their rebaselining effort. TSA has recognized the challenges it faces in developing Secure Flight and has undertaken efforts to rebaseline the program. We believe this rebaselining effort is a positive step in addressing the issues facing the program. To make and demonstrate progress on any large-scale information technology program, such as Secure Flight, an agency must first adequately define program capabilities that are to be provided, such as requirements related to performance, security, privacy, and data content and accuracy. 
These requirements can then, in turn, be used to produce reliable estimates of what these capabilities will cost, when they will be delivered, and what mission value or benefits will accrue as a result. For Secure Flight, well-defined requirements would provide a guide for developing the system and a baseline against which to test the developed system to ensure that it delivers necessary capabilities, and would help to ensure that key program areas—such as security, system connectivity, and privacy and redress protections—are appropriately managed. When we reported on Secure Flight in March 2005, TSA had committed to take action on our recommendations to manage the risks associated with developing and implementing Secure Flight, including finalizing the concept of operations, system requirements, and test plans; completing formal agreements with CBP and air carriers to obtain passenger data; developing life cycle cost estimates and a comprehensive set of critical performance measures; issuing new privacy notices; and putting a redress process in place. When we testified in February 2006, TSA had made some progress in all of these areas, including conducting further testing of factors that could influence system effectiveness and collaborating with key stakeholders. However, TSA had not completed any of the actions it had scheduled to accomplish. In particular, TSA had not developed complete system requirements or conducted important system testing, made key decisions that would impact system effectiveness, or developed a program management plan and a schedule for accomplishing program goals. In conjunction with its rebaselining effort, TSA has taken actions that recognize the need to instill more rigor and discipline into the development and management of Secure Flight, including hiring a program director to administer Secure Flight and a program manager with information systems program management credentials.
We support these efforts and believe that TSA should not proceed with operational testing and other key program activities until it demonstrates that it has put in place a more disciplined life cycle process as part of its rebaselining effort. Mr. Chairman, this concludes my prepared statement. I will be pleased to respond to any questions that you or other members of the committee have at the appropriate time. For further information about this testimony, please contact Cathleen Berrick, at 202-512-3404 or at berrickc@gao.gov, or Randolph C. Hite at 202-512-6256 or at hiter@gao.gov. Other key contributors to this statement were J. Michael Bollinger, Amy Bernstein, Mona Nichols Blake, Christine Fossett, and Allison G. Sands.

Aviation Security: Significant Management Challenges May Adversely Affect Implementation of the Transportation Security Administration's Secure Flight Program. GAO-06-374T. Washington, D.C.: February 9, 2006.

Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information during Secure Flight Program Testing in Initial Privacy Notices, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. Washington, D.C.: July 22, 2005.

Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed. GAO-05-356. Washington, D.C.: March 28, 2005.

TSA's Modifications to Rules for Prescreening Passengers. GAO-05-445SU. Washington, D.C.: March 28, 2005.

Measures for Testing the Impact of Using Commercial Data for the Secure Flight Program. GAO-05-324. Washington, D.C.: February 23, 2005.

Aviation Security: Improvement Still Needed in Federal Aviation Security Efforts. GAO-04-592T. Washington, D.C.: March 30, 2004.

Aviation Security: Challenges Delay Implementation of Computer-Assisted Passenger Prescreening System. GAO-04-504T. Washington, D.C.: March 17, 2004.
Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 12, 2004.

A system of due process exists whereby aviation passengers determined to pose a threat who are either delayed or prohibited from boarding their scheduled flights by TSA may appeal such decisions and correct erroneous information contained in CAPPS II or Secure Flight or other follow-on/successor programs.

The underlying error rate of the government and private databases that will be used both to establish identity and to assign a risk level to a passenger will not produce a large number of false positives that will result in a significant number of passengers being treated mistakenly or security resources being diverted.

TSA has stress-tested and demonstrated the efficacy and accuracy of all search technologies in CAPPS II or Secure Flight or other follow-on/successor programs and has demonstrated that CAPPS II or Secure Flight or other follow-on/successor programs can make an accurate predictive assessment of those passengers who may constitute a threat to aviation.

The Secretary of Homeland Security has established an internal oversight board to monitor the manner in which CAPPS II or Secure Flight or other follow-on/successor programs are being developed and prepared.

TSA has built in sufficient operational safeguards to reduce the opportunities for abuse.

Substantial security measures are in place to protect CAPPS II or Secure Flight or other follow-on/successor programs from unauthorized access by hackers or other intruders.

TSA has adopted policies establishing effective oversight of the use and operation of the system.

There are no specific privacy concerns with the technological architecture of the system.
TSA has, in accordance with the requirements of section 44903(j)(2)(B) of title 49, United States Code, modified CAPPS II or Secure Flight or other follow-on/successor programs with respect to intrastate transportation to accommodate states with unique air transportation needs and passengers who might otherwise regularly trigger primary selectee status.

Appropriate life cycle cost estimates and expenditure and program plans exist.

The results discussed in this testimony are based on our review of available documentation on Secure Flight's systems development and oversight, policies governing program operations, our past reports on the program, and interviews with Department of Homeland Security officials, TSA program officials and their contractors, and other federal officials who are key stakeholders in the Secure Flight program. Throughout our ongoing reviews of Secure Flight, we have reviewed TSA's System Development Life Cycle Guidance for developing information technology systems and other federal reports describing best practices in developing and acquiring these systems. We also reviewed draft TSA documents containing information on the development and testing of Secure Flight, including concept of operations, requirements, test plans, and test results. In addition, we reviewed reports from the U.S. Department of Justice Office of the Inspector General that examined the Secure Flight program and reports from two oversight groups that provided advisory recommendations for Secure Flight: DHS's Privacy and Data Integrity Advisory Committee and TSA's Aviation Security Advisory Committee Secure Flight Working Group.
We interviewed senior-level TSA officials, including representatives from the Office of Transportation Threat Assessment and Credentialing, which is responsible for Secure Flight, and the Office of Transportation Security Redress, to obtain information on Secure Flight's planning, development, testing, and policy decisions. We also interviewed representatives from U.S. Customs and Border Protection and the Terrorist Screening Center to obtain information about stakeholder coordination. We also interviewed officials from several air carriers and representatives from aviation trade organizations regarding issues related to Secure Flight's development and implementation. In addition, we attended conferences on name-matching technologies sponsored by MITRE (a federally funded research and development center) and the Office of the Director of National Intelligence. This testimony includes work accomplished for our March 2005 report and our February 2006 testimony, and work conducted from February 2006 to June 2006 in accordance with generally accepted government auditing standards. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
After the events of September 11, 2001, the Transportation Security Administration (TSA) assumed the function of passenger prescreening--or the matching of passenger information against terrorist watch lists to identify persons who should undergo additional security scrutiny--for domestic flights, which is currently performed by the air carriers. To do so, TSA has been developing Secure Flight. This testimony covers TSA's progress and challenges in (1) developing, managing, and overseeing Secure Flight; (2) coordinating with key stakeholders critical to program operations; (3) addressing key factors that will impact system effectiveness; and (4) minimizing impacts on passenger privacy and protecting passenger rights. For over 3 years, TSA has faced challenges in developing and implementing the Secure Flight program, and in early 2006, it suspended Secure Flight's development to reassess, or rebaseline, the program. TSA's rebaselining effort is currently under way, and final decisions regarding the future direction of the program have not been made. In our most recent report and testimony, we noted that TSA had made some progress in developing and testing the Secure Flight program, but had not followed a disciplined life cycle approach to manage systems development or fully defined system requirements. We also reported that TSA was proceeding to develop Secure Flight without a program management plan containing program schedule and cost estimates. Oversight reviews of the program had also raised questions about program management. Secure Flight officials stated that as they move forward with the rebaselined program, they will be following a more rigorous and disciplined life cycle process for Secure Flight. We support TSA's rebaselining effort, and believe that the agency should not move forward with the program until it has demonstrated that a disciplined life cycle process is being followed. 
We also reported that TSA had taken steps to collaborate with Secure Flight stakeholders whose participation is essential to ensuring that passenger and terrorist watch list data are collected and transmitted to support Secure Flight. However, key program stakeholders--including U.S. Customs and Border Protection, the Terrorist Screening Center, and air carriers--stated that they needed more definitive information about system requirements from TSA to plan for their support of the program. In addition, we reported that several activities that will affect Secure Flight's effectiveness were under way or had not yet been decided. For example, TSA conducted name-matching tests that compared passenger and terrorist screening database information to determine what type of passenger data would be needed for Secure Flight's purposes. However, TSA had not yet made key policy decisions that could significantly impact program operations, including what passenger data it would require air carriers to provide and the name-matching technologies it would use. Further, Secure Flight's system development documentation did not fully identify how passenger privacy protections were to be met, and TSA had not issued the privacy notices that described how it would protect passenger data once Secure Flight became operational. As a result, it was not possible to assess how TSA was addressing privacy concerns. Secure Flight officials stated that they plan to address privacy issues and finalize redress policies in conjunction with rebaselining the program.
Our objective was to assess IRS’ performance during the 1996 filing season, including some of IRS’ initiatives to modernize its processing activities. To achieve our objective, we interviewed IRS National Office officials and IRS officials in the Atlanta, Cincinnati, and Kansas City service centers who were responsible for the various activities we assessed; interviewed staff from the Department of the Treasury’s Financial Management Service (FMS) about the use of lockboxes to process Form 1040 tax payments; analyzed filing-season-related data from various IRS sources, including its Management Information System for Top Level Executives; visited four walk-in assistance sites (two in Atlanta and one each in Kansas City, MO, and Mission, KS) to interview staff and taxpayers; visited two banks in Atlanta and St. Louis that were being used by IRS as lockboxes to process tax remittances and analyzed cost/benefit data related to IRS’ use of lockboxes; reviewed data on the results of and costs associated with IRS’ decision to allow filers of paper returns to request direct deposits of their refunds; reviewed data on IRS efforts to identify and resolve questionable refund claims; reviewed computer system availability reports and periodically attended weekly operational meetings held by IRS’ Network and Operations Command Center in February, March, and April 1996; analyzed IRS’ toll-free telephone system accessibility data, telephone activity data for forms distribution centers, and accessibility reports for the IRS system (known as TeleFile) that enables some taxpayers to file their returns by telephone; reviewed data compiled by IRS, including the results of a user survey, on the performance of TeleFile; and reviewed relevant IRS internal audit reports. We did our work from January 1996 through September 1996 in accordance with generally accepted government auditing standards.
We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. On November 6, 1996, several IRS officials, including the Assistant Commissioner for Forms and Submission Processing, the National Director for Submission Processing, and the National Director for Customer Service (Planning and Systems), provided us with oral comments. Their comments were reiterated in a November 18, 1996, memorandum from the Acting Chief of Taxpayer Service. IRS’ comments are summarized and evaluated on pages 24 and 25. IRS also provided some factual clarifications that we have incorporated in the report where appropriate. Appendix I has data on 12 indicators that IRS uses to assess its filing season performance. These indicators relate to workload, such as the number of answered telephone calls from taxpayers who are seeking assistance; timeliness, such as the number of days needed to process returns or issue refunds; and quality, such as the accuracy of IRS’ answers to taxpayer questions and the accuracy with which IRS processes individual income tax returns and refunds. As shown in appendix I, IRS met or exceeded 11 of the 12 performance goals for the 1996 filing season and almost met the 12th goal (the number of forms-ordering calls answered). Two specific aspects of IRS’ filing season performance that are of particular interest to taxpayers and that were the source of problems in 1995 are (1) the level of taxpayer service being provided during the filing season, especially the ability of taxpayers to reach IRS by telephone, and (2) the timely issuance of refunds. In 1995, as in the past several years, taxpayers who sought answers to questions about the tax law or their accounts had considerable difficulty reaching IRS by telephone. In 1996, IRS improved its telephone accessibility while, at the same time, it reduced the availability of face-to-face services at its walk-in sites. 
Also, in 1995, millions of persons had their refunds delayed as a result of new IRS procedures for verifying the SSNs of dependents and EIC-qualifying children. The new procedures were designed to better ensure that persons were entitled to the dependents and EICs they were claiming. In 1996, IRS implemented revised case selection criteria that resulted in many fewer refund delays than in 1995. Sufficient information was not available when we completed our audit work to assess the impact of IRS’ revised procedures on the identification and correction of questionable SSNs. IRS officials have reaffirmed that service to taxpayers remains a primary goal. However, IRS took steps in 1996 to change the blend of methods that it uses to deliver that service. IRS placed more emphasis on providing telephonic and computer-oriented service (such as a new World Wide Web site on the Internet) while walk-in, face-to-face assistance was deemphasized. As a result, telephone accessibility improved while many walk-in sites either closed or offered a reduced level of service. An important indicator of filing season performance is how easily taxpayers who have questions are able to contact an IRS assistor on the telephone (i.e., telephone accessibility). In reports on past filing seasons, we discussed the difficulty taxpayers have had in reaching IRS over its toll-free tax assistance telephone line. Accessibility, as we define it, is the total number of calls answered divided by the total number of calls received. The total number of calls received is the sum of the following: (1) calls answered, (2) busy signals, and (3) calls abandoned by the caller before an assistor got on the line. By our definition, accessibility of IRS’ toll-free telephone assistance improved in 1996, although it was still low. From January 1 to April 20, 1996, IRS reported receiving about 114 million call attempts, of which about 23 million were answered—an accessibility rate of 20 percent. 
For the same period in 1995, IRS reported receiving about 236 million call attempts, of which 19.2 million (8 percent) were answered. As the data for 1995 and 1996 indicate, a major reason for the improved accessibility in 1996 was the significant drop in call attempts. IRS attributed that drop to (1) fewer refund delay notices being issued, as discussed in more detail later in this report, and (2) IRS’ efforts to publicize other information sources, such as its World Wide Web site on the Internet. According to IRS: “For the period January 1, 1996, to April 20, 1996, IRS received calls made by 46 million callers. IRS answered 23 million calls, or 50% of the callers. Of the 114 million total call attempts received, 23 million or 20% received an answer. The remaining 91 million attempts, often the result of redials, received a busy signal or were terminated by the callers because they did not want to wait in queue for an assistor. The total number of callers mentioned earlier was determined by discounting for redials. Therefore, the 114 million call attempts equates to 46 million callers. This is an average of 2.5 attempts per caller.” As IRS’ data indicate, the accessibility of IRS’ toll-free telephone assistance during the 1996 filing season, whether measured as a percentage of calls or callers, was still not good. For the 1996 filing season, IRS closed 93 sites that had previously provided walk-in assistance, reduced the operating hours of some of the 442 sites that remained open, and eliminated free electronic filing at many of the sites. According to IRS, the closed sites were selected on the basis of their historical volume of work and their proximity to other walk-in sites.
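The arithmetic behind the two accessibility measures discussed above (answered calls as a share of total attempts versus as a share of distinct callers) is straightforward. The following is an illustrative sketch only, using the reported 1996 figures; the variable names are ours, not IRS’ or GAO’s:

```python
# Illustrative check of the reported 1996 filing-season telephone figures.
# Counts are in millions (January 1 - April 20, 1996, as reported by IRS).
attempts = 114.0   # total call attempts (answered + busy signals + abandoned)
answered = 23.0    # calls answered by an assistor
callers = 46.0     # distinct callers, after discounting redials

# Accessibility as GAO defines it: calls answered / total calls received.
attempt_rate = answered / attempts        # roughly 20 percent

# IRS' alternative caller-based measure.
caller_rate = answered / callers          # roughly 50 percent

# Implied redial behavior.
attempts_per_caller = attempts / callers  # roughly 2.5 attempts per caller

print(f"accessibility by attempts: {attempt_rate:.0%}")
print(f"accessibility by callers:  {caller_rate:.0%}")
print(f"attempts per caller:       {attempts_per_caller:.1f}")
```

As the sketch shows, the two measures differ only in the denominator chosen; by either one, most call attempts in 1996 went unanswered.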
As an indication of the effect of these closures and cutbacks, IRS data showed that (1) walk-in sites served about 2.8 million taxpayers from January 1 to April 20, 1996, which was about 17 percent fewer taxpayers than were served during the same period in 1995, and (2) about 59,000 electronic returns were filed at walk-in sites in 1996, compared with about 104,000 in 1995. Concerned about the reduction in walk-in service, the House and Senate conference agreement on the Treasury, Postal Service, and General Government appropriation for fiscal year 1997 included a provision that requires IRS to maintain the fiscal year 1995 level of service, staffing, and funding for taxpayer services. While noting that this provision does not mean that IRS should be required to rehire staff or reopen offices, the conference report said that “IRS should be very sensitive to the needs of the taxpayers” who use walk-in sites during the filing season. Walk-in sites provide various free services, including copies of more commonly used forms and publications, help in preparing returns, and answers to tax law questions. We visited four walk-in sites and asked taxpayers where they would go if the office were closed. Many taxpayers commented that they would go to another IRS office or a professional tax preparer for assistance, and that they would call the toll-free forms-ordering telephone number for forms or pick them up at a library or post office. As indicated by the persons with whom we spoke, there are other ways taxpayers can obtain the free services offered by walk-in sites, although perhaps not as easily. For example, according to IRS, it generally takes from 7 to 15 workdays to receive materials that are ordered by telephone—longer if the materials are not in stock. Persons with access to a computer can download forms from the Internet or the FedWorld computer bulletin board. Free forms are also available at libraries and post offices and through IRS’ “fax on demand” service.
Taxpayers who need help in preparing their returns and do not want to pay for that help may be able to take advantage of the tax preparation services offered at sites around the country that are part of the Volunteer Income Tax Assistance (VITA) and Tax Counseling for the Elderly (TCE) programs. According to IRS, these programs help older, disabled, low-income, and non-English-speaking individuals prepare their basic returns. IRS data for the 1996 filing season indicate that there was an increased demand for services at the VITA and TCE sites. The data showed that although the number of VITA and TCE sites around the country decreased by 513 compared with the 1995 filing season, about 71,000 additional taxpayers took advantage of the service. Taxpayers who need answers to tax law questions can call IRS’ toll-free tax assistance number or IRS’ TeleTax system, which has prerecorded information on about 150 topics. From January 1 to April 27, 1996, the number of tax law calls to TeleTax increased by about 11 percent over the same period in 1995 (i.e., 6.9 million in 1996 compared with 6.2 million in 1995). Still another option for free assistance is IRS’ World Wide Web site on the Internet. Among other things, IRS’ Web site includes copies of forms, information similar to that on TeleTax, and some interactive scenarios that taxpayers can use to help them answer some commonly asked questions. IRS reported that, as of May 1, 1996, its Web site had been accessed more than 52 million times since January 8, 1996, when it first became available. In 1995, IRS took several steps in an attempt to better ensure that persons were entitled to the dependents and EICs they were claiming. The most visible of those efforts involved the delay of about 7 million refunds to allow IRS time to verify SSNs, with an emphasis on returns claiming the EIC. The delays caused adverse reaction from taxpayers and tax return preparers during the 1995 filing season. 
Although IRS’ efforts in 1995 and the publicity surrounding those efforts appeared to have had a significant deterrent effect (e.g., according to IRS, 1.5 million fewer dependents were claimed in 1995 than were claimed in 1994), the efforts were not without problems. For example, although IRS identified about 3.3 million returns with missing or invalid SSNs and delayed any related refunds, it was able to pursue only about 1 million of those returns. For those cases it was unable to pursue, IRS eventually released any refunds, after holding them for several weeks, without resolving the problems. Also, IRS delayed about 4 million EIC-related refunds for taxpayers whose returns had valid SSNs to check for fraudulent use of the same SSN on more than one return. IRS eventually released almost all of those refunds, after several weeks, without doing the checks. For the 1996 filing season, IRS was more selective in deciding which cases to review and which refunds to delay. IRS tried to limit the number of delayed refunds to the volume of cases it could review and to focus its resources on the most egregious cases. The most significant change for the 1996 filing season was that IRS did not delay EIC refunds on returns with valid SSNs. IRS statistics on the number of refund delay notices sent to taxpayers in 1996, concerning dependent and EIC claims, indicated that IRS delayed far fewer refunds in 1996. As of September 6, 1996, IRS had mailed about 350,000 such notices compared with about 7 million in 1995. Another indicator that fewer refunds were delayed in 1996 is the decrease in the number of “where is my refund” calls to IRS. Taxpayers wanting to know the status of their refunds can call TeleTax and get information through the use of an interactive telephone menu. 
During the 1996 filing season, as of June 8, 1996, IRS reported receiving 48.2 million such calls, which was a decrease of about 15 percent from the 56.6 million it reported receiving for the same period in 1995. In contrast to the negative reaction from taxpayers and practitioners during the 1995 filing season, an executive of the largest tax preparation firm told us that IRS generally did a better job in 1996. The executive said that the firm’s clients received refunds quicker and received fewer notices about problems, such as SSN mismatches. Likewise, in March 28, 1996, testimony before the Oversight Subcommittee, a representative of the National Association of Enrolled Agents said the following: “Our members report they have encountered far fewer problems this year compared to last year in the area of refund processing . . . .” As part of IRS’ increased emphasis on verifying SSNs in 1995, the Examination function followed up on about 1 million returns that IRS’ computer, using certain criteria, had identified as having questionable SSNs. As of June 30, 1996, about 986,000 of those cases had been closed—about 500,000 (51 percent) with no change in tax liability and about 486,000 (49 percent) with changes totaling about $808 million. In 1996, IRS (1) revised the criteria used to select cases in an attempt to better focus its efforts and (2) identified about 700,000 returns for follow-up, which is about 300,000 fewer than in 1995. Because it takes time for IRS to complete its reviews, information on results was not available at the time we completed our audit work. Thus, we do not know the impact of IRS’ reduced level of effort in 1996. However, a decrease in the number of cases reviewed does not necessarily mean that IRS identified less noncompliance in 1996 than in 1995 because only about one-half of the cases reviewed in 1995 were productive. 
It is possible that IRS’ revised criteria, despite generating fewer cases, might have identified more productive cases in 1996. The SSN verification/refund delay efforts previously discussed were generally directed at identifying and correcting erroneous refunds caused by honest mistakes or negligence. Since the 1970s, IRS has had a Questionable Refund Program (QRP) directed at identifying fraudulent refund schemes. QRP results for January 1996 through September 1996 showed that IRS had identified 20,521 fraudulent returns (involving claimed refunds of about $55.4 million) during those 9 months. These results are a significant decline from the 59,241 returns and about $124.8 million in refunds reported for the first 9 months of 1995. QRP officials attributed the decline to three things. First, and most significant in their opinion, was a staffing reduction that was part of IRS’ cost-cutting efforts in anticipation of reduced funding levels. According to the officials, the 10 IRS service centers were allocated a total of about 379 full-time equivalent staff for the QRP in fiscal year 1996 compared with 553 full-time equivalent staff in 1995, which was a decrease of 31 percent. The other two reasons cited by the QRP officials were (1) the impact of enhanced upfront filters in the electronic filing system that prevented bad returns from getting into the system and (2) a decision to focus QRP efforts on certain kinds of cases. Although IRS was able to meet its processing goals (such as cycle time, processing accuracy, and refund timeliness) in 1996, those goals were based on expectations as to what IRS could achieve with the systems and procedures currently in place. In that regard, there is general agreement that much can be done to improve those systems and procedures.
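The 31-percent QRP staffing decrease reported above is a standard percentage-change calculation. A minimal sketch follows; the helper function name is ours, chosen for illustration:

```python
def percent_change(old: float, new: float) -> float:
    """Percentage change from old to new; a negative value indicates a decrease."""
    return (new - old) / old * 100.0

# QRP full-time-equivalent staff allocations for fiscal years 1995 and 1996,
# as reported by the QRP officials.
staffing_change = percent_change(553, 379)
print(f"QRP staffing change: {staffing_change:.0f}%")  # about -31%
```

The same calculation underlies the other year-over-year comparisons in this report, such as the 18-percent increase in TeleFile use in the original 10 states.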
IRS has initiated several efforts toward that end, including (1) providing alternatives to the filing of paper returns, (2) using scanning and imaging technology to eliminate the manual data transcription of paper returns, and (3) using lockboxes and direct deposits to expedite the processing of tax payments and refunds, respectively. Despite IRS’ generally successful performance during the 1996 filing season, there are still several concerns centering on IRS’ modernization efforts. For example, although more returns were filed using alternatives to the traditional paper form, the number of returns filed through one of those alternatives (electronic filing) fell short of IRS’ projections. Also, although a document scanning and imaging system that was intended to streamline parts of IRS’ paper-processing operations performed better in 1996, the system still is not meeting IRS’ performance expectations and may eventually cost much more than originally estimated. Although data on the results of IRS’ use of lockboxes to process Form 1040 tax payments indicate that the government is saving money, those savings are being diminished significantly by the extra cost associated with having taxpayers send not only their payments but also their returns to the lockbox banks. Finally, the expanded direct-deposit option for refunds for taxpayers who filed paper returns was not used as widely as IRS had anticipated. As of October 18, 1996, IRS had received about 118.1 million individual income tax returns, which was about 1.5 percent more than the 116.4 million returns received as of the same period in 1995. While the increase in the overall number of returns filed was small, the increase in the number filed through alternative methods was substantially higher than in 1995 (about 50 percent). IRS offers three alternatives to the traditional filing of paper returns (i.e., electronic filing, TeleFile, and Form 1040PC).
As shown in table 1, most of the growth in alternative filings was due to TeleFile and Form 1040PC. Table 1 also shows that, of the three alternatives, only electronic filing failed to meet IRS’ projections. Electronic filing has several benefits. It enables taxpayers to receive their refunds sooner than if they had filed on paper and gives them greater assurance that IRS has received their returns and that the returns are mathematically accurate. The benefit for IRS is that electronic filing reduces processing costs and facilitates more accurate processing. IRS began offering electronic filing in 1986, and 1995 was the first year since then in which the number of individual income tax returns received electronically decreased from the number received the prior year. IRS attributed that decline to the secondary effects of measures it implemented to combat filing fraud. IRS took several steps in an attempt to increase the use of electronic filing in 1996. For example, IRS (1) put increased emphasis on the availability of On-Line Filing, a program that allows taxpayers to file their returns, through a third party, via a personal computer-modem link, and (2) extended the period during which returns could be filed electronically by moving the closing date from August 15 (the filing deadline for taxpayers who get one extension to file) to October 15 (the filing deadline for taxpayers who get a second extension). Taxpayers’ use of electronic filing recovered somewhat in 1996—increasing to about 12.1 million individual income tax returns as of October 18 (about a 9-percent increase). According to IRS, a major contributor to this increase was growth in the Federal/State electronic filing program. Under that program, taxpayers can file both their federal and state income tax returns through one submission to IRS.
A taxpayer’s federal and state data are combined into one electronic record that is transmitted to IRS, which, in turn, makes the state portion of the data available to the state. IRS reported that about 3.2 million returns were filed under the Federal/State program in 1996 compared with about 1.6 million in 1995. Some of the increase in electronic filing in 1996 was also due to the steps discussed in the preceding paragraph. According to IRS data, 158,284 taxpayers had used the On-Line Filing option as of October 18, and about 22,000 taxpayers had filed electronically between August 9 and October 18, 1996. Despite the increase in 1996, electronic filings that year were still below the 13.5 million individual returns filed electronically in 1994 and below IRS’ projection of about 13.6 million returns in 1996. A major impediment to the growth of electronic filing is that the method is not completely paperless. Taxpayers must send IRS their W-2s and a signature document (Form 8453) after their return has been electronically transmitted. IRS must then manually input these data and match them to the electronic return. In an attempt to eliminate the paper associated with electronic returns, IRS tested the use of digitized signatures during the 1996 filing season. The goal of that test was to gauge the willingness of taxpayers and preparers to use an electronic signature pad in place of signing a Form 8453. The electronic signature was attached to the electronic return and both were transmitted to IRS. The test was conducted at three locations (two VITA sites located on military bases and a private, tax return preparation office). According to IRS officials, about 50 percent of the taxpayers who were offered the chance to participate in the test agreed to do so. Given the level of participation in 1996 and positive preparer feedback, IRS plans to expand the test in 1997, but details of that expansion will not be finalized until just before the filing season begins. 
Besides eliminating the paper associated with electronic returns, there are other steps IRS could take to increase the use of electronic filing. In October 1995, we reported that without some dramatic changes in IRS’ electronic filing program, many of the benefits available from electronic filing could go unrealized. We recommended that IRS (1) identify those groups of taxpayers that offer the greatest opportunity to reduce IRS’ paper-processing workload and operating costs if they filed electronically and (2) develop strategies that focus on eliminating or alleviating impediments that inhibit those groups from participating in the program. As of October 9, 1996, IRS was finalizing a new electronic filing strategy. TeleFile generally provides the same benefits to taxpayers and IRS as electronic filing. However, TeleFile is more convenient and less costly than electronic filing because the latter requires that taxpayers go through a third party. The increase in taxpayer use of TeleFile in 1996 was due primarily to the program’s expansion nationwide. As shown in table 1, IRS received about 2.8 million TeleFile returns in 1996, when TeleFile was available to taxpayers in 50 states, compared with 680,000 in 1995, when TeleFile was available in only 10 states. Although most of the increase was due to the program’s nationwide expansion in 1996, TeleFile use also showed a significant rate of increase in the 10 states that were in the program in 1995 (from 680,000 returns in 1995 to 804,732 in 1996—an 18-percent increase). A major change that might have contributed to the increase in TeleFile use was IRS’ decision to make TeleFile paperless in 1996. Unlike past years, taxpayers did not have to mail their W-2s or a signature document to IRS. Instead of the signature document, taxpayers used a personal identification number that was provided by IRS.
IRS’ Internal Audit Division reviewed the 1996 TeleFile Program and concluded that management had “effectively prepared for and successfully implemented” the nationwide expansion of TeleFile. For example, Internal Audit noted that (1) its sample of returns filed through TeleFile showed that all tax calculations were correctly computed and that data had been posted accurately to IRS’ master file of taxpayer accounts and (2) taxpayer demand for TeleFile during the 1996 filing season was generally met. However, Internal Audit also noted that IRS had not completed a system security certification and accreditation and thus had no assurance that taxpayer data were adequately secured. According to Internal Audit, certification is a comprehensive evaluation of a system’s security features; accreditation is a declaration that the system is approved to operate. As of November 21, 1996, according to the TeleFile Project Manager, IRS was working to complete the certification and accreditation. Internal Audit’s evaluation and various statistics compiled by IRS, including the results of an IRS survey of TeleFile users, indicate that TeleFile worked very well in 1996. For example, about 92 percent of the users surveyed by IRS said that they were very satisfied with TeleFile. However, it is important to note that only about 10 to 14 percent of the more than 20 million 1040EZ filers who IRS estimated would be eligible to use the system in 1996 actually used it. IRS did not survey the nonusers because, according to IRS officials, past surveys showed that the most important reason eligible users cited for not using TeleFile was their preference for a paper version. However, those past surveys did not probe into why nonusers preferred paper. According to the TeleFile Project Manager, IRS plans several changes to TeleFile for the 1997 filing season, which he estimates will increase the participation rate to about 25 percent. 
For example, he said that eligibility to use TeleFile will be extended to married persons filing jointly and TeleFile users will be able to take advantage of the direct-deposit option that was available to other taxpayers in 1996 (this option is discussed later in this report). The most significant change for 1997, in terms of its potential impact on taxpayer participation, is IRS’ decision to revise the tax package sent to persons eligible to use TeleFile. Instead of sending eligible users a package that also contains a Form 1040EZ and related instructions, in case they choose not to use TeleFile, IRS has decided to send them a much smaller package that contains only the TeleFile worksheet and instructions. Although this action may encourage more persons to use TeleFile and reduce IRS’ overall printing and mailing costs, it could be seen as imposing a burden on persons who, for whatever reason, prefer not to use TeleFile and would, in that case, need a Form 1040EZ. It is unclear how taxpayers will react to this change. On the one hand, IRS summaries of three 1040EZ/TeleFile focus groups held in August and September 1996 indicated that focus group participants did not view the noninclusion of Form 1040EZ as a burden because they could easily get a copy, if needed, from their local library or post office. On the other hand, a mail survey that IRS sent to a random sample of TeleFile users in 1996 showed that about 28 percent of the respondents thought it was very important that the 1040EZ information be included in the TeleFile package. The increase in the use of Form 1040PC during the 1996 filing season resulted, in part, from the largest user’s (a tax return preparation firm) rejoining the program after dropping out in 1995. For the 1995 filing season, IRS initially required that preparers provide taxpayers with a specifically formatted legend explaining the Form 1040PC.
However, after the 1995 filing season began, IRS decided not to require the specifically formatted legend but to allow preparers to provide any type of descriptive printout that explained each line on the taxpayer’s Form 1040PC. According to an executive of the previously mentioned tax return preparation firm, (1) the firm chose not to participate in the program in 1995 rather than comply with the requirement for a specifically formatted legend and (2) IRS’ decision to change its requirement came too late for the firm to change its plans. The firm then rejoined the program for the 1996 filing season. The Form 1040PC was developed to reduce the number of pages that a standard Form 1040 requires, which is a benefit to taxpayers and IRS, and to streamline paper processing. Although use of the Form 1040PC reduces the amount of paper, IRS has not yet realized the full processing efficiencies available from that form. Because of problems encountered with IRS’ new document scanning and imaging system, as discussed in the next section of this report, IRS terminated plans to have Forms 1040PC scanned and, instead, is manually keying data from the forms into its computers. The Distributed Input System (DIS), which is IRS’ primary data entry system for paper tax returns and other paper documents submitted by taxpayers, has been in operation since 1984. Although DIS generally performed without major problems during the 1996 filing season, its age is a source of concern within IRS. IRS had planned to replace DIS with two document scanning and imaging systems. The first replacement system, the Service Center Recognition/Image Processing System (SCRIPS), was implemented nationwide in 1995 and has yet to perform to the expectations IRS set at that time. On October 8, 1996, IRS announced that the second planned system, the Document Processing System (DPS), was being terminated.
IRS experienced significant performance problems with SCRIPS in 1995, which was the system’s first year of nationwide operation. Two major problems were significant system downtime and slow processing rates. IRS made some hardware and software modifications that helped improve the performance of SCRIPS during the 1996 filing season. IRS officials in all five SCRIPS service centers told us that SCRIPS performed significantly better during the 1996 filing season than it did in 1995. Specifically, IRS data for April through June of 1995 and 1996 (the first 3 months for which IRS had comparable data) indicate that system downtime decreased from 791 hours in 1995 to 43 hours in 1996. Despite the improved performance in 1996, SCRIPS (1) is still not processing all of the forms that it was expected to process and (2) may cost more than originally estimated. In an October 1994 business case for SCRIPS, IRS said that, by 1996, the system would be processing all Federal Tax Deposit coupons and information returns, all Forms 1040EZ, 50 percent of the Forms 1040PC, and 93 percent of the Forms 941 (Employers Quarterly Federal Tax Return). In fiscal year 1996, SCRIPS processed all Federal Tax Deposit coupons and information returns, as expected. However, SCRIPS only processed about 50 percent of the Forms 1040EZ and did not process any Forms 1040PC or Forms 941. In addition, the cost estimate for SCRIPS has increased from $133 million in October 1992 to a current estimate of $288 million. Part of the increase is due to the inclusion of certain costs, such as for maintenance, that were not part of the original estimate. We will be issuing a separate report that has more information on SCRIPS’ problems in 1995, its performance in 1996, and IRS’ plans for the system in the future. A second scanning system, DPS, was to replace SCRIPS and expand IRS’ imaging capability to more complex tax forms. 
IRS expected DPS to begin handling some of the DIS workload by the start of the 1998 filing season. However, due to concerns about the future of DPS, IRS reassessed its strategy for processing paper tax returns. According to IRS, part of the reassessment involved options, such as outsourcing the processing of some returns and/or acquiring a new manual data entry system to replace DIS. As of September 26, 1996, according to a cognizant IRS official, the reassessment was done but a final decision had not yet been reached. That reassessment took on added importance when IRS announced, on October 8, 1996, that DPS was being terminated. IRS attributed that decision, at least in part, to budgetary concerns and “the need to prioritize investments in systems that have a direct and immediate benefit on improved customer service, such as better telephone access.” The uncertainty of IRS’ plans for processing paper returns means that IRS may have to continue to rely on DIS longer than it had originally expected. In a February 1996 report, Internal Audit said that DIS could be required to process forms until 2003. Over the course of the 1996 filing season, various service center officials expressed concern about IRS’ ability to adequately maintain and repair the system. Despite their concerns, DIS performed satisfactorily during the filing season. Officials also told us that, until this year, IRS had not kept detailed maintenance records to capture DIS downtime. Thus, an accurate comparison of DIS downtime and system reliability over the years is not possible. We recently began a review of IRS’ ability to maintain current operating levels with its existing systems. IRS envisions that by 2001, most tax payments will be processed by lockbox banks rather than by IRS service centers. The banks process the payments and transfer the funds to a federal government account. 
The payment and payer information are then recorded on a computer tape and forwarded to IRS for use in updating taxpayer accounts. One reason for using lockboxes is the expectation that tax payments will be deposited faster into the Treasury. Faster deposits mean that the government has to borrow less money to fund its activities and less borrowing means lower interest costs (otherwise known as “interest cost avoidance”). Since 1989, IRS has used lockboxes to process payments sent in with estimated tax returns (Forms 1040ES). For the last several years, IRS has been testing the use of lockboxes to process payments sent in by individuals when they file their income tax returns (Forms 1040). For the 1996 test, IRS sent special Form 1040 packages to specific taxpayers. These packages included (1) mailing instructions and (2) a payment voucher that could be scanned by optical character recognition equipment. The test packages contained one return envelope with two different tear-off address labels. One label, which was addressed to a lockbox, was to be used for a return with an accompanying tax payment, and the other label, which was addressed to a service center, was to be used for a return with no payment. Taxpayers with payments were instructed to put their returns, payments, and vouchers in the envelope in their tax packages and to affix the label addressed to the lockbox. The bank that serviced the lockbox was to separate the returns from the payments, deposit the payments, record the payment information on a computer tape, sort the returns, and forward the returns and the computer tape to IRS for processing. IRS had tested another mailing method during the 1994 and 1995 filing seasons. This test involved the use of two envelopes. One envelope was addressed to a service center, and the other envelope was addressed to a lockbox. 
Taxpayers were instructed to put their tax returns in the envelope addressed to the service center and to put any payments and vouchers in the envelope addressed to the lockbox. The bank was to process the payments and vouchers as previously described. IRS has decided, for the 1997 filing season, to continue testing the two-label method in certain tax packages. According to an IRS official responsible for the lockbox program, IRS will no longer use the two-envelope approach due to the increased taxpayer burden IRS anticipates the approach would cause. She explained that IRS has found, in its studies of taxpayer behavior, that, among other things, taxpayers who participated in the test preferred to keep their remittances and returns together. Because of this, IRS believes that asking taxpayers to split their tax payments from their returns is burdensome. The studies referred to by IRS, all of which were done by a contractor in 1993 and 1994, included mail and telephone surveys of about 1,900 taxpayers, interviews with 46 individuals, and 5 taxpayer focus groups. We reviewed the contractor’s reports and considered the results to be inconclusive as they related to burden. For example, of the people surveyed by mail and telephone who said they remembered what they did in the test, 45.9 percent said that they felt uneasy about mailing their checks and returns in separate envelopes while 41.2 percent said that they did not feel uneasy (the other 12.9 percent did not know). The results of the 46 interviews showed a similar lack of consensus, in our opinion. Several people said that they preferred using one envelope because it was easier or because they were worried about the payments and the tax returns not getting linked if they were sent to two different places. But, several other people said that they preferred using two envelopes because they were concerned about the confidentiality of their tax returns or the increased risk of their returns getting lost. 
Even some of those who preferred one envelope expressed concern about the banks’ involvement in handling their returns.

Burden is one issue to consider in deciding on the use of lockboxes; cost is another. Information we received from IRS and Treasury’s Financial Management Service (FMS) indicates that having taxpayers send their returns to the lockboxes along with their payments has substantially increased the cost of the lockbox service to the government. During the first 8 months of the 1996 filing season, according to IRS, the lockbox banks had processed about 7 million Form 1040 payments. According to FMS, the government paid the banks an average of $2.03 per payment in 1996—98 cents to process each payment, 92 cents to sort each accompanying tax return, and 13 cents to ship each return to a service center—and the same fees will be in effect until April 1, 1997. Fees after that date are subject to negotiation between FMS and the banks. Cognizant FMS staff said that the banks have been charging such a high fee for sorting returns to encourage IRS to stop having the returns sent to the banks.

Service centers process returns received from a lockbox bank in the same manner as they process returns that come directly from taxpayers, with one exception—the returns coming from the bank do not have to be sorted by IRS. According to IRS data, not having to sort the returns saves IRS about 37 cents a return—much less than the 92 cents per return being charged by the banks. Thus, assuming a volume of 7 million returns, the government paid about $6.4 million for a service (return sorting) that it could have done itself for about $2.6 million, or about $3.8 million less. Shipping those returns cost the government another $910,000.

According to FMS, the use of lockboxes to process Form 1040 tax payments enabled the government to avoid interest costs of $15.7 million in fiscal year 1996. This interest cost avoidance compares with $1.6 million in fiscal year 1995.
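The sorting and shipping figures above follow from simple per-return arithmetic. The sketch below is illustrative only, not a calculation IRS or FMS performed: it uses the per-return amounts cited in the report (variable names are ours), and the interest-avoidance helper at the end uses a hypothetical 5-percent borrowing rate solely to show how faster deposits translate into avoided interest.

```python
# Figures cited in the report, kept in integer cents to avoid float rounding.
RETURNS = 7_000_000                  # Form 1040 payments, first 8 months of 1996
BANK_SORT_FEE_CENTS = 92             # banks' per-return fee to sort tax returns
IRS_SORT_COST_CENTS = 37             # IRS' own per-return sorting cost
SHIP_FEE_CENTS = 13                  # per-return fee to ship returns to a service center

bank_sorting_cents = RETURNS * BANK_SORT_FEE_CENTS   # $6,440,000 ("about $6.4 million")
irs_sorting_cents = RETURNS * IRS_SORT_COST_CENTS    # $2,590,000 ("about $2.6 million")
extra_sorting_cents = bank_sorting_cents - irs_sorting_cents  # $3,850,000 ("about $3.8 million")
shipping_cents = RETURNS * SHIP_FEE_CENTS            # $910,000

total_extra_cents = extra_sorting_cents + shipping_cents  # roughly $4.7 million in all

def interest_avoided(amount, days_earlier, annual_rate=0.05):
    """Illustrative interest cost avoidance from depositing `amount` dollars
    `days_earlier` days sooner; the 5% annual borrowing rate is hypothetical."""
    return amount * annual_rate * days_earlier / 365
```

Run as written, the first four results reproduce the report's $6.4 million, $2.6 million, $3.8 million, and $910,000 figures. Actual interest avoidance depends on deposit amounts, timing, and Treasury's real borrowing rate, none of which the report breaks out.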
Because these savings result from faster processing of tax payments, having the banks sort and ship the tax returns does not add to the savings and could, by increasing the banks’ workload, cause processing delays that would reduce any savings. In an August 30, 1996, letter to Treasury’s Fiscal Assistant Secretary, IRS’ Deputy Commissioner acknowledged the high costs associated with having returns sent to lockboxes. In a September 11, 1996, reply, the Assistant Secretary also expressed some concern about the costs associated with the processing of Form 1040 tax payments through lockboxes. The Assistant Secretary said that “[t]he most appealing option from a cost standpoint is the two-envelope concept. This option . . . makes good business sense as tax payments and tax returns are sent to the appropriate place best prepared to handle them.” As a way to lower costs, the Assistant Secretary suggested that IRS explore the possibility of not having the banks sort the returns and having the sorting done by the service centers instead. We discussed this option with officials in IRS’ National Office and at one service center. We were told that it would be difficult for service centers to sort the returns once they had been separated from the payments because the service center would not know if the taxpayer had fully paid his or her tax liability. According to the IRS officials, that distinction is important because, as previously discussed, returns involving less than full payment are given priority processing to enable more timely issuance of the balance-due notice to the taxpayer. IRS had considered adding a checkbox on the return for the taxpayer to indicate whether full payment was enclosed with the return. According to IRS, asking taxpayers to check such a box would be another form of burden—although not a significant one.

Security is a third issue that needs to be considered in deciding how to use lockboxes.
As previously noted, several individuals who participated in the focus groups and interviews about IRS’ use of lockboxes expressed concern that their returns would be lost or their return data would be misused. We did not do a thorough analysis of security at the lockbox banks. However, we reviewed security and processing procedures at 2 of the 10 lockbox banks and found that controls exist to minimize the risk of lost or misused tax data. IRS’ lockbox procedures require that the tax returns be separated from the payment as soon as the envelope is opened. Only personnel who open the envelopes and their supervisors are to have access to the returns. Security cameras are to monitor all of the lockbox processing. The returns are to be bundled and packed into boxes as soon as they are separated from the payment. Each day, the boxes of returns are to be shipped by bonded courier to the service center. Background checks, such as a criminal record check, are to be done on lockbox personnel hired by the bank. These are the same checks that are to be done on IRS service center personnel with the same duties. Bank personnel, like service center employees, are to sign statements of understanding about the confidentiality of the information they will process and the penalties for disclosing any of this information. IRS and FMS lockbox coordinators are to visit the banks to ensure compliance with these procedures and are to submit quarterly reports on the basis of those visits. An FMS staff person who was responsible for IRS’ lockbox processing program told us there have been no known incidents of disclosure of taxpayer information from a lockbox bank. During our visits to the two banks, we observed the security-surveillance cameras in operation and verified that badges were being worn by all personnel and that access to the processing area was controlled by a guard. 
We also reviewed judgmental samples of personnel files and, for each employee whose file we reviewed, we (1) found that disclosure statements were maintained and (2) saw evidence that background checks had been done.

Unlike in past years, IRS allowed taxpayers who filed paper returns in 1996 to request that their refunds be deposited directly to their bank accounts through an electronic fund transfer. IRS included a Form 8888 (Request for Direct Deposit of Refund) in almost all paper tax packages. IRS estimated that about 5 million taxpayers who filed paper returns would request the direct-deposit option and, on average, that the option would enable paper filers to get their refunds 10 days faster than if they had waited for a paper check. IRS also estimated that it would cost about 25 percent less to process a Form 8888 than it costs to mail a paper refund check (20 cents per form vs. 27 cents per paper check). Only about 1.6 million taxpayers took advantage of the direct-deposit option. An IRS official said that IRS will retain its goal of about 5 million direct-deposit refunds for the 1997 filing season. IRS has taken a couple of steps to enhance its chances of achieving that goal. Most significantly, it has eliminated the Form 8888. Instead of having a separate form, most of the individual income tax forms will be revised to provide space for the taxpayer to request a direct deposit and to provide the necessary bank account information. Also, as previously noted, TeleFile users will be able to request a direct deposit in 1997.

Was the 1996 filing season a success? The answer depends on one’s perspective. From IRS’ standpoint, it was a success. IRS met or exceeded all but one of its performance goals and was very close to meeting the other. IRS was able to process individual income tax returns and refunds without any apparent problem, and its aging computer systems made it through another filing season.
From the taxpayer’s perspective, the filing season was also successful in many key respects. For example, relatively few refunds were delayed in 1996, unlike 1995 when millions of taxpayers were angered by IRS’ decision to delay their refunds while it checked dependent and EIC claims; more taxpayers were given the opportunity to file by telephone and to have their refunds directly deposited into their bank accounts; and IRS’ World Wide Web site on the Internet provided a convenient source of information for taxpayers with access to a computer. However, there were some problems in 1996. Although the accessibility of IRS’ toll-free telephone assistance improved, taxpayers continued to have problems reaching IRS by telephone, and some taxpayers may have been inconvenienced by the reduction in IRS’ walk-in services. IRS has several efforts under way to modernize the systems and procedures it has used for many years to process returns, remittances, and refunds. These efforts are essential if IRS is to successfully meet the demands of future filing seasons. To date, the results of those efforts have been mixed. IRS has taken steps to enhance its efforts. For example, IRS is (1) expanding eligibility for TeleFile and taking other steps in an effort to increase the use of that filing alternative, (2) working to make electronic filing paperless by broadening its test of digitized signatures, (3) making it easier for taxpayers to request direct deposits of their refunds, and (4) reassessing its strategy for processing paper tax returns. Even if IRS is successful in increasing the TeleFile participation rate to 25 percent in 1997, that would still leave a large number of eligible users who choose not to use TeleFile. We believe that IRS’ efforts to expand the use of TeleFile could be enhanced if it had more specific information on why eligible users prefer to file on paper. More specifics might help IRS identify barriers to TeleFile use and develop mitigating strategies. 
We also question whether IRS’ decision to have taxpayers send both their tax returns and their tax payments to lockboxes and to have banks sort those returns adequately considered both the costs to the government and taxpayer burden. Although it is important to minimize taxpayer burden, the evidence we were given was not convincing concerning the amount of burden associated with using two envelopes, especially in light of the extra cost to the government associated with using one envelope (about $4.7 million during the first 8 months of the 1996 filing season). It is understandable that persons contacted by IRS’ contractor, when asked to choose between one or two envelopes, would pick one, because it is easier to put everything into one envelope than to segregate things into two envelopes and pay additional postage. But, it is not clear that those persons considered the use of two envelopes an unreasonable burden. Nor is it clear how those persons might have responded if they were told that the use of one envelope causes the government to spend several million dollars more than it would if taxpayers used two envelopes. The cost associated with using lockboxes to process Form 1040 tax payments might become less of an issue if the government is able to negotiate bank fees for sorting that are more comparable to the service center costs for that activity. Absent lower fees, an alternative is to continue to have returns sent to the bank but to have the banks ship the returns to the service centers unsorted. That would require IRS to add a checkbox to the return (which would also be required if IRS decided to use two envelopes) but checking a box would likely be perceived by taxpayers as less of a burden than using two envelopes. 
However, while a reduction in bank fees or a decision to accept returns from the banks unsorted would make the one-envelope method more advantageous, neither would relieve the anxiety expressed by some taxpayers about their returns being lost or misused by bank personnel.

If most eligible TeleFile users do not use the system during the 1997 filing season, as IRS is anticipating, we recommend that the Commissioner of Internal Revenue conduct a survey to determine why, including more specific information on why the nonusers prefer to file on paper, and take steps to address any identified barriers to increased user participation. If the government is unable to negotiate lockbox fees that are more comparable to service center costs, and in the absence of more compelling data on taxpayer burden, we recommend that the Commissioner, for filing seasons after 1997, either discontinue having returns sorted by the banks or reconsider the decision to have taxpayers send their tax returns to the banks along with their tax payments. We are not making any recommendations in this report to address problems with telephone accessibility and electronic filing because we have recently issued separate reports on these topics. We will also be issuing a separate report on SCRIPS.

We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. Responsible IRS officials, including the Assistant Commissioner for Forms and Submission Processing, the National Director for Submission Processing, and the National Director for Customer Service (Planning and Systems), provided IRS’ comments in a November 6, 1996, meeting. Those comments were reiterated in a November 18, 1996, memorandum from the Acting Chief of Taxpayer Service. IRS officials also provided some factual clarifications that we incorporated in the report where appropriate.
IRS agreed with our recommendation that it determine why more eligible taxpayers do not use TeleFile, including more specific information as to why nonusers prefer to file on paper. IRS officials told us that by the end of fiscal year 1997, IRS would conduct a focus group study of TeleFile nonusers to determine why they prefer to file on paper and to identify any barriers. IRS officials said that steps have also been taken to address some concerns identified by past nonuser surveys. IRS believes that taxpayers’ preference for paper returns is linked to their familiarity with the form. The TeleFile worksheet that taxpayers had been instructed to fill out and maintain as a record of their filing did not have the same “official” appearance as a tax form. For the 1997 filing season, according to IRS officials, TeleFile users will be instructed to complete a TeleFile Tax Record instead of a worksheet. As described by the officials, the TeleFile Tax Record will (1) include lines for the taxpayer’s name and address, (2) look more like the Form 1040EZ, and (3) be an official document. IRS hopes this change will provide potential TeleFile users with a higher comfort level. IRS officials also said that advertisements and other publicity tools that were used in 1996 will be emphasized again in 1997 to educate the public on the simplicity of using TeleFile. In commenting on our second recommendation, IRS officials said that IRS, in conjunction with FMS, has formed a task force to identify a long-term solution for 1998 and beyond for directing Form 1040 tax payments to lockboxes. According to the officials, the group has been tasked with (1) identifying options that complement Treasury’s goals of increasing the availability of funds and reducing the cost of collecting federal funds, (2) reviewing what is required of lockboxes by IRS to minimize operational and ancillary costs, and (3) making recommendations to management. 
The group is scheduled to present its findings to management by March 1997. This time frame should provide IRS with information to make a decision on Form 1040 tax payment processing that could be implemented for the 1998 filing season.

We are sending copies of this report to the Subcommittee’s Ranking Minority Member, the Chairmen and Ranking Minority Members of the House Committee on Ways and Means and the Senate Committee on Finance, various other congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, and other interested parties. Major contributors to this report are listed in appendix II. Please contact me on (202) 512-9110 if you have any questions.

[Appendix table: IRS telephone performance goals and accomplishments for the 1995 and 1996 filing seasons, including a goal of answering 19.2 million calls, 22.9 million calls answered (116 and 119.5 percent of schedule), a 50-percent level-of-access goal, accuracy rates of 90 and 91 percent, and 4.2 million versus 3.9 million calls answered (95.7 and 98.3 percent of schedule).]

Table notes: Code and Edit staff prepare returns for computer entry by, among other things, ensuring that all data are present and legible. The “returns processing productivity” indicator is based on the number of weighted returns processed, which includes all returns whether they were processed manually, through scanning equipment, or electronically. The different types of returns are weighted to account for their differing processing impacts. For example, a paper Form 1040 has a higher weighting factor than a paper Form 1040EZ, which in turn has a higher weighting factor than electronically processed returns. Cycle time is the average number of days it takes service centers to process returns.
The “refund timeliness” indicator is based on a sample of paper returns and is calculated starting from the signature date on the return to the date the taxpayer should receive the refund, allowing 2 days after issuance for the refund to reach the taxpayer. As discussed in our report on the 1995 filing season (GAO/GGD-96-48), the 36-day accomplishment cited for 1995 was slightly understated by the exclusion of certain refunds that, according to IRS’ standards, should have been included. That issue was not a problem in 1996. The “calls scheduled to be answered” indicator is the number of telephone calls IRS believes its call sites will be able to answer with available resources. The indicator does not reflect the number of calls IRS expects to receive. The “level of access” indicator is the number of calls answered divided by the number of individual callers. See pages 4 to 6 for more information on this indicator.

Major contributors to this report: Katherine P. Chenault, Senior Evaluator; Jyoti Gupta, Evaluator.
Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) overall performance during the 1996 tax filing season, focusing on: (1) changes in 1996 that relate to taxpayer services and the processing of taxpayer refunds; and (2) some of IRS' efforts to modernize its processing activities. GAO found that: (1) IRS met or exceeded its timeliness and accuracy goals for processing individual income tax returns and issuing taxpayer refunds, answered more telephone calls from taxpayers seeking assistance than it had planned to answer, and received more returns through alternative filing methods than it had projected; (2) for the 1996 tax filing season, IRS revised its procedures to limit the number of delayed refunds to the volume of cases it could review and to focus on the cases most in need of review; as a result, IRS delayed many fewer refunds in 1996 than it did in 1995 and avoided the kind of negative press it received in 1995, when taxpayers and tax return preparers reacted to the delays; (3) recognizing that much could be done to improve its systems and procedures, IRS has initiated several modernization efforts, and those efforts achieved mixed results in 1996; (4) IRS is developing a strategy to increase the use of electronic filing and reassessing its strategy for processing paper returns; and (5) IRS' decision to have taxpayers send not only their payments but also their tax returns to a lockbox, and to have the banks sort those returns before sending them to IRS, has increased program costs, unnecessarily in GAO's opinion, by $4.7 million.
In 1994, six Members of Congress expressed concern about a White House official’s use of a military helicopter to visit Camp David and a golf course on May 24, 1994. Accordingly, we were asked to determine (1) the frequency of helicopter flights by White House staff from January 21, 1993, to May 24, 1994, and (2) whether applicable White House procedures were followed in requesting and approving the May 24 trip to Camp David and the golf course. Since 1976, the Marine Corps HMX-1 Squadron in Quantico, Virginia, has been responsible for providing helicopter support to the White House. The squadron is specifically tasked to fly the President, Vice President, First Lady, wife of the Vice President, and visiting Heads of State. White House staff may be authorized to use HMX-1 helicopters when they are directly supporting the President, Vice President, First Lady, and wife of the Vice President or conducting immediate White House activities. Manual records of flights taken by, or in support of, the President, Vice President, First Lady, wife of the Vice President, or Heads of State, are maintained at the squadron’s Quantico facilities. According to HMX-1 manual records, approximately 1,200 flights were flown in support of the President, Vice President, First Lady, wife of the Vice President, and Heads of State during the 16 months before May 24, 1994. These records indicated that, as previously disclosed by the White House, staff members flew in military helicopters 14 times without the President, Vice President, First Lady, wife of the Vice President, or Heads of State during this period. We performed several tests, which I will discuss, to verify the completeness and accuracy of the HMX-1 manual records. Our work did not identify any additional White House staff flights. We reviewed approximately 1,200 manual records (HMX-1 after-action reports) of flights by or in support of the President, Vice President, First Lady, wife of the Vice President, and Heads of State. 
The after-action report, which is filed by the pilot, identifies the passengers, an itinerary, and the flight crew and is retained by the HMX-1 White House Liaison Office in Quantico. Among the after-action reports we examined were the 14 flights previously reported by the White House as the only flights taken by White House staff when the President, Vice President, First Lady, wife of the Vice President, or Heads of State were not on board. According to officials from the White House Military Office and the HMX-1 Squadron and an associate counsel to the President, the after-action reports we reviewed covered all White House-related flights between January 21, 1993, and May 24, 1994. We performed four tests to independently verify the completeness and accuracy of the manual records maintained by the HMX-1 Squadron. As our first test, we compared the President’s itinerary, as reported in the Weekly Compilation of Presidential Documents, with HMX-1 after-action reports. We then listed instances in which the President had traveled, but no after-action reports existed. A White House official then provided us documents from the Presidential Diarist and the Secret Service. These documents verified that the President had used other forms of transportation on the days in question. Next, we compared the records maintained at HMX-1 with the flight records in the Navy’s automated Naval Flight Record Subsystem. This database is part of a larger automated flight record system used to track and manage all naval aircraft flights. The database is maintained by the Navy and the Marine Corps and contains flight information provided by pilots after each flight. The automated data we obtained covered 6,120 flights of HMX-1 aircraft from January 21, 1993, to May 24, 1994. We found the records maintained at HMX-1 to be more complete than those maintained in the database. 
Third, during our review of the previously reported 14 White House staff flights, we found that 10 had a squadron-specific mission purpose code. According to a Marine Corps official, pilots are to assign this HMX-1 squadron-specific mission purpose code to all flights for logistical support of an executive aircraft, as well as any flight by White House staff that is not directly associated with a flight taken by the President, Vice President, First Lady, wife of the Vice President, or Heads of State. We searched the automated database for all flights with this specific code and found 72 more flights. Of the 72 flights, 34 were included in the records we had reviewed at HMX-1. The remaining 38 flights had no after-action reports. Because it was unclear whether after-action reports should have been completed for the 38 flights, we asked for clarification. We ultimately confirmed why the 38 flights had not been included in the flight records we reviewed at the HMX-1 Squadron. Some flights with no after-action reports included flights to and from contractors for maintenance, flights to test facilities, and support for presidential travel. As one last check that the squadron had not inadvertently omitted a flight from the after-action reports we had reviewed, we interviewed 52 pilots still assigned to the squadron who had flown a White House mission during the 16-month period of our review. In the presence of officials from the White House and the HMX-1 Squadron, we asked the pilots if they had ever flown a White House mission without filing an after-action report. All the pilots said that they always filed after-action reports when they flew missions in support of the White House. At the time of the May 24 trip to Camp David and a golf course, White House policy required that White House Military Office officials approve all HMX-1 helicopter travel by White House staff. 
The former Deputy Director of the White House Military Office stated that he had approved the use of an HMX-1 helicopter for the May 24 trip. However, no written procedures detailed how such flights were requested or approved. White House Military Office officials told us that the infrequency of helicopter use by the White House staff made written policies and procedures unnecessary; each request had to be considered on an individual basis. The former Deputy Director also told us that the request and approval for helicopter service for the May 24 trip, like most requests for helicopter service, were made orally. Shortly after the May 24 trip, the White House changed the approval authority for staff’s use of military aircraft. According to a May 31, 1994, memorandum, the approval authority was elevated from the level of the Deputy Director of the White House Military Office to the White House Chief of Staff or the Deputy Chief of Staff. For trips that involve the Chief of Staff, the approving authority is now either the White House Counsel or the Deputy White House Counsel. Now let me turn to the issue of senior-level officials traveling on government aircraft. Approximately 500 fixed-wing airplanes and 100 helicopters are used for DOD’s OSA mission, which includes transporting senior-level officials in support of command, installation, or management functions. The Secretary of Defense has designated some DOD senior-level travelers as required use travelers (1) because of their continuous requirement for secure communications, (2) for security, or (3) for responsive transportation to satisfy exceptional scheduling requirements. However, the military department secretaries may apply more stringent restrictions in determining which four-star officers within their respective departments must use these aircraft. DOD policy excludes some aircraft, such as those assigned to the Air Force 89th Military Airlift Wing, from the OSA mission. 
The 89th Wing provides worldwide airlift support for the President, Vice President, and other high-level officials in the U.S. and foreign governments. The Office of Management and Budget has made the General Services Administration (GSA) responsible for managing civilian agencies’ aircraft programs. DOD, like the civilian agencies, is required to report data to GSA semiannually on senior-level, civilian officials’ travel. DOD’s policy states that the OSA inventory of fixed-wing aircraft should be based solely on wartime requirements. During our review, however, we found that each service had established its own wartime requirements based on differing definitions and methodologies. As of April 1995, the services reported 520 fixed-wing aircraft in DOD’s OSA inventory. Our review showed that only 48 OSA aircraft were used in theater during the Persian Gulf War, which is less than 10 percent of the April 1995 OSA inventory. In 1994, the Air Force determined that its OSA inventory exceeded its wartime requirements, whereas the Army, Navy, and Marine Corps determined that their OSA inventories were slightly less than wartime requirements. However, a February 1993 report on Roles, Missions, and Functions issued by the Chairman of the Joint Chiefs of Staff and the May 1995 report of the Commission on Roles and Missions of the Armed Forces indicated that the existing number of aircraft dedicated to OSA missions had been and continued to be excessive. To correct this problem, we recommended in our June report that the Secretary of Defense (1) provide uniform guidance to the services concerning how to compute OSA wartime requirements, (2) develop the appropriate mechanisms to ensure the availability of each service’s aircraft to help fulfill the OSA needs of the other services, and (3) reassign or otherwise dispose of excess OSA aircraft. 
Additionally, in our September report on the 1996 DOD operation and maintenance budget, we recommended that Congress direct the Air Force to reduce its OSA inventory to its wartime requirements, which would save $18.1 million in operation and maintenance costs. To address the recommendations in our June report, the Joint Chiefs of Staff studied OSA wartime requirements across DOD, including how the availability of each service’s aircraft could help fill the needs of the other services. The resulting October 1995 report established a joint requirement for 391 OSA aircraft and developed a common methodology for determining OSA requirements. The Chairman submitted the report later in October to the Deputy Secretary of Defense, requesting his approval for the OSA fleet to be sized at 391 aircraft, which would mean a reduction of over 100 aircraft. The disposition of excess OSA aircraft is currently under review. Further, DOD plans to update its policy on OSA to formalize the definition, use, and management of OSA aircraft. Plans are also underway to assign to the Joint Chiefs of Staff responsibility for determining DOD’s annual OSA requirements. Adverse publicity and increased congressional concern about potential abuses resulted in a number of statements during 1994 by the White House and the Secretary of Defense emphasizing the need for senior officials to carefully consider the use of commercial transportation instead of government aircraft. On May 9, 1995, the Deputy Secretary of Defense issued a revised policy memorandum that eliminates an entire category of “required mission use” for justifying individual OSA flights and requires that many more OSA flights be justified based on a cost comparison between DOD’s OSA aircraft and commercial carriers. Our review indicated that from March 1993 to February 1995, the number of senior-level officials’ OSA flights generally declined. 
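The fleet-sizing figures above are internally consistent, as a short calculation shows (all numbers come from the testimony; the variable names are ours):

```python
# Check of the OSA fleet-sizing figures cited in the testimony.
inventory_1995 = 520      # fixed-wing OSA aircraft reported as of April 1995
gulf_war_use = 48         # OSA aircraft actually used in theater, Persian Gulf War
joint_requirement = 391   # fleet size proposed in the October 1995 joint report

# "less than 10 percent of the April 1995 OSA inventory"
assert gulf_war_use / inventory_1995 < 0.10
# "a reduction of over 100 aircraft"
assert inventory_1995 - joint_requirement > 100
```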
During that period, the number of senior officials’ OSA flight segments per month ranged from a high of about 1,800 in March 1993 to a low of about 1,000. We found that 16 of the 20 destinations most frequently traveled to by senior-level DOD officials were also served by commercial airlines with government contracts. For example, 1,619 flight segments from Andrews Air Force Base, Maryland, to Wright-Patterson Air Force Base, Ohio, could have been served by government-contract airlines. It should be recognized, however, that some of the trips we identified were made by those senior-level officials required to use government aircraft and that the contract flights may not have provided the same scheduling flexibility made possible by government-owned aircraft. On October 1, 1995, the Deputy Secretary of Defense issued a new policy on travel that should help decrease the potential for abuse. The new policy (1) requires the services to use the smallest and most cost-effective mission-capable aircraft available; (2) requires the Secretary of Defense’s or the military department secretary’s approval for use of military aircraft by required use officials for permanent change-of-station moves; (3) prohibits the scheduling of training flights strictly to accommodate senior-level officials’ travel; (4) allows the military department secretaries to further restrict the required use designation for four-star officers in their respective departments; and (5) limits the use of helicopters for senior-level officials’ travel. Although senior-level officials’ use of helicopters in the Washington, D.C., area declined substantially between April 1994 and March 1995, these officials continued to use helicopters to travel between nearby locations. For both the Air Force and the Army, the most frequently traveled helicopter route was between Andrews Air Force Base and the Pentagon, a distance of about 15 miles. 
According to an Army memorandum, flying time for an Army UH-1H from Andrews Air Force Base to the Pentagon is about 24 minutes—at a cost of about $185. The same flight in an Air Force UH-1N would cost approximately $308. However, the actual cost to the government would be higher because all trips are round trips. In the case of the Army, the cost to get a helicopter to the Pentagon or Andrews Air Force Base must be included, which would increase the flight time to about 1 hour and the cost to about $460. We estimate that the same trip would cost about $9 by car and about $30 by taxi. Thus, for general comparison purposes, a trip between Andrews Air Force Base and the Pentagon on either an Army or Air Force helicopter would cost over $400 more than the same trip by car. In December 1994, the Secretary of the Army established a new policy prohibiting Army officials’ use of helicopter transportation between the Pentagon and Andrews Air Force Base except in unusual circumstances. The memorandum stated that the existence of unusual circumstances would be determined by the Secretary of the Army or the Chief of Staff of the Army. In our report, we recommended that the Department of Defense adopt this policy. The October 1995 revisions to DOD’s policy on the use of government aircraft and air travel include a section on helicopter travel. The new policy states that “rotary wing aircraft may be used only when cost favorable as compared to ground transportation, or when the use of ground transportation would have a significant adverse impact on the ability of the senior official to effectively accomplish the purpose of the travel.” We believe that this change in policy should result in fewer helicopter trips between the Pentagon and Andrews Air Force Base, as well as other nearby destinations. At the time of our June report, civilian agencies had over 1,500 aircraft that cost about $1 billion a year to operate. 
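For general comparison, the round-trip cost figures above work out as follows. The reported numbers come from the testimony; deriving the round-trip cost from an implied hourly rate is our assumption about how the Army arrived at its estimate:

```python
# Rough reconstruction of the Army UH-1H cost comparison in the testimony.
one_way_minutes = 24    # reported flying time, Andrews AFB to the Pentagon
one_way_cost = 185.0    # reported one-way cost in dollars
car_cost = 9.0          # estimated cost of the same trip by car

# Implied hourly rate, applied to the ~1-hour round trip the testimony cites.
hourly_rate = one_way_cost / (one_way_minutes / 60)   # roughly $462 per hour
round_trip_cost = hourly_rate * 1.0                   # about $460, as reported

assert abs(round_trip_cost - 460) < 10
assert round_trip_cost - car_cost > 400   # "over $400 more than ... by car"
```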
The civilian agency inventory includes many different types of aircraft, such as helicopters, special-purpose aircraft for fire-fighting and meteorological research, and specially configured aircraft for research and development and program support. However, only 19 are routinely used for senior-level officials’ travel. These 19 aircraft cost about $24 million a year to operate. The operating costs reflect aircraft that are owned, leased, lease/purchased, and loaned between civilian agencies. For most agencies, the operating costs include those related to technical, mission-critical aircraft that are not used for administrative purposes. We also reviewed the National Aeronautics and Space Administration and Coast Guard senior officials’ use of aircraft and found that, although the use of such aircraft was infrequent, when these aircraft are used, many of the destinations were served by commercial airlines with government contracts. Inspector General reports indicate that agencies were not adequately justifying the need for aircraft acquisitions and that agencies’ cost comparisons with commercial service were not complete or accurate. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have. 
GAO discussed the use of military helicopters and other government aircraft to transport White House staff and senior-level military and civilian officials. GAO noted that: (1) White House staff members had flown in military helicopters 14 times from January 21, 1993 to May 24, 1994 without the accompaniment of the President, Vice President, First Lady, Vice-President's wife, or Heads of State; (2) Department of Defense (DOD) policy states that the military services' operational support airlift (OSA) inventory of fixed-wing aircraft should be based strictly on wartime requirements, but DOD has not provided guidance on how the services should count their OSA aircraft or determine their wartime requirements; (3) the April 1995 OSA inventory of 520 fixed wing aircraft exceeds the Air Force's wartime requirements; (4) the military helicopters located in the Washington, D.C. area are not justified based on OSA wartime requirements; (5) the most frequent flight for DOD senior officials is to or from Andrews Air Force Base, MD; (6) in response to GAO recommendations, the Joint Chiefs of Staff has recommended a reduction in the number of OSA aircraft and DOD has strengthened the policy governing the use of OSA aircraft by senior-level travelers; and (7) only 19 of 1,500 aircraft operated by civilian agencies are used to routinely transport senior-level officials.
OPS, within the Department of Transportation’s Research and Special Programs Administration (RSPA), administers the national regulatory program to ensure the safe transportation of natural gas and hazardous liquids by pipeline. The office attempts to ensure the safe operation of pipelines through regulation, national consensus standards, research, education (e.g., to prevent excavation-related damage), oversight of the industry through inspections, and enforcement when safety problems are found. The office uses a variety of enforcement tools, such as compliance orders and corrective action orders that require pipeline operators to correct safety violations, notices of amendment to remedy deficiencies in operators’ procedures, administrative actions to address minor safety problems, and civil penalties. OPS is a small federal agency. In fiscal year 2003, OPS employed about 150 people, about half of whom were pipeline inspectors. Before imposing a civil penalty on a pipeline operator, OPS issues a notice of probable violation that documents the alleged violation and a notice of proposed penalty that identifies the proposed civil penalty amount. Failure by an operator to inspect a pipeline for leaks or unsafe conditions is an example of a violation that may lead to a civil penalty. OPS then allows the operator to present evidence either in writing or at an informal hearing. Attorneys from RSPA’s Office of Chief Counsel preside over these hearings. Following the operator’s presentation, the civil penalty may be affirmed, reduced, or withdrawn. If the hearing officer determines that a violation did occur, the Office of Chief Counsel issues a final order that requires the operator to correct the safety violation (if a correction is needed) and pay the penalty (called the “assessed penalty”). The operator has 20 days after the final order is issued to pay the penalty. The Federal Aviation Administration (FAA) collects civil penalties for OPS. 
From 1992 through 2002, federal law allowed OPS to assess up to $25,000 for each day a violation continued, not to exceed $500,000 for any related series of violations. In December 2002, the Pipeline Safety Improvement Act increased these amounts to $100,000 and $1 million, respectively. The effectiveness of OPS’s enforcement strategy cannot be determined because OPS has not incorporated three key elements of effective program management—clear performance goals for the enforcement program, a fully defined strategy for achieving these goals, and performance measures linked to the goals that would allow an assessment of the enforcement strategy’s impact on pipeline safety. OPS’s enforcement strategy has undergone significant changes in the last 5 years. Before 2000, the agency emphasized partnering with the pipeline industry to improve pipeline safety rather than punishing noncompliance. In 2000, in response to concerns that its enforcement was weak and ineffective, the agency decided to institute a “tough but fair” enforcement approach and to make greater use of all its enforcement tools, including larger and more frequent civil penalties. In 2001, to further strengthen its enforcement, OPS began issuing more corrective action orders requiring operators to address safety problems that had led or could lead to pipeline accidents. In 2002, OPS created a new Enforcement Office to focus more on enforcement and help ensure consistency in enforcement decisions. However, this new office is not yet fully staffed, and key positions remain vacant. In 2002, OPS began to enforce its new integrity management and operator qualification standards in addition to its minimum safety standards. Initially, while operators were gaining experience with the new, complex integrity management standards, OPS primarily used notices of amendment, which require improvements in procedures, rather than stronger enforcement actions. 
Now that operators have this experience, OPS has begun to make greater use of civil penalties in enforcing these standards. OPS has also recently begun to reengineer its enforcement program. Efforts are under way to develop a new enforcement policy and guidelines, develop a streamlined process for handling enforcement cases, modernize and integrate the agency’s inspection and enforcement databases, and hire additional enforcement staff. However, as I will now discuss, OPS has not put in place key elements of effective management that would allow it to determine the impact of its evolving enforcement program on pipeline safety. Although OPS has overall performance goals, it has not established specific goals for its enforcement program. According to OPS officials, the agency’s enforcement program is designed to help achieve the agency’s overall performance goals of (1) reducing the number of pipeline accidents by 5 percent annually and (2) reducing the amount of hazardous liquid spills by 6 percent annually. Other agency efforts—including the development of a risk-based approach to finding and addressing significant threats to pipeline safety and of education to prevent excavation-related damage to pipelines—are also designed to help achieve these goals. OPS’s overall performance goals are useful because they identify the end outcomes, or ultimate results, that OPS seeks to achieve through all its efforts. However, OPS has not established performance goals that identify the intermediate outcomes, or direct results, that OPS seeks to achieve through its enforcement program. Intermediate outcomes show progress toward achieving end outcomes. For example, enforcement actions can result in improvements in pipeline operators’ safety performance—an intermediate outcome that can then result in the end outcome of fewer pipeline accidents and spills. OPS is considering establishing a goal to reduce the time it takes the agency to issue final enforcement actions. 
While such a goal could help OPS improve the management of the enforcement program, it does not reflect the various intermediate outcomes the agency hopes to achieve through enforcement. Without clear goals for the enforcement program that specify intended intermediate outcomes, agency staff and external stakeholders may not be aware of what direct results OPS is seeking to achieve or how enforcement efforts contribute to pipeline safety. OPS has not fully defined its strategy for using enforcement to achieve its overall performance goals. According to OPS officials, the agency’s increased use of civil penalties and corrective action orders reflects a major change in its enforcement strategy. However, although OPS began to implement these changes in 2000, it has not yet developed a policy that defines this new, more aggressive enforcement strategy or describes how the strategy will contribute to the achievement of the agency’s performance goals. In addition, OPS does not have up-to-date, detailed internal guidelines on the use of its enforcement tools that reflect its current strategy. Furthermore, although OPS began enforcing its integrity management standards in 2002 and received greater enforcement authority under the 2002 pipeline safety act, it does not yet have guidelines in place for enforcing these standards or for implementing the new authority provided by the act. According to agency officials, OPS management communicates enforcement priorities and ensures consistency in enforcement decisions through frequent internal meetings and detailed inspection protocols and guidance. Agency officials recognize the need to develop an enforcement policy and up-to-date detailed enforcement guidelines and have been working to do so. To date, the agency has completed an initial set of enforcement guidelines for its operator qualification standards and has developed other draft guidelines. 
However, because of the complexity of the task, agency officials do not expect that the new enforcement policy and remaining guidelines will be finalized until sometime in 2005. The development of an enforcement policy and guidelines should help define OPS’s enforcement strategy; however, it is not clear whether this effort will link OPS’s enforcement strategy with intermediate outcomes, since agency officials have not established performance goals specifically for their enforcement efforts. We have reported that such a link is important. According to OPS officials, the agency currently uses three performance measures and is considering three additional measures to determine the effectiveness of its enforcement activities and other oversight efforts. (See table 1.) The three current measures provide useful information about the agency’s overall efforts to improve pipeline safety, but do not clearly indicate the effectiveness of OPS’s enforcement strategy because they do not measure the intermediate outcomes of enforcement actions that can contribute to pipeline safety, such as improved compliance. The three measures that OPS is considering could provide more information on the intermediate outcomes of the agency’s enforcement strategy, such as the frequency of repeat violations and the number of repairs made in response to corrective action orders, as well as other aspects of program performance, such as the timeliness of enforcement actions. We have found that agencies that are successful in measuring performance strive to establish measures that demonstrate results, address important aspects of program performance, and provide useful information for decision-making. 
While OPS’s new measures may produce better information on the performance of its enforcement program than is currently available, OPS has not adopted key practices for achieving these characteristics of successful performance measurement systems:

Measures should demonstrate results (outcomes) that are directly linked to program goals. Measures of program results can be used to hold agencies accountable for the performance of their programs and can facilitate congressional oversight. If OPS does not set clear goals that identify the desired results (intermediate outcomes) of enforcement, it may not choose the most appropriate performance measures. OPS officials acknowledge the importance of developing such goals and related measures but emphasize that the diversity of pipeline operations and the complexity of OPS’s regulations make this a challenging task.

Measures should address important aspects of program performance and take priorities into account. An agency official told us that a key factor in choosing final measures would be the availability of supporting data. However, the most essential measures may require the development of new data. For example, OPS has developed databases that will track the status of safety issues identified in integrity management and operator qualification inspections, but it cannot centrally track the status of safety issues identified in enforcing its minimum safety standards. Agency officials told us that they are considering how to add this capability as part of an effort to modernize and integrate their inspection and enforcement databases.

Measures should provide useful information for decision-making, including adjusting policies and priorities. OPS uses its current measures of enforcement performance in a number of ways, including monitoring pipeline operators’ safety performance and planning inspections. While these uses are important, they are of limited help to OPS in making decisions about its enforcement strategy. 
OPS has acknowledged that it has not used performance measurement information in making decisions about its enforcement strategy. OPS has made progress in this area by identifying possible new measures of enforcement results (outcomes) and other aspects of program performance, such as indicators of the timeliness of enforcement actions, that may prove more useful for managing the enforcement program. In 2000, in response to criticism that its enforcement activities were weak and ineffective, OPS increased both the number and the size of the civil monetary penalties it assessed. Pipeline safety stakeholders expressed differing opinions about whether OPS’s civil penalties are effective in deterring noncompliance with pipeline safety regulations. OPS assessed more civil penalties during the past 4 years under its current “tough but fair” enforcement approach than it did in the previous 5 years, when it took a more lenient enforcement approach. (See fig. 2.) From 2000 through 2003, OPS assessed 88 civil penalties (22 per year on average) compared with 70 civil penalties from 1995 through 1999 (about 14 per year on average). For the first 5 months of 2004, OPS proposed 38 civil penalties. While the recent increase in the number and the size of civil penalties may reflect OPS’s new “tough but fair” enforcement approach, other factors, such as more severe violations, may be contributing to the increase as well. Overall, OPS does not use civil penalties extensively. Civil penalties represent about 14 percent (216 out of 1,530) of all enforcement actions taken over the past 10 years. OPS makes more extensive use of other types of enforcement actions that require pipeline operators to fix unsafe conditions and improve inadequate procedures, among other things. In contrast, civil penalties represent monetary sanctions for violating safety regulations but do not require safety improvements. 
OPS may increase its use of civil penalties as it begins to use them to a greater degree for violations of its integrity management standards. The average size of the civil penalties has increased. For example, from 1995 through 1999, the average assessed civil penalty was about $18,000. From 2000 through 2003, the average assessed civil penalty increased by 62 percent to about $29,000. Assessed penalty amounts ranged from $500 to $400,000. In some instances, OPS reduces proposed civil penalties when it issues its final order. We found that penalties were reduced 31 percent of the time during the 10-year period covered by our work (66 of 216 instances). These penalties were reduced by about 37 percent (from a total of $2.8 million to $1.7 million). The dollar difference between the proposed and the assessed penalties would be over three times as large had our analysis included the extraordinarily large penalty for the Bellingham, Washington, incident. For this case, OPS proposed a $3.05 million penalty and had assessed $250,000 as of May 2004. If we include this penalty, then over this period OPS reduced total proposed penalties by about two-thirds, from about $5.8 million to about $2 million. OPS’s database does not provide summary information on why penalties are reduced. According to an OPS official, the agency reduces penalties when an operator presents evidence that the OPS inspector’s finding is weak or wrong or when the pipeline’s ownership changes during the period between the proposed and the assessed penalty. It was not practical for us to gather information on a large number of penalties that were reduced, but we did review several to determine the reasons for the reductions. OPS reduced one of the civil penalties we reviewed because the operator provided evidence that OPS inspectors had miscounted the number of pipeline valves that OPS said the operator had not inspected. 
Since the violation was not as severe as the OPS inspector had stated, OPS reduced the proposed penalty from $177,000 to $67,000. Of the 216 penalties that OPS assessed from 1994 through 2003, pipeline operators paid the full amount 93 percent of the time (200 instances) and a reduced amount 1 percent of the time (2 instances). (See fig. 3.) Fourteen penalties (6 percent) remain unpaid, totaling about $837,000 (or 18 percent of penalty amounts). In two instances, operators paid reduced amounts. We followed up on one of these assessed penalties. In this case, the operator requested that OPS reconsider the assessed civil penalty and OPS reduced it from $5,000 to $3,000 because the operator had a history of cooperation and OPS wanted to encourage future cooperation. For the 14 unpaid penalties, neither FAA’s nor OPS’s data show why the penalties have not been collected. We expect to present a fuller discussion of the reasons for these unpaid penalties and OPS’s and FAA’s management controls over the collection of penalties when we report to this and other committees next month. Although OPS has increased both the number and the size of the civil penalties it has imposed, the effect of this change on deterring noncompliance with safety regulations, if any, is not clear. The stakeholders we spoke with expressed differing views on whether the civil penalties deter noncompliance. The pipeline industry officials we contacted believed that, to a certain extent, OPS’s civil penalties encourage pipeline operators to comply with pipeline safety regulations because they view all of OPS’s enforcement actions as deterrents to noncompliance. However, some industry officials said that OPS’s enforcement actions are not their primary motivation for safety. 
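The payment figures above tally consistently (all numbers are from the testimony):

```python
# Disposition of the 216 civil penalties assessed from 1994 through 2003.
paid_in_full, paid_reduced, unpaid = 200, 2, 14
total = paid_in_full + paid_reduced + unpaid   # the 216 assessed penalties

assert total == 216
assert round(100 * paid_in_full / total) == 93   # "93 percent of the time"
assert round(100 * paid_reduced / total) == 1    # "1 percent of the time"
assert round(100 * unpaid / total) == 6          # "6 percent" remain unpaid
```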
Instead, they said that pipeline operators are motivated to operate safely because they need to avoid any type of accident, incident, or OPS enforcement action that impedes the flow of products through the pipeline and hinders their ability to provide good service to their customers. Pipeline industry officials also said that they want to operate safely and avoid pipeline accidents because accidents generate negative publicity and may result in costly private litigation against the operator. Most of the interstate agents, representatives of their associations, and insurance company officials expressed views similar to those of the pipeline industry officials, saying that they believe civil penalties deter operators’ noncompliance with regulations to a certain extent. However, a few disagreed with this point of view. For example, the state agency representatives and a local government official said that OPS’s civil penalties are too small to be deterrents. Pipeline safety advocacy groups that we talked to also said that the civil penalty amounts OPS imposes are too small to have any deterrent effect on pipeline operators. As discussed earlier, for 2000 through 2003, the average assessed penalty was about $29,000. According to economic literature on deterrence, pipeline operators may be deterred if they expect a sanction, such as a civil penalty, to exceed any benefits of noncompliance. Such benefits could, in some cases, be lower operating costs. The literature also recognizes that the negative consequences of noncompliance—such as those stemming from lawsuits, bad publicity, and the value of the product lost from accidents—can deter noncompliance along with regulatory agency oversight. Thus, for example, the expected costs of a legal settlement could overshadow the lower operating costs expected from noncompliance, and noncompliance might be deterred. Mr. Chairman, this concludes my prepared statement. 
We expect to report more fully on these and other issues when we complete our work next month. We also anticipate making recommendations to improve OPS’s ability to demonstrate the effectiveness of its enforcement strategy and to improve OPS’s and FAA’s management controls over the collection of civil penalties. I would be pleased to respond to any questions that you or Members of the Subcommittee might have. For information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or siggerudk@gao.gov. Individuals making key contributions to this testimony are Jennifer Clayborne, Judy Guilliams-Tapia, Bonnie Pignatiello Leer, Gail Marnik, James Ratzenberger, and Gregory Wilmoth. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Interstate pipelines carrying natural gas and hazardous liquids (such as petroleum products) are safer to the public than other modes of freight transportation. The Office of Pipeline Safety (OPS), the federal agency that administers the national regulatory program to ensure safe pipeline transportation, has been undertaking a broad range of activities to make pipeline transportation safer. However, the number of serious accidents--those involving deaths, injuries, and property damage of $50,000 or more--has not fallen. Among other things, OPS takes enforcement action against pipeline operators when safety problems are found. OPS has several enforcement tools to require the correction of safety violations. It can also assess monetary sanctions (civil penalties). This testimony is based on ongoing work for the House Committee on Transportation and Infrastructure and for other committees, as required by the Pipeline Safety Improvement Act of 2002. The testimony provides preliminary results on (1) the effectiveness of OPS's enforcement strategy and (2) OPS's assessment of civil penalties. The effectiveness of OPS's enforcement strategy cannot be determined because the agency has not incorporated three key elements of effective program management--clear program goals, a well-defined strategy for achieving goals, and performance measures that are linked to program goals. Without these key elements, the agency cannot determine whether recent and planned changes in its strategy will have the desired effects on pipeline safety. Over the past several years, OPS has focused primarily on other efforts--such as developing a new risk-based regulatory approach--that it believes will change the safety culture of the industry. But, OPS also became more aggressive in enforcing its regulations, and now plans to further strengthen the management of its enforcement program. 
In particular, OPS is developing an enforcement policy that will help define its enforcement strategy and has taken initial steps toward identifying new performance measures. However, OPS does not plan to finalize the policy until 2005 and has not adopted key practices for achieving successful performance measurement systems, such as linking measures to goals. OPS increased both the number and the size of the civil penalties it assessed against pipeline operators over the last 4 years (2000-2003) following a decision to be "tough but fair" in assessing penalties. OPS assessed an average of 22 penalties per year during this period, compared with an average of 14 per year for the previous 5 years (1995-1999), a period of more lenient "partnering" with industry. In addition, the average penalty increased from $18,000 to $29,000 over the two periods. About 94 percent of the 216 penalties levied from 1994 through 2003 have been paid. The civil penalty is one of several actions OPS can take when it finds a violation, and these penalties represent about 14 percent of all enforcement actions over the past 10 years. While OPS has increased the number and the size of its civil penalties, stakeholders--including industry, state, and insurance company officials and public advocacy groups--expressed differing views on whether these penalties deter noncompliance with safety regulations. Some, such as pipeline operators, thought that any penalty was a deterrent if it kept the pipeline operator in the public eye, while others, such as safety advocates, told us that the penalties were too small to be effective sanctions.
In our January 2016 report on data standards, we noted that by the end of August 2015 OMB and Treasury had issued a list of 57 standardized data elements. The DATA Act requires that these data standards—to the extent reasonable and practicable—incorporate widely accepted common data elements, such as those developed by international standards-setting bodies. Incorporating leading practices from international standards organizations offers one way to help reduce uncertainty and confusion when reporting and interpreting data standards. Well-crafted data element definitions are needed to ensure that a data standard produces consistent and comparable information. In our January 2016 report, we noted that these standardized data element definitions largely followed leading practices. We compared the standardized data elements against leading practices promulgated by the International Organization for Standardization (ISO) and found that 12 of the 57 DATA Act data element definitions issued in August 2015 met all of the ISO leading practices and each of the remaining 45 definitions met no fewer than 9 of the 13 leading practices, meaning that even the lowest-rated data elements in our review adhered to almost 70 percent of the ISO leading practices. While this demonstrates good progress, it will be important to clarify data elements that did not adhere to leading practices to reduce the risk that agencies inconsistently apply the definitions. Imprecise or ambiguous data element definitions may allow for more than one interpretation by agency staff collecting, compiling, and reporting on these data and thus could result in inconsistent and potentially misleading reporting when aggregated across government or compared between agencies. For example, OMB and Treasury issued four data elements that collectively represent the concept of Primary Place of Performance. 
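The adherence figure cited above can be checked directly. A minimal sketch, assuming the 13 ISO leading practices used in our review:

```python
# The lowest-rated data elements met at least 9 of the 13 ISO leading
# practices identified in our review.
total_practices = 13
minimum_met = 9

adherence = minimum_met / total_practices
print(f"{adherence:.0%}")  # 69% -- "almost 70 percent" of the leading practices
```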
The location or place of performance of specific grant, contract, or other federal spending has long been a data element collected by agencies. However, agencies have taken varied approaches to reporting place of performance information—sometimes describing where the funded activity takes place, sometimes the recipient of the product or activity, or sometimes the location of the administrative headquarters of the provider or a sub-entity. We reported that although the definitions standardize some of the mechanics of what Primary Place of Performance covers, such as city, county, state, and ZIP+4 codes, the definition still leaves room for differing interpretations that could result in agencies capturing and reporting this information differently. In another example highlighted in our January report, we noted that OMB and Treasury standardized the definition of Program Activity as required by the DATA Act. This definition adhered to all 13 ISO leading practices, but we still had concerns regarding the use of this data element. Specifically, OMB’s and Treasury’s guidance on Program Activity acknowledged that program activities can change from one year to the next and that Program Activity does not necessarily match “programs” as specified in GPRAMA or the Catalog of Federal Domestic Assistance. In responding to this guidance, officials at the U.S. Department of Agriculture said that when program activities change it is difficult to compare spending over time, underscoring the need for more guidance to ensure that the public can accurately interpret Program Activity compared to the other common representations of federal programs. We also raised concerns about OMB’s efforts to merge DATA Act requirements with certain GPRAMA requirements. GPRAMA requires the Office of Management and Budget (OMB) to make information available about each federal program. 
A stated purpose of the DATA Act is to link federal contract, loan, and grant spending information to federal programs to allow taxpayers and policy makers to track federal spending. However, we have reported that initial efforts to develop the program inventory resulted in inconsistent definitions and significant information gaps. As a result, the inventory does not provide useful information for decision making. As we have previously testified before this committee, OMB needs to accelerate efforts to determine how best to merge DATA Act purposes and requirements with the GPRAMA requirement to produce a federal inventory of programs that meets congressional expectations that federal agencies provide useful and valid information for decision making on all federal government programs. To help address this issue, we have initiated new work to develop a framework that can inform OMB’s and agencies’ future efforts to develop a viable and useful federal program inventory. To help ensure that agencies report consistent and comparable data, we recommended that OMB and Treasury provide agencies with additional guidance that addresses potential clarity, consistency, and quality issues with identified data element definitions. While OMB generally concurred with our recommendation, it took the position that the requirement to standardize data elements applied only to the 11 account level data elements standardized in May 2015, and efforts to standardize the remaining 46 data elements were conducted pursuant to a larger policy goal to improve the quality of federal spending data reported on USAspending.gov. However, for reasons put forth in our January 2016 report, we concluded that both the statutory language and the purposes of the DATA Act support the interpretation that OMB and Treasury are required to establish data standards for award and awardee information in addition to the account level information. 
Without data standards for award and awardee information, the inconsistent and incomparable reporting that Congress sought to remedy through the DATA Act will continue. In December 2015, OMB and Treasury posted a data dictionary on the Federal Spending Transparency website that provides additional information about how each data element is defined, the type of data to be reported (i.e., integer, alphanumeric, numeric), and how data elements relate to each other. This data dictionary also includes new data elements, which OMB said encompass additional detail required for or consistent with DATA Act reporting, such as finer breakdowns of reported values for Obligations and Outlays. Although this new guidance improves the clarity of the data definitions by providing additional context and detail, we are still concerned about both the lack of clarity with certain data definitions and the addition of new data elements that agencies are required to report. In addition, OMB and Treasury still have not addressed data quality issues with some data elements. Our prior work identified data quality issues with certain data elements, such as Award Description, which OMB and Treasury defined as “a brief description of the purpose of the award.” In our previous work on the data quality of USAspending.gov, we identified challenges with this data element, citing the wide range of information that agencies report as the description or purpose. Agencies routinely provided information for this data element using shorthand descriptions, acronyms, or terminology that could only be understood by officials at the agency that made the award. As we reported in 2010 and 2014, this lack of clarity can be traced, in part, to guidance which is unclear or leaves room for multiple interpretations. 
The lack of basic clarity for certain data elements could make it difficult for people outside the agency to understand the data and would limit the ability to meaningfully aggregate or compare these data across the federal government. We made recommendations to OMB in 2010 and 2014 and to Treasury in 2014 to improve the accuracy and completeness of Award Description, which have yet to be addressed. At that time, Treasury officials neither agreed nor disagreed with our recommendations, while OMB staff generally agreed with the recommendations stating that they were consistent with actions required under the DATA Act. OMB and Treasury issued initial guidance to federal agencies in May 2015 on meeting the reporting requirements of the Federal Funding Accountability and Transparency Act of 2006 (FFATA), as amended by the DATA Act, in accordance with the new data standards. OMB and Treasury also issued a DATA Act Implementation Playbook and subsequent guidance which, among other things, specified eight key steps for agencies to fulfill their DATA Act requirements. In our January 2016 report we raised concerns about the completeness and timeliness of the technical guidance OMB and Treasury developed to facilitate agency data submission. Treasury has issued several iterative versions of the technical schema that describes the standard format for reporting data elements including their description, type, and length, but has not made available a finalized schema that would provide agencies with a stable base from which to develop data submission plans. OMB’s and Treasury’s DATA Act Implementation Playbook outlines eight specific steps and timelines for implementing the DATA Act at the agency level. However, the finalized guidance that would help agencies carry out these steps has not been provided in time to coincide with when agencies were expected to carry out key activities outlined in the DATA Act Implementation Playbook. 
Given the importance of having a largely stable schema to serve as the foundation for developing subsequent technical processes at the agency level, any significant delay in releasing finalized guidance will likely delay implementation of the act. Accordingly, we recommended that OMB and Treasury take steps to align the release of finalized technical guidance, including the DATA Act schema and broker, to the implementation time frames specified in the DATA Act Implementation Playbook. Treasury officials generally concurred with our recommendation, noting that they recognize the importance of providing agencies with timely technical guidance and reporting submission specifications. Treasury issued its updated schema, now referred to as the DATA Act Information Model Schema version 0.7, on December 31, 2015; this version includes schema diagrams depicting how the data elements fit together in context. The new version builds upon previous work and incorporates additional A-11 data elements into the schema. In addition, it increases the level of detail required, which we believe may have consequences for timely implementation by federal agencies. Finally, while many of these additional data elements are derivatives of data elements required under FFATA or Circular No. A-11, or are new data elements required under the DATA Act, they could substantially increase the amount of data agencies need to submit. Although schema version 0.7 provides additional context for reporting using the new data standards, we continue to have concerns about the evolving nature of the technical specifications provided to agencies. For example, the previous version of the schema provided information on the allowed values that could be entered for each data element, such as DC for the District of Columbia. Version 0.7 of the schema removed information on allowed values, which could lead to inconsistent and incomparable reporting. However, Treasury officials told us that they have developed other methods to enforce these values. 
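The allowed-values issue described above is, at bottom, a question of enumeration checking. The sketch below illustrates the idea; the function name and the value list (beyond the DC example cited in this statement) are hypothetical stand-ins, not the schema's actual contents:

```python
# A minimal sketch of allowed-value validation for a reported data element.
# ALLOWED_STATES is an illustrative subset, not the schema's full list.
ALLOWED_STATES = {"DC", "MD", "VA"}

def validate_state(value):
    """Reject values outside the enumerated list, so every agency reports
    the same code (e.g., 'DC' for the District of Columbia)."""
    if value not in ALLOWED_STATES:
        raise ValueError(f"'{value}' is not an allowed state code")
    return value

print(validate_state("DC"))  # DC
# validate_state("District of Columbia") would raise ValueError: free-text
# variants are exactly the inconsistency that enumerated values prevent.
```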
In responding to a draft of this statement, Treasury officials told us they provided final draft technical guidance to agencies for comment. In addition, they provided a copy of this guidance to us, which we will review in future work. OMB and Treasury have issued data standards and provided guidance and feedback to federal agencies on their DATA Act implementation plans. However, our ongoing work in this area indicates that challenges remain and will need to be addressed to successfully implement the DATA Act government-wide. In May 2015, OMB issued Memorandum M-15-12, which, among other things, directed agencies to develop implementation plans. OMB issued additional guidance to the agencies detailing what should be included in their implementation plans and asking agencies to describe any potential difficulties or foreseeable challenges, such as competing statutory, regulatory, or policy priorities, that could hinder their implementation of the DATA Act. This guidance also encouraged agencies to provide suggestions to mitigate the challenges they foresee, help to manage costs, and support investment planning. Our ongoing review of the DATA Act implementation plans from the 24 Chief Financial Officers Act agencies as well as 18 smaller federal agencies, dated between August 2015 and January 2016, provides insight into the challenges agencies face as well as the mitigation strategies they suggest to address them. Based on our preliminary results, we believe the challenges and mitigation strategies reported provide important insight as to the level of effort, communication, collaboration, and resources needed to successfully implement the DATA Act government-wide. From these preliminary results, we identified seven overarching categories of challenges that agencies reported to implementing the DATA Act effectively and efficiently. (See table 1.) 
The preliminary results of our review of the 42 agency implementation plans we received indicate that 31 agencies reported specific challenges, some of which may overlap with multiple categories. Figure 1 shows that agencies reported challenges most frequently in the following categories: competing priorities, resources, and systems integration. Competing priorities: Of the 31 agencies reporting challenges, 23 reported competing statutory, regulatory, or policy priorities that could potentially affect DATA Act implementation. One competing priority certain agencies reported is meeting requirements of OMB Circular No. A-11, which provides agencies with guidance on the budget process, including how to prepare and submit required materials for budget preparation and execution. For example, one agency noted that the different timelines for OMB Circular No. A-11 requirements on “object class” and “program activity” reporting create competing priorities both for the agency’s software vendors and for the agency’s internal resources. The agency noted that staff with the knowledge needed to understand and comment on new DATA Act data element definitions are the same staff required to work on the new Circular No. A-11 reporting requirements (e.g., technical revisions and clarifications). The agency added that its ability to engage effectively on the DATA Act requirements while working to implement the Circular No. A-11 changes is severely inhibited. Another competing priority some agencies reported is the data requirement set forth in the Federal Acquisition Regulation (FAR). Specifically, in October 2014 the FAR was amended to standardize the format of the Procurement Instrument Identifier (PIID) that must be in effect for new awards issued after October 2017. The PIID must be used to identify all solicitation and contract actions and to ensure that each PIID used is unique government-wide for at least 20 years from the date of the contract award. 
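The government-wide uniqueness property the FAR requires of PIIDs can be illustrated with a minimal sketch; the in-memory registry and the sample identifier below are hypothetical stand-ins, not the actual FAR format or any agency's system:

```python
# A minimal sketch of the uniqueness check a PIID must satisfy: each
# identifier must never collide with one issued before. The set here is an
# illustrative stand-in for whatever shared lookup agencies actually use.
issued_piids = set()

def register_piid(piid):
    """Record a new PIID, rejecting any value that has already been issued."""
    if piid in issued_piids:
        raise ValueError(f"PIID {piid} has already been issued")
    issued_piids.add(piid)
    return piid

register_piid("ABCD17C0001")  # hypothetical identifier, accepted on first use
# Calling register_piid("ABCD17C0001") again would raise ValueError,
# enforcing the one-identifier-per-award rule.
```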
Some agencies reported they were concerned about the amount of effort involved in also implementing the PIID for the DATA Act. For example, one agency noted that it had implemented a standard PIID and developed processes and systems to handle the new identifiers to meet the FAR requirements, but the extent of any changes necessary to implement the PIID for the DATA Act, which also requires a unique identifier, is unknown. Another agency noted that this initiative and other agency initiatives will compete for many of the same resources, including subject matter experts. Resources: Limited resources are another concern reported by 23 agencies in their implementation plans. Agencies frequently identified funding and human resources as needs for efficient and effective implementation. For example, one agency noted that the execution of its implementation plan is highly dependent on receiving the requisite funding and human resources as estimated in the plan, and the agency added that delays in securing additional resources for fiscal years 2016, 2017, and beyond will have a direct effect on its DATA Act implementation and schedule. Similarly, another agency pointed out that having insufficient funds for contractor support, managing the overall implementation, testing interfaces between systems, and addressing data mapping issues will pose a challenge for its entities and systems. Some agencies also reported that human resources are key to successful DATA Act implementation. One agency reported it is concerned about the adequacy of its human resources, which could impair its ability to go beyond basic compliance with the DATA Act, and added that this may prevent the agency from being able to address increased public inquiry and scrutiny of its data and operations. Specifically, the agency reported that resources are required for project management, data analysis, analytic expertise, data management, and training for financial inquiry and analysis. 
The need for subject matter experts, such as data architects, was raised as a challenge by another agency. Furthermore, one agency noted that the need to share limited resources for DATA Act implementation with other operational activities presents a significant challenge for its implementation strategy. Systems integration: Systems integration is another pervasive challenge reported by 23 agencies in their implementation plans. Some agencies noted concerns about the ability of their systems to obtain and easily submit to Treasury all the data elements needed to implement the DATA Act, including the requirement to establish a unique award ID. For example, one agency reported that it does not have a systematic link to pull data from multiple systems by a unique award ID and does not have an automated grants management system; instead, it reports grants data manually using spreadsheets. This agency noted that it needs to replace its financial system and modify supporting systems to fully comply with the DATA Act. Another agency noted that five of the required data elements are not included in its procurement and financial assistance system. As a result, the agency noted that it will have to modify its system’s software to include these elements in order to comply with the DATA Act. These statements from agency implementation plans indicate that, given the vast number and complexity of systems government-wide that are potentially involved in DATA Act implementation efforts, agencies may face a variety of challenges related to systems integration. Guidance: In their implementation plans, 19 agencies reported the lack of adequate guidance as a challenge to implementing the DATA Act. 
Several agencies noted that they cannot fully determine how their policies, business processes, and systems should be modified to support DATA Act reporting because in their view, OMB and Treasury have not yet issued complete, detailed, finalized DATA Act implementation guidance on required data elements, technical schema, and other key policies. According to these agencies, issuance of such guidance is part of the critical path to meeting their implementation goals. For example, one agency noted that its implementation plan is highly dependent upon Treasury’s development of the technical schema for DATA Act implementation. The agency also reported that any delays or changes to Treasury requirements in the technical schema will significantly affect the agency’s solution design, development and testing schedule, and cost estimate. Another agency included a list of unanswered questions in its implementation plan that it wanted OMB to address in guidance related to time frames, various technical requirements, level of reporting, linking systems, and tracking and reconciling data. Dependencies: Eighteen agencies reported in their implementation plans that the completion of certain implementation activities is subject to actions or issues that must be addressed by OMB and Treasury in order for the agencies to effectively implement the DATA Act. Some agencies also noted that they were relying on their shared service provider’s implementation of the DATA Act for agency compliance with the act. For example, one agency noted that it will rely on its shared service provider to enhance its system, but funding may be restricted to enhance a system that the agency does not own. Another key dependency noted in one agency’s implementation plan is the need for Treasury to provide detailed information or requirements regarding the data formats, validation module, error correction and resubmission process, and testing schedule. 
Without this information, the agency noted that it cannot provide complete cost estimates, determine changes to system and business processes, and determine the level of effort and resources required to develop the data submissions. Time frames: In their implementation plans, 16 agencies identified time constraints as a challenge in implementing the DATA Act. For example, one agency noted that the time frame indicated in the original guidance, coupled with the complexity of the known issues, makes it highly unlikely that its DATA Act initiative will stay on target. The agency also noted that there is no mitigation strategy for meeting the expected deadline on all aspects of the reporting because even if all tasks were worked concurrently, the schedule is not attainable for the agency. Another agency noted that the current reporting of award and awardee information to USASpending.gov is in accordance with FFATA. This information is reported within 3 days after the award is made for contracts and bimonthly for financial assistance, while the DATA Act requires reporting of account-level information monthly where practicable but not less than quarterly. This agency noted that linking financial information with nonfinancial information that is reported with a different frequency creates a “moving target” and poses a challenge to linking the financial and nonfinancial data. Other challenges: Agencies reported several other challenges in their implementation plans less frequently than the ones listed above. For example, a few agencies reported challenges related to overall policies, procedures, and processes, such as governance, risk management, and training. Some agencies also noted challenges related to the level of detail required for information and data required by the DATA Act that differ from existing financial reporting processes, including the ability to reconcile information and data to sources and official records. 
Finally, agencies reported concern with the quality and integrity of data in underlying agency systems and its effect on DATA Act reporting. Our preliminary results indicate that 26 agencies identified mitigation strategies to address challenges, as suggested by OMB guidance. Some strategies discussed in the agency implementation plans address multiple challenges. Below are some of the more frequently cited and crosscutting mitigation strategies suggested by agencies in their implementation plans to address specific areas of concern. Communication and information sharing: In their implementation plans, some agencies reported the need for frequent communication with OMB, Treasury, shared service providers, vendors, and other agencies in order to keep one another updated on their implementation activities, as well as to share best practices and lessons learned throughout the process. Agencies also suggested that reviewing other agencies’ implementation plans for best practices, common challenges, and solutions would facilitate information sharing. For example, one agency pointed out that, in its view, lines of communication between Treasury and the agencies must be transparent to help ensure the submission of financial data is accurate and the process for submitting it runs smoothly. Another agency noted that it believes collaboration with other agencies to share common concerns will be beneficial. Monitoring and development of guidance: In their implementation plans, agencies also discussed plans to closely monitor DATA Act implementation guidance in order to adapt agency implementation strategies as the guidance changes. For example, one agency noted that it will monitor and evaluate the release of DATA Act guidance as well as data elements and technical schema in order to identify the effect on the project. Another agency noted that it plans to use its established governance structure to immediately facilitate solutions when additional guidance is provided. 
Further, some agencies discussed developing guidance and training materials for internal use. For example, one agency noted that it plans to create a common set of tools by establishing a “project management toolkit” for agency leaders to ensure DATA Act implementation needs are addressed efficiently and effectively. Leveraging existing resources: To effectively use limited resources, some agencies noted in their implementation plans the importance of leveraging available systems and human resources by reassigning staff, using subject matter experts, and multitasking when possible to maximize efficiency. For example, one agency reported that it will leverage senior executive support to make DATA Act implementation a priority and see what resources might be available in the “least expected places,” as well as work on tasks concurrently. In addition, agencies reported the need to update systems to encompass more data elements and streamline reporting. For example, one agency reported that it plans to designate a Chief Data Officer to oversee a multi-tiered review of agency data and implement solutions for consolidating agency data. Overall, our preliminary work indicates that agency implementation plans contain valuable information on a variety of challenges in implementing the DATA Act, including a lack of funding, inadequate guidance, tight time frames, competing priorities, and system integration issues. Agencies reported working closely with internal and external stakeholders to address these challenges as effectively as possible, but also reported that additional support from OMB and Treasury is needed for successful implementation of the DATA Act. In the report that is being issued today, we identified several design challenges involving the development of the Section 5 Pilot, which the DATA Act required OMB to establish. OMB created a two-part pilot that focused on two communities: federal grants and federal contracts (procurement). 
For grants, OMB designated the Department of Health and Human Services (HHS) to serve as its executing agent. On the contracting side, OMB’s Office of Federal Procurement Policy (OFPP) is responsible for leading the procurement portion working with the General Services Administration’s 18F and others. OMB launched a number of pilot-related initiatives in May 2015 and expects to continue activities until at least May 2017. As the executing agent for the grants portion of the pilot, HHS has developed six “test models” that evaluate a variety of approaches to potentially reduce grantee reporting burden, including the development of a data repository for identifying common data elements and forms intended to eliminate duplicative reporting on Consolidated Federal Financial Reports. Detailed descriptions of the objectives and methodologies of each of these six test models can be found in our full report. The DATA Act identifies three specific requirements related to the Section 5 Pilot’s design. Specifically, the pilot must: (1) include data collected during a 12-month reporting cycle; (2) include a diverse group of recipients; and (3) include a combination of federal contracts, grants, and subawards with an aggregate value between $1 billion and $2 billion. We found that if HHS effectively implements its stated plans for the grants portion of the Section 5 Pilot, it is likely that it will address these three requirements. HHS officials told us that they are still determining how to meet the requirement for total award value because they want to ensure the pool of pilot participants is as diverse and large as possible while still being legally compliant. In addition, we found that the design of the grants portion of the pilot partially adhered to leading practices of pilot design. 
We assessed the designs of the grants and procurement portions of the pilot against leading practices that we identified from our prior work and other sources regarding design of a pilot project (see textbox).

Leading Practices for Effective Pilot Design
- Establish well-defined, appropriate, clear, and measurable objectives.
- Clearly articulate an assessment methodology and data gathering strategy that addresses all components of the pilot program and includes key features of a sound plan.
- Identify criteria or standards for identifying lessons about the pilot to inform decisions about scalability and whether, how, and when to integrate pilot activities into overall efforts.
- Develop a detailed data-analysis plan to track the pilot program’s implementation and performance, evaluate the final results of the project, and draw conclusions on whether, how, and when to integrate pilot activities into overall efforts.
- Ensure appropriate two-way stakeholder communication and input at all stages of the pilot project, including design, implementation, data gathering, and assessment.

Our analysis found that five of the six grants test models had clear and measurable objectives. In contrast, five of the six test models did not clearly articulate an assessment methodology. Only one test model had specific details about how potential findings could be generalized beyond the context of the pilot. Furthermore, five of six grants test models provided some level of detail on how HHS plans to evaluate pilot results. Finally, HHS has engaged in two-way stakeholder communications for all six test models and has taken a number of actions to obtain input from grant recipients. We provided our assessment of the design of the grants portion of the pilot to HHS officials, who told us that they generally concurred with our analysis and had updated their plan to address many of our concerns. 
However, at the time we were conducting our audit work, HHS officials said they could not provide us with the revised plan because it was under review by OMB. We have since received an updated version of the HHS plan for implementing the grants portion of the pilot. We plan to fully assess its contents and the extent to which it addresses our concerns in a forthcoming review that will focus on the pilot’s implementation. The procurement portion of the pilot will focus on examining the feasibility of centralizing the reporting of certified payroll. OFPP staff responsible for this portion of the pilot told us they decided to focus on certified payroll reporting because of feedback they received from the procurement community. Toward this end, the Chief Acquisition Officers Council has entered into an interagency agreement with 18F to design a prototype system that would centralize certified payroll data, which it expects to test in summer 2016. This narrow focus on certified payroll stands in contrast to the grants portion of the pilot, where HHS will explore several areas in which grantee reporting burden could be reduced. Based on our review, it is unclear how the design of the procurement portion will address the requirements set forth by section 5 of the act. As a result of design and development delays, OFPP does not expect to be able to collect meaningful and useful data for the procurement portion of the pilot until summer 2016. This is after May 9, 2016, the date by which data collection must begin to allow for a 12-month reporting cycle before the required termination date. Further, we found that OFPP does not have a detailed plan for selecting participants that will result in a diverse group of recipients with awards from multiple programs and agencies. 
OFPP has documented some aspects of its approach for selecting participants in its draft procurement pilot plan and in a Federal Register notice issued on November 24, 2015. For example, the draft plan identifies the Federal Procurement Data System-Next Generation as the mechanism that will be used for identifying which contracts and contractors to include in the pilot, and OFPP staff told us that they intend to cover both large and small industries. While valuable, this information does not clearly convey how the procurement portion of the pilot would specifically contribute to meeting the act’s requirement regarding diversity of participants. In our report being issued today, we recommend that OMB determine and clearly document how the procurement pilot will contribute to these requirements. OMB did not offer a view on this recommendation. In addition, we found that the design of the procurement portion of the pilot did not reflect leading practices for effective pilot design, which would help OMB develop effective recommendations to simplify reporting for contractors. OFPP staff told us that certified payroll reporting was selected as the subject of the pilot because they learned that it was a particular pain point for contractors as a result of various outreach efforts, including a discovery process conducted by 18F to interview contractors, contracting officers, business owners, government employees, and subject-matter experts. However, the draft procurement plan does not provide specifics regarding the particular objectives and hypotheses that will be tested by the pilot. 
OFPP staff stated that, consistent with their view of agile practices, they intend to further refine their approach as 18F develops its prototype and additional work proceeds with the pilot. In addition, the draft plan did not address the issue of scalability necessary to produce recommendations that could be applied government-wide, nor did it indicate how data will be evaluated to draw conclusions. To enable the development of effective recommendations for reducing reporting burden for contractors, our report contains a recommendation that OMB ensure that the procurement portion of the pilot reflects leading practices for pilot design. OMB did not offer a view on this recommendation. In conclusion, almost 2 years into the DATA Act’s implementation, we are faced with a mixed picture. Given its government-wide scope and complexity, effective implementation of the act requires OMB, Treasury, and federal agencies to address a range of complex policy and technical issues. Although progress has been made in several areas, we have identified challenges related to the standardization of data element definitions and the development of a technical schema that, if not addressed, could lead to inconsistent reporting. In their implementation plans, federal agencies have recognized these and other areas of concern, including a lack of funding, inadequate guidance, tight time frames, competing priorities, and system integration issues. Finally, although OMB appears to be on track with the design of the grants portion of the Section 5 Pilot, we are concerned that the design of the procurement portion of the pilot could hinder further effective implementation. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to respond to any questions you may have. Questions about this testimony can be directed to Michelle A. Sager, (202) 512-6806 or sagerm@gao.gov. 
Questions about agencies’ DATA Act implementation plans can be directed to Paula Rascona, (202) 512-9816 or rasconap@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contacts named above, Gary Engel (Managing Director); J. Christopher Mihm (Managing Director); Peter Del Toro (Assistant Director); Michael LaForge (Assistant Director); Kathleen Drennan; Shirley Hwang; Carroll Warfield, Jr.; Aaron Colsher; Charles Jones; Thomas Hackney; and Laura Pacheco made major contributions to this statement. Other key contributors include Mark Canter; Jenny Chanley; Robert Gebhart; Donna Miller; Diane Morris; Carl Ramirez; Andrew J. Stephens; and James Sweetman, Jr. Other members of GAO’s DATA Act Working Group also contributed to the development of this statement.

Recommendations

The Director of OMB, in collaboration with the members of the Government Accountability and Transparency Board, should develop a plan to implement comprehensive transparency reform, including a long-term timeline and requirements for data standards, such as establishing a uniform award identification system across the federal government. 1. To improve the completeness and accuracy of data submissions to the USASpending.gov website, the Director of OMB, in collaboration with Treasury’s Fiscal Service, should clarify guidance on (1) agency responsibilities for reporting awards funded by non-annual appropriations; (2) the applicability of USAspending.gov reporting requirements to non-classified awards associated with intelligence operations; (3) the requirement that award titles describe the award’s purpose (consistent with our prior recommendation); and (4) agency maintenance of authoritative records adequate to verify the accuracy of required data reported for use by USAspending.gov. 
Implementation status: OMB and Treasury staff agreed with this recommendation and expect information on authoritative data sources to be included in final DATA Act technical guidance to be made available in late spring 2016. 2. To improve the completeness and accuracy of data submissions to the USASpending.gov website, the Director of OMB, in collaboration with Treasury’s Fiscal Service, should develop and implement a government-wide oversight process to regularly assess the consistency of information reported by federal agencies to the website other than the award amount. 1. To ensure that federal program spending data are provided to the public in a transparent, useful, and timely manner, the Director of OMB should accelerate efforts to determine how best to merge DATA Act purposes and requirements with the GPRAMA requirement to produce a federal program inventory. 2. To ensure that the integrity of data standards is maintained over time, the Director of OMB, in collaboration with the Secretary of the Treasury, should establish a set of clear policies and processes for developing and maintaining data standards that are consistent with leading practices for data governance. Open. As part of their DATA Act implementation efforts, OMB and Treasury staff told us that they have identified authoritative sources for data and are developing validation rules for spending information to be reported under the DATA Act. In addition, the inspector general community is working on standard audit methodologies to verify the accuracy and completeness of agency reporting. OMB and Treasury staff reiterated that the ultimate responsibility for the quality of data lies with the agencies. However, Treasury’s broker service will provide an additional set of validation rules to further improve the quality of data submitted to USAspending.gov. Open. OMB staff told us that identifying “programs” for the purposes of DATA Act reporting would not be completed until after May 2017. 
However, they said they have convened a working group to develop and vet a set of options to establish a government-wide definition for program that is meaningful across multiple communities and contexts (such as budget, contracting, and grants). Open. A Treasury official told us that they are in the process of drafting recommendations for a data governance process that they expect to present to the DATA Act Executive Steering Committee with the goal of completing a process in June 2016 or as soon as practical. To ensure that concerns are addressed as implementation efforts continue, the Director of OMB, in collaboration with the Secretary of the Treasury, should build on existing efforts and put in place policies and procedures to foster ongoing and effective two-way dialogue with stakeholders, including timely and substantive responses to feedback received on the Federal Spending Transparency GitHub website. Implementation status: OMB and Treasury staff described continuing engagement with federal and nonfederal stakeholders through presentations at conferences, roundtable discussions, monthly stakeholder calls, and other venues. They also noted that they have updated the website they use to solicit public comments to improve user access. We have requested documentation of the steps OMB and Treasury have taken to foster ongoing and effective two-way dialogue with stakeholders, including timely and substantive responses to feedback. 1. To capitalize on the opportunity created by the DATA Act, the Secretary of the Treasury should reconsider whether certain assets—especially information and documentation such as memoranda of understanding (MOUs) that would help transfer the knowledge gained through the operation of the Recovery Operations Center—could be worth transferring to the Do Not Pay Business Center to assist in its mission to reduce improper payments. 
Additionally, the Secretary should document the decision on whether Treasury transfers additional information and documentation and what factors were considered in this decision. Open. Treasury officials said that all appropriate assets, such as information and documentation from the Recovery Operations Center, have been transferred to the Do Not Pay Business Center. We requested a list of these assets as well as information on the process Treasury used to determine which assets to transfer. In commenting on a draft of this statement, Treasury provided some documentation regarding the transfer of assets. We will review this information. 1. To help ensure that agencies report consistent and comparable data on federal spending, we recommend that the Director of OMB, in collaboration with the Secretary of the Treasury, provide agencies with additional guidance to address potential clarity, consistency, or quality issues with the definitions for specific data elements, including Award Description and Primary Place of Performance, and that they clearly document and communicate these actions to agencies providing this data as well as to end-users. Open. OMB staff told us that they have a draft version of the clarifying guidance out for agency comment and plan to issue this policy guidance in spring 2016. In addition, OMB is planning to provide additional clarity to specific data element definitions by updating current reporting documents to be consistent with the new technical requirements. 2. To ensure that federal agencies are able to meet their reporting requirements and timelines, we recommend that the Director of OMB, in collaboration with the Secretary of the Treasury, take steps to align the release of finalized technical guidance, including the DATA Act schema and broker, to the implementation time frames specified in the DATA Act Implementation Playbook. Open. 
Treasury officials told us that a stable draft version 1.0 of the reporting submission specification, which is part of the DATA Act Information Model Schema, has been shared with agencies for comment. It will be finalized as soon as possible. Treasury officials said they will finalize the broker once a stable version 1.0 of the schema is complete. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The DATA Act requires OMB and Treasury to establish government-wide data standards and requires federal agencies to begin reporting financial and payment data in accordance with these standards by May 2017. The act also requires OMB to establish a pilot program to develop recommendations for simplifying federal award reporting for grants and contracts. GAO has an ongoing body of work examining implementation of different aspects of the DATA Act. This statement focuses on the following questions: (1) What efforts have been made to develop government-wide standards and associated technical guidance? (2) What implementation challenges and mitigation strategies have been reported by agencies? (3) How effective is OMB's design of the Section 5 Pilot to reduce recipient reporting burden? The statement also provides an update on OMB's and Treasury's efforts to address GAO's DATA Act recommendations. This statement is primarily based on two GAO reports issued in 2016, as well as ongoing work examining agency DATA Act implementation plans. For its work examining agency implementation plans, GAO reviewed 42 plans to identify reported challenges and mitigation strategies that could affect agency progress toward meeting requirements under the act. GAO also interviewed OMB and Treasury staff to update the status of prior open recommendations pertaining to the act. Treasury had technical comments, which GAO incorporated as appropriate; OMB had none. The Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) have taken some significant steps toward implementing the key provisions of the Digital Accountability and Transparency Act of 2014 (DATA Act); however, several challenges need to be addressed in order to successfully meet the act's requirements. Data standards and technical schema. 
GAO reported in January 2016 that OMB and Treasury had issued standardized data element definitions for reporting federal spending, but the lack of key guidance has slowed the ability of agencies to operationalize the data standards. Specifically, OMB and Treasury had not yet released guidance to agencies regarding how some data elements should be reported in order to produce consistent and comparable data. For example, Award Description, defined as a brief description of the purpose of the award, led to different interpretations by agencies. GAO also found that Treasury’s technical guidance continues to evolve and lacks finality, which may impede agency implementation. Treasury has issued several iterative versions of the technical schema that describes the standard format for reporting data elements. Each iteration results in revisions to the technical guidance, which may adversely affect the timely implementation of the act. A finalized technical schema would provide agencies with a stable base from which to develop data submission plans and processes. According to Treasury officials, final draft guidance has been provided to agencies for comment. Agency-reported implementation challenges and mitigation strategies. GAO's ongoing review of required implementation plans submitted to OMB indicates that federal agencies have identified significant challenges in implementing the DATA Act, including competing priorities, resources, systems integration, and guidance. Some agencies also identified strategies to mitigate identified challenges, including effective communication and information sharing and leveraging of existing resources, and reported that additional support from OMB and Treasury is needed for successful implementation. Pilot to reduce recipient reporting burden. OMB has designed a pilot that consists of two parts focused on the grants and procurement communities. 
The Department of Health and Human Services (HHS) has been designated as the executing agent for the grants portion, while OMB leads the procurement portion with support from the General Services Administration's 18F and others. If implemented according to HHS's proposed design, the grants portion of the pilot will likely meet requirements established under the act and will partially reflect leading practices for effective pilot design. However, the procurement portion does not clearly document how it will contribute to meeting the act's requirements, nor does it reflect leading practices for effective pilot design. Although progress has been made, GAO has been unable to close any DATA Act recommendations, including those calling for establishing a data governance structure, developing a federal program inventory, and expanding two-way dialogue with stakeholders. GAO will continue to monitor OMB's and Treasury's progress to address its recommendations as implementation proceeds.
DOE is responsible for a diverse set of missions, including nuclear security, energy research, and environmental cleanup. These missions are managed by various organizations within DOE and largely carried out by contractors at DOE sites. According to federal budget data, NNSA is the largest organization in DOE, overseeing nuclear weapons, nuclear nonproliferation, and naval reactors missions at its sites. With a $10.5 billion budget in fiscal year 2011—nearly 40 percent of DOE’s total budget—NNSA is responsible for, among other things, providing the United States with safe, secure, and reliable nuclear weapons in the absence of underground nuclear testing and maintaining core competencies in nuclear weapons science, technology, and engineering. Ensuring that the nuclear weapons stockpile remains safe and reliable in the absence of underground nuclear testing is extraordinarily complicated and requires state-of-the-art experimental and computing facilities, as well as the skills of top scientists in the field. Over the past decade, the United States has invested billions of dollars in sustaining the cold war-era stockpile and upgrading the laboratories. In 2011, the administration announced plans to request $88 billion from Congress over the next decade to operate and modernize the nuclear security enterprise, ensure that base scientific, technical, and engineering capabilities are sufficiently supported, and ensure that the nuclear deterrent in the United States can continue to be safe, secure, and reliable. Under DOE’s long-standing model of having unique management and operating (M&O) contractors at each site, management of its sites has historically been decentralized and, thus, fragmented. 
Since the Manhattan Project produced the first atomic bomb during World War II, NNSA, DOE, and their predecessor agencies have depended on the expertise of private firms, universities, and others to carry out research and development work and efficiently operate the facilities necessary for the nation’s nuclear defense. DOE’s relationship with these entities has been formalized over the years through its M&O contracts—agreements that give DOE’s contractors unique responsibility to carry out major portions of DOE’s missions and apply their scientific, technical, and management expertise. Currently, DOE spends 90 percent of its annual budget on M&O contracts, making it the largest non-Department of Defense contracting agency in the government. The M&O contractors at DOE’s NNSA sites have operated under DOE’s direction and oversight but largely independently of one another. Various headquarters and field-based organizations within DOE and NNSA develop policies, and NNSA site offices, collocated with NNSA’s sites, conduct day-to-day oversight of the M&O contractors and evaluate the contractors’ performance in carrying out the sites’ missions. NNSA focused considerable attention on reorganizing its internal operations; however, it and DOE have struggled with establishing how NNSA should operate as a separately organized agency within the department. Several factors contributed to this situation. First, DOE and NNSA did not have a useful model to follow for establishing a separately organized agency in DOE. The President’s Foreign Intelligence Advisory Board’s June 1999 report suggested several federal agencies, such as the National Oceanic and Atmospheric Administration in the Department of Commerce, which could be used as a model for NNSA. However, as we reported in January 2007, none of the agency officials we interviewed considered their agency to be separately organized or believed that their agency’s operational methods were transferable to NNSA. 
Second, DOE’s January 2000 implementation plan, which was required by the NNSA Act, did not define how NNSA would operate as a separately organized agency within DOE. Instead, reflecting the opposition of the then DOE senior leadership to the creation of NNSA, the implementation plan “dual-hatted” virtually every significant statutory position in NNSA with DOE officials (i.e., having DOE officials contemporaneously serve in NNSA and DOE positions), including the Director of NNSA’s Office of Defense Nuclear Counterintelligence and General Counsel. As we testified in April 2001, this practice caused considerable concern about NNSA’s ability to function with the independence envisioned in the NNSA Act. Dual-hatting was subsequently forbidden by an amendment to the NNSA Act. A lack of formal agreement between DOE and NNSA in a number of key areas—budgeting, procurement, information technology, management and administration, and safeguards and security—resulted in organizational conflicts that inhibited effective operations. Even where formal procedures were developed, interpersonal disagreements hindered effective cooperation. For example, our January 2007 report described the conflict between NNSA and DOE counterintelligence offices. Specifically, NNSA and DOE counterintelligence officials disagreed over (1) the scope and direction of the counterintelligence program, (2) their ability to jointly direct staff in the headquarters counterintelligence program offices, (3) the allocation of counterintelligence resources, (4) counterintelligence policy making, and (5) their roles and responsibilities in handling specific counterintelligence matters. Subsequently, Congress amended the NNSA Act to consolidate the counterintelligence programs of DOE and NNSA under the Department of Energy. 
The Defense Science Board provides the Department of Defense with independent advice and recommendations on matters relating to the department’s scientific and technical enterprise. See Defense Science Board Task Force, Nuclear Capabilities (Washington, D.C.: December 2006). A November 2011 report found that NNSA, as a result of its separately organized status, maintains a costly set of distinctly separate overhead and indirect cost operations that often duplicate existing DOE functions. For example, NNSA retains separate functions in areas such as, among others, congressional affairs, general counsel, human resources, procurement and acquisition, and public affairs. According to this November 2011 report, these redundant operations are costly and can complicate communications and program execution. There have been continuing calls for removing NNSA from DOE and establishing it as a separate agency. We reported in January 2007 that former senior DOE and NNSA officials with whom we spoke generally did not favor removing NNSA from DOE; we concluded that such drastic change was unnecessary to produce an effective organization. Since its creation, NNSA has made considerable progress resolving some of its long-standing management deficiencies. For example, we reported in June 2004 that NNSA had better delineated lines of authority and improved communication between NNSA headquarters and its site offices. Furthermore, our January 2007 report contained 21 recommendations to the Secretary of Energy and the Administrator of NNSA that were intended to correct deficiencies in five areas—organization, security, project management, program management, and financial management. DOE and NNSA have taken important steps to address most of these recommendations. 
For example, to improve security, we recommended that the Administrator of NNSA, among other things, implement a professional development program for security staff to ensure the completion of needed training, develop a framework to evaluate results from security reviews and guide security improvements, and establish formal mechanisms for sharing and implementing lessons learned across the weapons complex. NNSA has established an effective headquarters security organization and has made significant progress implementing these recommendations by performing security reviews, developing security performance measures, and instituting a security lessons-learned center. Nevertheless, NNSA continues to experience significant deficiencies, particularly in its management of major projects and contracts. As we testified in February 2012, a basic tenet of effective management is the ability to complete projects on time and within budget. However, for more than a decade, NNSA has continued to experience significant cost and schedule overruns on its major projects, principally because of ineffective oversight and poor contractor management. We have reported that NNSA’s efforts to extend the operational lives of nuclear weapons in the stockpile have experienced cost increases and schedule delays, such as a $300 million cost increase and 2-year delay in the refurbishment of the W87 nuclear warhead and a $70 million cost increase and 1-year delay in the refurbishment of the W76 nuclear warhead. Furthermore, we reported that the estimated cost to construct a modern Uranium Processing Facility at NNSA’s Y-12 National Security Complex experienced a nearly sevenfold cost increase, from between $600 million and $1.1 billion in 2004 to between $4.2 billion and $6.5 billion in 2011. 
We also reported in March 2012 that NNSA’s project to construct a new plutonium research facility at Los Alamos National Laboratory—the Chemistry and Metallurgy Research Replacement Nuclear Facility—would cost between $3.7 billion and $5.8 billion—nearly a sixfold increase from NNSA’s original estimate. NNSA’s February 2012 decision to defer construction of this facility for at least 5 years will result in a total delay of between 8 and 12 years from its original plans. NNSA’s planning, programming, and budgeting process has also experienced a setback, which raises questions about the process’s capability and flexibility. Specifically, NNSA’s modernization and operations plans are detailed and annually updated in the agency’s Stockpile Stewardship and Management Plan (SSMP), which provides details of nuclear security enterprise modernization and operations plans over the next two decades. In addition, as discussed above, the NNSA Act requires NNSA to annually submit to Congress an FYNSP—a budget document approved by the Office of Management and Budget that details NNSA’s planned expenditures for the next 5 years. Furthermore, Section 1043 of the National Defense Authorization Act for Fiscal Year 2012 requires the Department of Defense and NNSA to jointly produce an annual report that, among other things, provides a detailed 10-year estimate of modernization budget requirements. NNSA submitted neither an FYNSP based on “programmatic requirements” nor the Section 1043 annual report with its fiscal year 2013 budget submission. In addition, NNSA has yet to release an updated SSMP. According to the Secretary of Energy, the August 2011 Budget Control Act created “new fiscal realities” that have caused the agency to revise its long-range modernization and operations plans and budget. An NNSA official told us that the revised plans, which will include the FYNSP, Section 1043 annual report, and updated SSMP, should be completed in July 2012. 
We are currently reviewing NNSA’s planning, programming, and budgeting process in response to a request from the Subcommittee on Energy and Water Development, Senate Committee on Appropriations, and we expect to issue a report on this work in the next few months. In conclusion, producing a well-organized and effective agency out of what was widely considered a dysfunctional enterprise has been a considerable challenge. In some areas, NNSA can be viewed as a success. In particular, NNSA has successfully ensured that the nuclear weapons stockpile remains safe and reliable in the absence of underground nuclear testing, accomplishing this complicated task by using state-of-the-art facilities, as well as the skills of top scientists. As we testified in February 2012, maintaining government-owned facilities that were constructed more than 50 years ago and ensuring M&O contractors are sustaining critical human capital skills that are highly technical in nature and limited in supply are both difficult undertakings. Careful federal oversight of the tens of billions of dollars NNSA proposes to spend to modernize nuclear facilities will be necessary to ensure these funds are spent as effectively and efficiently as possible, especially given NNSA’s record of weak management of its major projects. Over the past decade, we have made numerous recommendations to DOE and NNSA to improve their management and oversight practices. DOE and NNSA have acted on many of these recommendations and have made considerable progress. Nevertheless, significant management problems remain, prompting some to call for removing NNSA from DOE and either moving it to another department or establishing it as a separate agency. As we concluded in January 2007, however, we do not believe that such drastic changes are necessary, and we continue to hold this view today. 
Importantly, we are uncertain whether such significant organizational changes to increase NNSA’s independence would produce the desired effect of creating a modern, responsive, effective, and efficient nuclear security enterprise. In light of the substantial leadership commitment to reform made by senior DOE and NNSA officials, and the significant improvements that have already been made, we believe that NNSA remains capable of delivering the management improvements necessary to be an effective organization, and we will continue to monitor NNSA’s progress in making these improvements. Chairman Turner, Ranking Member Sanchez, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony are Allison Bawden, Ryan T. Coles, Jonathan Gill, and Kiki Theodoropoulos, Assistant Directors, and Patrick Bernard, Senior Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
During the late 1990s, a lack of clear management authority and responsibility at DOE contributed to security problems at the nation’s nuclear weapons laboratories and to management problems with major projects. In response, Congress created NNSA as a separately organized agency within DOE under the NNSA Act. NNSA is responsible for managing nuclear weapon- and nonproliferation-related national security activities in laboratories and other facilities, collectively known as the nuclear security enterprise. GAO continues to identify problems across the nuclear security enterprise, from projects’ cost and schedule overruns to inadequate oversight of safety and security at NNSA’s sites. With NNSA proposing to spend tens of billions of dollars to modernize its facilities, it is important to ensure scarce resources are spent in an effective and efficient manner. This testimony addresses (1) NNSA’s early experiences organizing and operating as a separately organized agency within DOE and (2) NNSA’s efforts to correct long-standing management deficiencies. It is based on prior GAO reports issued from January 1995 to March 2012. DOE and NNSA continue to act on the numerous recommendations GAO has made to improve NNSA’s management. GAO will continue to monitor DOE’s and NNSA’s implementation of these recommendations. After the enactment of Title 32 of the National Defense Authorization Act for Fiscal Year 2000 (NNSA Act), the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) struggled to determine how NNSA should operate as a separately organized agency within the department. A number of factors contributed to this. First, DOE and NNSA did not have a useful model to follow for establishing a separately organized agency in DOE. Several federal agencies were suggested as models, such as the National Oceanic and Atmospheric Administration in the Department of Commerce. 
However, GAO reported in January 2007 that agency officials GAO interviewed did not consider their agency to be separately organized, nor did they believe that their agency’s operational methods were transferable to NNSA. Second, DOE’s January 2000 plan to implement the NNSA Act did not define how NNSA would operate as a separately organized agency within DOE. Internal DOE opposition to the creation of NNSA led the department to fill virtually every significant statutory position in NNSA with DOE officials (i.e., having DOE officials contemporaneously serve in NNSA and DOE positions). As GAO testified in April 2001, this practice of “dual-hatting” caused considerable concern about NNSA’s ability to function independently. Also, the lack of formal agreements between DOE and NNSA in a number of key areas, such as budgeting and procurement, led to organizational conflicts that inhibited effective operations. Even where formal procedures were developed, interpersonal disagreements hindered effective cooperation. For example, a January 2007 GAO report described the conflict between the NNSA and DOE counterintelligence offices, which led Congress to subsequently amend the NNSA Act to consolidate the counterintelligence programs of DOE and NNSA under DOE. NNSA has made considerable progress resolving some of its long-standing management deficiencies, but significant improvement is still needed, especially in NNSA’s management of its major projects and contracts. GAO reported in June 2004 that NNSA had better delineated lines of authority and had improved communication between its headquarters and site offices. In addition, by establishing an effective headquarters security organization, NNSA has made significant progress in resolving many of the security weaknesses GAO has identified. 
Nevertheless, NNSA continues to experience major cost and schedule overruns on its projects, such as research and production facilities and nuclear weapons refurbishments, principally because of ineffective oversight and poor contractor management. In some areas, NNSA can be viewed as a success. Importantly, NNSA has continued to ensure that the nuclear weapons stockpile remains safe and reliable in the absence of underground nuclear testing. At the same time, NNSA’s struggles in defining itself as a separately organized agency within DOE and the considerable management problems that remain have led to calls from Congress and other organizations to increase NNSA’s independence from DOE. However, senior DOE and NNSA officials have committed to continuing reform, and DOE’s and NNSA’s efforts have led to some management improvements. As a result, GAO continues to believe, as it concluded in its January 2007 report, that drastic organizational change to increase independence is unnecessary and questions whether such change would solve the agency’s remaining management problems.
This section describes nuclear fuel production and uranium enrichment, DOE’s and USEC’s involvement in uranium enrichment, and cleanup of uranium enrichment plants. Uranium enrichment is the process of raising the concentration of uranium-235, which is the isotope of uranium that undergoes fission to release enormous amounts of energy. Uranium is categorized by its concentration of uranium-235, expressed as a percentage of weight or “assay” level. DOE categorizes uranium into five general types, each of which is characterized by a different assay level and has different uses (see table 1). Uranium undergoes a number of processing steps to produce LEU nuclear fuel, beginning with the mining of uranium ore and ending with the fabrication of LEU fuel for nuclear reactors (see fig. 1). The uranium enrichment stage falls approximately in the middle of the nuclear fuel cycle. As can be seen in figure 1, the enrichment process results in two principal products: (1) enriched uranium hexafluoride and (2) leftover “tails” of uranium hexafluoride. These tails are also known as depleted uranium because the material is depleted in uranium-235 compared with natural uranium. Tails are generally considered an environmental liability. The Nuclear Regulatory Commission (NRC) requires uranium enrichment facility operators to provide financial assurance that funds will be available when needed for the disposition of depleted uranium. To meet these NRC requirements, USEC has used surety bonds—which guarantee that a third party will pay the tails disposition costs, among other obligations, in the event that USEC defaults—to guarantee the disposition of its depleted uranium and stored wastes. 
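The relationship between feed, enriched product, and tails can be made concrete with the standard enrichment mass balance and the separative work unit (SWU), the industry-standard measure of enrichment effort. The following sketch is illustrative only and is not drawn from this report; the feed assay (0.711 percent, natural uranium), the 0.25 percent tails assay, and the example product quantity are assumed values:

```python
import math

def value_fn(x):
    """Standard enrichment 'value function' for a U-235 weight fraction x."""
    return (2 * x - 1) * math.log(x / (1 - x))

def enrichment(product_kg, x_product, x_feed=0.00711, x_tails=0.0025):
    """Return (feed_kg, tails_kg, swu) for a desired product quantity.

    Assays are U-235 weight fractions. The natural-uranium feed assay
    and the tails assay defaults are illustrative assumptions.
    """
    # Mass balance: U-235 entering in the feed equals U-235 leaving
    # in the product plus the tails.
    feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
    tails_kg = feed_kg - product_kg
    # Separative work: value of the output streams minus value of the feed.
    swu = (product_kg * value_fn(x_product)
           + tails_kg * value_fn(x_tails)
           - feed_kg * value_fn(x_feed))
    return feed_kg, tails_kg, swu

# Example: 1,000 kg of 4.5 percent-enriched LEU (typical reactor fuel)
feed, tails, swu = enrichment(1000, 0.045)
```

Under these assumed assays, producing 1,000 kg of 4.5 percent LEU consumes roughly 9,200 kg of natural uranium feed and about 6,900 SWU, leaving more than 8,000 kg of depleted uranium tails, which is why the tails stream dominates the mass balance and becomes the environmental liability described above.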
LEU resulting from the enrichment process is valued based on two components: (1) the value of the feed component, which is generally natural uranium in the form of uranium hexafluoride, and (2) the value of the enrichment component, or separative work units (SWU), which is the industry standard for the measure of effort needed to transform a given amount of natural uranium into LEU. According to DOE, the United States needs an assured source of tritium to maintain the U.S. nuclear weapons stockpile. In October 2014, we reported on DOE’s practice of using only unobligated LEU to meet national security needs for tritium; DOE has stated that it can use only unobligated LEU for this purpose. LEU is considered to be unobligated when neither the uranium nor the technology used to enrich it carries an “obligation” from a foreign country regarding its use, such as a requirement that the material only be used for peaceful purposes. These obligations are contained in international agreements to which the United States is a party. In the 1940s, DOE and its predecessor agencies began operating government-owned uranium enrichment plants first to meet national security needs for enriched uranium and later for use as fuel in commercial nuclear reactors. In 1992, United States Enrichment Corporation was established as a government corporation to, among other things, provide uranium enrichment services for the U.S. government and utilities that operate nuclear power plants and to take over operations of DOE’s two GDPs in Portsmouth, Ohio, and Paducah, Kentucky. Then, in 1996, the USEC Privatization Act authorized the government corporation’s sale to the private sector. Two years later, the government corporation was privatized through an initial public offering on July 28, 1998, which resulted in proceeds to the U.S. government of nearly $1.9 billion. Through privatization, United States Enrichment Corporation became a subsidiary of the new private company USEC Inc. USEC Inc. 
then changed its name to Centrus Energy Corp after it emerged from bankruptcy in September 2014. Today, United States Enrichment Corporation continues to be a subsidiary of Centrus. The Energy Policy Act of 1992 required the President to transfer to United States Enrichment Corporation, at its request, any intellectual and physical property related to a type of next-generation uranium enrichment technology called atomic vapor laser isotope separation (AVLIS). In 1973, Lawrence Livermore National Laboratory began conducting research on AVLIS—a technology that uses laser light to separate from natural uranium the specific uranium atoms needed to sustain nuclear reactions. Prior to transferring the technology to United States Enrichment Corporation in 1995 for further research and development and for eventual commercialization, DOE spent more than $1.7 billion developing the technology, which, according to USEC, was expected to use significantly less electricity than gaseous diffusion technology. In June 1999, USEC announced that it was suspending further development on AVLIS technology—on which it had spent over $100 million since the company was privatized—and would instead focus on developing other commercially viable enrichment technologies. According to USEC’s 1999 Annual Report, USEC determined that the returns from AVLIS would not be sufficient to outweigh the risks and costs of further development, and centrifuge technology was a well-established enrichment process. In 2002, DOE and USEC signed an agreement that committed USEC to pursue the development of gas centrifuge technology. This technology, which is now known as American Centrifuge, is based on gas centrifuge technology originally developed by DOE from the 1960s to the 1980s, after which DOE suspended development, in part due to budget constraints. 
According to USEC documents, the American Centrifuge technology would be significantly less energy-intensive and more cost-efficient than the gaseous diffusion process used in the Portsmouth and Paducah GDPs. Subsequently, in 2004, USEC announced its selection of the Portsmouth plant as the future home of the American Centrifuge Plant—the facility where the American Centrifuge technology would be deployed—and received a license to operate the plant from NRC in 2007. DOE and USEC signed a cooperative agreement in 2012 to share the cost of supporting a research, development, and demonstration program for the American Centrifuge technology. According to USEC, the program ended in April 2014 and achieved all of its technical milestones on time and within budget. In May 2014, USEC and UT-Battelle—the management and operating contractor of DOE’s Oak Ridge National Laboratory—signed an agreement to maintain the capability of the American Centrifuge technology. In accordance with the USEC Privatization Act, the government is responsible for all costs incurred by the uranium enrichment program before July 1, 1993, when United States Enrichment Corporation began operating the two GDPs. Due to decreased demand for enrichment services and high costs of operating the GDPs, USEC ceased enrichment operations at the Portsmouth GDP in 2001 and at the Paducah GDP in 2013. These plants, as well as the Oak Ridge GDP (now known as the East Tennessee Technology Park), which was never operated by USEC, are contaminated with hazardous industrial, chemical, nuclear, and radiological materials. Cleanup activities, known as decontamination and decommissioning, include assessing and treating groundwater or soil contamination, disposing of contaminated materials, and making general repairs to keep the plants in a safe condition until they can be fully demolished. 
According to DOE’s 2010 Uranium Enrichment Decontamination and Decommissioning Report, the decontamination and decommissioning of the GDPs will cost billions of dollars and span several decades. DOE is decontaminating and decommissioning the three GDPs in the following phased approach: Oak Ridge GDP: DOE began decontaminating and decommissioning its Oak Ridge GDP in 1994 and estimates that it will be completed in 2024. Portsmouth GDP: DOE began decontaminating and decommissioning its Portsmouth GDP in 2009, announcing that it had contracted with USEC for accelerated environmental cleanup work to prepare the facility for decontamination and decommissioning. In August 2010, DOE entered into a new contract with another contractor (Fluor-B&W Portsmouth LLC) to decontaminate and decommission the former facilities at Portsmouth. According to a March 2014 DOE Office of Inspector General report, the decontamination and decommissioning work at the Portsmouth GDP is currently estimated to extend until 2044. Paducah GDP: DOE has not yet started decontaminating and decommissioning its Paducah GDP. After ceasing enrichment activities in May 2013, Centrus returned full control of the Paducah GDP to DOE in late October 2014. In July 2014, DOE contracted with Fluor Federal Services, Inc., to conduct activities to prepare the facility for eventual decontamination and decommissioning. According to a March 2014 DOE Office of Inspector General report, the decontamination and decommissioning work at the Paducah GDP is currently estimated to extend until 2044. However, according to DOE officials, the department is currently evaluating the projected lifecycle cost and schedule estimates for the Paducah cleanup completion. Since USEC was privatized in 1998 through June 1, 2015, DOE and USEC have engaged in 23 transactions (see app. II for a detailed description of the 23 transactions). 
Based on our analysis of documents and interviews with DOE officials, we grouped these transactions into the following six broad categories: Establishment of USEC. DOE and USEC engaged in 3 transactions to help establish the company as a private company. For example, DOE transferred enriched uranium to USEC, as required by the USEC Privatization Act, from 1998 to 2003. These transfers established value for USEC in the marketplace. In addition, beginning in 1998, DOE agreed to provide employment transition services to USEC for employees affected by restructuring activities that occurred at the Portsmouth and Paducah GDPs as a result of USEC’s privatization. National security. DOE and USEC engaged in 6 transactions for national security purposes. Specifically, DOE engaged in one transaction in 2012 to secure unobligated LEU from USEC to meet national security needs for the production of tritium for up to 18 months, and DOE engaged in a second transaction later in 2012 to secure unobligated LEU from USEC to meet national security needs for the production of tritium for up to 15 years. The other 4 transactions in this category supported the research and development of the American Centrifuge technology to meet long-term national security needs for unobligated LEU, such as for tritium production. For example, in 2010, DOE and USEC signed a cooperative agreement to share the cost of USEC’s development and demonstration of the American Centrifuge technology for a year. To provide its share of the cost, DOE took title to and financial responsibility for the disposal of depleted uranium tails from USEC. Facilities management. DOE and USEC engaged in 5 transactions regarding the operation and management of various facilities, including the Portsmouth and Paducah GDPs, as well as other facilities associated with the development of the American Centrifuge technology. 
For example, in one transaction, DOE signed a lease agreement with United States Enrichment Corporation in 1993—when it became a government corporation—and the lease was transferred to the private corporation when the company was privatized. The agreement included USEC’s lease of the Portsmouth and Paducah GDPs, as well as an electric power agreement and an agreement between DOE and USEC to provide certain services for each other related to the use of the GDPs. In another transaction, after USEC ceased enrichment activities at the Portsmouth GDP, DOE contracted with USEC from 2001 through 2011 for several activities associated with maintaining the facility in a dormant condition and preparing the facility for decontamination and decommissioning. Nuclear materials management and security. DOE and USEC engaged in 3 transactions to support the management and security of nuclear materials. In one transaction beginning in 1999, DOE agreed to pay USEC to provide safeguards and security services for HEU that DOE stored at the Portsmouth GDP. In another transaction beginning in 1999, USEC contracted with DOE for the storage of enriched uranium that exceeded the amount of material USEC could possess in its facilities under NRC limits. In the third transaction, from 2005 through 2008, DOE contracted with a USEC subsidiary to manage the U.S. government’s nuclear materials tracking system, called the Nuclear Materials Management and Safeguards System. Issues from prior transactions. DOE and USEC engaged in 3 transactions to address issues with previous transfers of uranium when DOE had inadvertently provided USEC with uranium that did not conform to industry standards or more uranium than originally agreed on by the parties. For example, in March 2000, USEC discovered that uranium that it had received from DOE prior to privatization was contaminated with technetium, a radioactive metal that is considered a contaminant by commercial specifications for nuclear fuel. 
In a 7-year transaction that began in 2002, DOE (1) contracted with USEC to clean up some of the contaminated uranium, (2) provided replacement uranium and monetary payment to USEC, and (3) took title to some of USEC’s depleted uranium. In a second transaction, in 2003, DOE transferred HEU to USEC to replace other material that DOE transferred to USEC prior to privatization that did not conform to industry standards. In a third transaction, DOE and USEC addressed the fact that they had underestimated the amount of material stored in certain HEU cylinders that DOE had transferred to USEC prior to privatization. Specifically, DOE had transferred to USEC about 0.8 metric tons of HEU more than initially agreed on. To address this issue, in 1998, USEC agreed to pay DOE about $35 million more than originally agreed on by the parties. Other. DOE and USEC engaged in 3 other transactions since 1998. One transaction—which occurred from 2005 through 2006 and involved DOE, USEC, and a third party—was intended to determine the feasibility and benefits of re-enriching a portion of DOE’s depleted uranium inventory for potential use as nuclear fuel in a commercial reactor. In the other two transactions, USEC and its subsidiaries paid a fee for access to DOE restricted data related to the centrifuge technology. Access to this data allowed USEC to utilize DOE centrifuge technology in the development and design of the American Centrifuge technology. See appendix III for a table of the 23 transactions organized by category. Figure 2 shows how the transactions were distributed over the 17-year period that we reviewed. Our analysis shows that the general nature of the transactions evolved over time. Immediately following USEC’s privatization, the majority of the transactions were of the establishment of USEC category. In the middle part of the 17-year period, most of the transactions were of the facilities management and nuclear materials management and security categories. 
In recent years, the majority of the transactions were of the national security category. DOE and USEC have been continuously involved in transactions since 1998. Of the 23 transactions, at least 6 have spanned a decade or longer, while the other transactions were of shorter duration. In addition to the transactions described above, there were at least three other significant arrangements involving DOE and USEC, which were noteworthy because, in each case, DOE or USEC received something of value as part of the arrangement, even though the arrangement did not meet our definition of a transaction. These arrangements were as follows: Before it was privatized, the U.S. government selected United States Enrichment Corporation as the U.S. government’s executive agent for the HEU Purchase Agreement—a 1993 nuclear arms reduction agreement between the United States and Russia. USEC continued its role as sole executive agent after its privatization, and activities under the agreement continued through 2013. Under the agreement, United States Enrichment Corporation, and later USEC, purchased LEU from the Russian government’s executive agent, which had produced it by downblending HEU taken from dismantled Soviet-era nuclear warheads. Centrus officials told us that USEC used its large backlog of contracts with commercial utilities to place the LEU in the market. According to Centrus officials, this agreement provided a significant source of supply of LEU to USEC over a 20-year period and resulted in the destruction of the equivalent of 20,000 nuclear warheads. We did not identify any exchange of funds between DOE and USEC related to USEC’s service as the executive agent. In a December 2006 agreement, DOE granted USEC a nonexclusive patent license for the use or manufacture of the American Centrifuge technology. In this 2006 agreement, USEC agreed to pay DOE a royalty for the use of the American Centrifuge technology. 
According to DOE and Centrus officials, DOE has never received royalties from USEC or Centrus under this license. According to Centrus officials, the company has not made any payments because it has not yet commercialized the American Centrifuge Plant or sold any material produced by the centrifuge technology. In 2012, USEC granted to DOE (1) an irrevocable, nonexclusive, royalty-free license, for use by or on behalf of the United States, in all centrifuge intellectual property for government purposes and (2) an irrevocable, nonexclusive license in all centrifuge intellectual property, with the right to sublicense to other parties, for commercial purposes. This arrangement was made at a time when there was uncertainty surrounding the future of the American Centrifuge technology. According to Centrus officials, USEC has transferred title to DOE for more than 30 existing centrifuges, built at USEC’s expense, as well as all new machines built during the research, development, and demonstration program. DOE identified various monetary and nonmonetary costs and benefits of the 23 transactions. For most transactions that occurred since 2005, DOE officials provided us with information through documents and interviews about the costs and benefits of each transaction. However, for transactions occurring prior to 2005, DOE officials were not always able to provide definitive information about the costs and benefits of the transactions independent of that which was stated in the transactional documents. For transactions occurring after 2005—which mostly fell into the national security category—the costs DOE identified were incurred through the transfer of appropriated funds to USEC, transfer of various types of uranium, and acceptance of responsibility for the future disposition of depleted uranium tails. 
The benefits DOE identified were both monetary (i.e., payments or a reduction in obligations for the disposal of depleted uranium) and nonmonetary (e.g., LEU, national security benefits such as the development of the American Centrifuge technology). For transactions prior to 2005, DOE officials were not always able to provide definitive information on the costs and benefits to DOE independent of that which was stated in the transactional documents. In some cases, for example, DOE officials told us that key officials familiar with the transactions had since retired or were deceased, and therefore information on the costs and benefits of these transactions was not available. In addition, DOE officials told us that the department changed accounting systems in 2004, and therefore the officials could not always access definitive cost and benefit information prior to 2005. For example, DOE officials provided us with information on USEC’s payments to DOE for the lease of the Portsmouth and Paducah GDPs from 2005 to 2014, but they could not provide us with information on USEC’s payments prior to 2005. We provided a draft of this report for comment to the Secretary of Energy on July 29, 2015. DOE provided technical comments that were incorporated, as appropriate. We also provided a technical statement of facts to Centrus Energy Corp. We received technical comments from Centrus and incorporated them, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The objectives of our review were to (1) identify transactions involving the Department of Energy (DOE) and USEC Inc. (USEC, now known as Centrus Energy Corp.) since USEC was privatized in 1998 and (2) describe the costs and benefits, if any, of these transactions to DOE, as identified by DOE. For the purpose of our review, we define a transaction as a contract or agreement providing for an exchange of funds, uranium of any type, or services between or involving DOE and USEC. We included in our scope any transactions that occurred between USEC’s privatization on July 28, 1998, and the present (July 1, 2015), as well as transactions that commenced before July 28, 1998, but that continued to be executed after USEC was privatized. We excluded interactions involving DOE and USEC if no exchange of monetary payment, uranium, or services occurred. To conduct this work, we reviewed and analyzed documents identifying these transactions and collected information regarding the type, purpose, costs, and benefits of the transactions. These documents include annual DOE budget justification materials for fiscal years 1999 through 2016, USEC/Centrus Energy Corp.’s annual reports and corporate filings with the U.S. Securities and Exchange Commission from 1998 through 2015, contracts and agreements between DOE and USEC, and prior GAO reports. Once we identified a preliminary list of transactions involving DOE and USEC, we asked DOE to review the list. DOE officials amended the list and provided documentation for additional transactions to include. Based on our analysis of DOE documents, and through interviews with DOE officials, we added and consolidated certain transactions and removed others that were inconsistent with our definition of a transaction. 
We ultimately developed a final list of 23 transactions. We also interviewed Centrus Energy Corp. officials and provided them an opportunity to review and confirm the final list of transactions to ensure that the list was comprehensive and accurate, and they concurred with the list. We then provided DOE with a standard set of questions regarding the purpose, costs, and benefits of each of the transactions in the list. In two cases, DOE was able to fully complete the standard set of questions. For the other transactions, DOE officials told us that documentation was not fully available to answer the standard question sets for reasons we discuss in the report. Instead, we conducted interviews with DOE officials to collect the information that they knew about each transaction, and we reviewed available DOE and USEC documentation to obtain additional information on the costs and benefits of each transaction. See appendix IV for an example of the standard set of questions we provided to DOE officials on each transaction. For the purpose of this review, in cases where data were available, we are reporting DOE-identified costs and benefits of each transaction. To assess the reliability of the costs and benefits that DOE identified for each transaction, we reviewed documents to corroborate DOE-identified costs and benefits. Such documents included contracts, memorandums of agreement, lease agreements, and summary information from DOE/NRC Form 741. Based on these steps, we determined that the information we are reporting on DOE-identified costs and benefits is sufficiently reliable for the purposes of this review. We conducted this performance audit from November 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Energy Policy Act of 1992 directed the newly created United States Enrichment Corporation to lease DOE’s two gaseous diffusion plants (GDPs) in Ohio and Kentucky. On July 1, 1993, DOE and United States Enrichment Corporation entered into an initial 6-year lease for the GDPs. When USEC was privatized in 1998, the lease was transferred to the private corporation and eventually renewed through July 1, 2016. However, USEC returned both GDPs to DOE prior to 2016. Portsmouth GDP: On December 23, 2010, USEC notified DOE of its intent to return the leased areas of the Portsmouth GDP to DOE. After ceasing uranium enrichment operations in 2001, USEC maintained the Portsmouth plant in cold standby at DOE’s request and subsequently in cold shutdown status until 2011. Paducah GDP: On August 1, 2013, USEC notified DOE of its intent to return the leased areas of the Paducah GDP to DOE on October 21, 2014. DOE and USEC were involved in a 10-year transaction related to the closure of the Portsmouth GDP. Activities related to the closure were performed under one contract and represented two phases at the Portsmouth GDP: (1) cold standby and (2) cold shutdown. Cold standby: In June 2000, USEC announced its decision to cease uranium enrichment operations at the Portsmouth GDP in June 2001. On March 1, 2001, the Secretary of Energy announced that DOE would place the Portsmouth GDP in cold standby mode—a dormant condition that would allow operations to be resumed within 18 to 24 months if needed. In August 2001, DOE and USEC signed an agreement for USEC to provide certain services, including those necessary for maintaining the GDP in cold standby mode. Specifically, beginning in 2001, USEC provided a number of services for DOE related to cold standby, including winterization and removal of deposits of uranium hexafluoride from equipment. 
Cold shutdown: In 2006, DOE and USEC modified the Portsmouth GDP cold standby contract to begin transitioning the GDP to cold shutdown mode. Cold shutdown mode involved work to maintain and prepare the GDP for eventual decontamination and decommissioning. Under this transaction, which spanned 7 years, DOE and USEC contracted for USEC to clean up contaminated uranium; in exchange, DOE provided replacement uranium and payments to USEC and also took title to some of USEC’s depleted uranium. Specifically, in early 2001, USEC notified DOE that up to 9,550 metric tons of about 45,000 metric tons of natural uranium that it had received from DOE prior to privatization was contaminated with technetium—a radioactive metal that is produced as a by-product of fission in a nuclear reactor—at levels exceeding the commercial specification for nuclear fuel. After USEC notified DOE of its contaminated uranium, DOE determined that about 5,517 metric tons of uranium in DOE’s inventory was also contaminated with technetium. According to USEC, replacing the 9,550 metric tons of contaminated uranium would have cost USEC approximately $238 million in 2001. USEC requested that DOE replace USEC’s contaminated uranium with clean uranium from DOE’s inventory. DOE did not admit legal liability for compensating USEC for the contaminated uranium. In addition, according to DOE officials, DOE did not have enough available clean uranium in its excess uranium inventory to replace all of USEC’s contaminated uranium. However, starting in 2002, DOE and USEC signed a series of agreements to decontaminate or replace USEC’s contaminated inventory (see fig. 3 for a summary of the uranium decontamination process). In June 2002, DOE and USEC agreed that, among other things, USEC would process some of the contaminated uranium at the Portsmouth plant for 15 months to remove the technetium. 
USEC would initially pay about half of the costs associated with decontamination, and DOE would compensate USEC by taking title to some of USEC’s depleted uranium, reducing USEC’s costs for eventual disposal of this material. As part of the June 2002 agreement, USEC agreed to formally release DOE from any potential claims of liability as USEC decontaminated the uranium. USEC decontaminated about 2,900 metric tons of uranium under this agreement. DOE and USEC signed two subsequent agreements in September and November 2003 that extended USEC’s decontamination work through December 2003. In 2004, DOE and USEC signed additional agreements for USEC to decontaminate uranium. Specifically, under an April 2004 work authorization, DOE paid USEC using appropriated funds for decontamination work conducted from December 2003 to December 2004. USEC decontaminated about 2,050 metric tons during this time. In October 2004, DOE replaced 2,116 metric tons of USEC’s contaminated uranium with the same amount of uncontaminated uranium. Two months later, in December 2004, USEC agreed to decontaminate an additional amount of contaminated uranium. In June 2006, we reported that DOE had provided USEC about 1,100 metric tons of uncontaminated uranium, which USEC sold on the commercial market for $84.4 million. In addition, in April 2006, DOE sold uranium to obtain funding to compensate USEC for decontamination services that were expected to last from July 2006 through November 2006. According to DOE officials, uranium cleanup activities continued through 2009. 
In 2005, DOE’s Office of Environmental Management, the Bonneville Power Administration, Energy Northwest, and USEC executed a series of agreements to carry out a pilot project to determine whether a portion of DOE’s depleted uranium inventory could be used to produce nuclear fuel for Energy Northwest’s Columbia Generating Station, a nuclear power reactor near Richland, Washington, whose generating capacity the Bonneville Power Administration had purchased. The depleted uranium tails would be re-enriched and used instead of natural uranium-based feed to produce LEU for the Columbia Generating Station. In March 2012, USEC’s financial condition was weakening and, according to DOE officials, USEC was struggling to support the development of the American Centrifuge technology. DOE requested authority to transfer $150 million from existing funds in fiscal year 2012 to support USEC’s development of the American Centrifuge technology, but Congress did not provide this authority. Subsequently, DOE entered into a transaction with USEC in March 2012, under which it accepted title to 13,073 MTU of low-assay tails, along with the responsibility for their disposal, from USEC. This enabled USEC to free up $44 million in previously encumbered funds that were being used as collateral for surety bonds to satisfy NRC’s financial assurance requirements for the tails’ future disposal. In the wake of USEC’s bankruptcy filing in April 2014, the Secretary of Energy tasked DOE’s Oak Ridge National Laboratory with maintaining the operability of the American Centrifuge technology. As operator of Oak Ridge National Laboratory, UT-Battelle signed an agreement—called the “Domestic Uranium Enrichment – Centrifuge Information and Analysis” agreement—with USEC on May 1, 2014, to maintain the capability of and, where possible, advance the American Centrifuge technology in furtherance of DOE’s national security objectives. 
According to Oak Ridge officials, this agreement provides for the collection of data and reporting related to cascade operations and research and development activities. As of January 23, 2015, UT-Battelle had provided USEC $64.5 million in funding. These costs are funded by DOE through UT-Battelle’s contract with DOE. Appendix III: Department of Energy Transactions Involving USEC Inc. or Centrus Energy Corp. by Category In addition to the individual named above, Allison B. Bawden (Assistant Director), Eric Bachhuber, Antoinette Capaccio, Amanda K. Kolling, and Karen Villafana made key contributions to this report. Also contributing to this report were Doreen Eng, Ellen Fried, Risto Laboski, Mehrzad Nadji, Alison O’Neill, Dan C. Royer, and Rebecca Shea.
DOE has had a long and complex relationship with USEC Inc. and its successor, Centrus Energy Corp. Until 2013, USEC, a government corporation that was privatized in 1998, was the only company enriching uranium that, according to DOE, could meet DOE's LEU needs for tritium production. However, USEC ceased enrichment operations in May 2013, and the future of its planned next-generation American Centrifuge enrichment facility is uncertain. GAO has previously reported on financial and other transactions involving DOE and USEC, including transactions that involved the transfer of uranium. GAO was asked to report on the history of the financial relationship between DOE and USEC. This report (1) identifies transactions involving DOE and USEC since USEC was privatized and (2) describes the costs and benefits, if any, of these transactions to DOE, as identified by DOE. GAO defines a transaction as a contract or agreement providing for an exchange of monetary payments, uranium of any type, or services between or involving DOE and USEC occurring from USEC's privatization on July 28, 1998, through July 1, 2015. GAO analyzed key DOE and USEC documents and interviewed DOE and Centrus Energy Corp. officials. The Department of Energy (DOE) has engaged with USEC Inc. (USEC) in 23 transactions since USEC was privatized in 1998 through July 1, 2015. The 23 transactions fall into the following six categories: Establishment of USEC. DOE engaged with USEC in 3 transactions to help establish the company as a private company. For example, from 1998 to 2003, DOE transferred enriched uranium to USEC, as required by the USEC Privatization Act, to establish the company's commercial value. National security. DOE engaged with USEC in 6 transactions for national security purposes. 
For example, DOE engaged in several transactions to secure domestic low-enriched uranium (LEU), used in nuclear reactors, for the production of tritium—a radioactive isotope of hydrogen used to enhance the power of nuclear weapons—and support the development of USEC's next-generation American Centrifuge uranium enrichment technology. Facilities management. DOE engaged with USEC in 5 transactions regarding the operation and management of various facilities. For example, after USEC ceased enrichment operations at the Portsmouth Gaseous Diffusion Plant (GDP)—which it leased from DOE—DOE contracted with USEC from 2001 to 2011 to maintain the facility in a dormant condition and prepare it for future decontamination and decommissioning. Nuclear materials management and security. DOE engaged with USEC in 3 transactions to support nuclear materials management. For instance, in a transaction beginning in 1999, DOE agreed to pay USEC to provide safeguards and security services for highly enriched uranium (HEU), which is used in nuclear weapons, that DOE stored at the Portsmouth GDP. Issues from prior transactions. DOE engaged with USEC in 3 transactions to address issues with previous transfers of uranium. For example, in 2003, DOE transferred HEU to USEC to replace previously transferred material that turned out to be contaminated and that did not conform to industry standards. Other. In 2 other transactions, USEC and its subsidiaries paid a fee for access to DOE restricted data related to the centrifuge technology. A third transaction involved a pilot project to determine the usability of certain uranium as nuclear fuel. DOE identified various monetary and nonmonetary costs and benefits of the 23 transactions. DOE was able to identify the costs and benefits for most transactions that have occurred since 2005. 
For these transactions, DOE incurred costs through the transfer of appropriated funds and various types of uranium, as well as acceptance of responsibility for the future disposition of certain uranium. The benefits DOE received include monetary payments, LEU, and nonmonetary national security benefits. For transactions that occurred or began occurring prior to 2005, DOE was not always able to provide definitive information on its costs and benefits, in part because the agency's accounting system changed in 2004, and agency officials were not able to access information on certain transactions occurring prior to that time. GAO is not making recommendations in this report. DOE reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate.
The Protection and Advocacy system was established in 1975 and was most recently reauthorized in 2000 for 7 years. P&A activities on behalf of individuals with developmental disabilities include legal representation; information and referral services; training and technical assistance in self-advocacy; short-term assistance, mediation and negotiation assistance to obtain benefits and services such as medical care and housing, transportation, and education; representation in administrative appeals; and investigation of reports of abuse and neglect, sexual harassment, inappropriate seclusion and restraint, and other problems. The 57 P&As include 46 that are private, nonprofit agencies; the other 11 are state agencies. P&A staffing typically includes management, investigators, advocates, attorneys, and administrative staff. The P&A in one state we reviewed also contracted with another organization to conduct lawsuits on its behalf. ADD provides annual funding to P&As, the amount of which is determined by a formula that uses several measures, including state population weighted by relative per capita income in the state and a measure of the relative need for services by individuals with developmental disabilities. In fiscal year 2003, ADD funding for P&As was set at $36.3 million, a $1.3 million increase over fiscal year 2002. Funding amounts to states ranged from $345,429 to $2,978,192 for fiscal year 2003. For P&As in California, Maryland, and Pennsylvania, these amounts were $2,978,192, $468,934, and $1,388,495, respectively. P&As also may receive funding from other sources to serve individuals with developmental disabilities, including state and private funds. In addition, P&As often serve populations other than individuals with developmental disabilities and receive separate funding for that purpose. 
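The statutory formula is not spelled out in this report, but a population-weighted allocation of the kind described can be sketched in a few lines. This is a hypothetical illustration only: the specific income weighting, the omission of the relative-need measure, and the example state figures are all assumptions, not the actual ADD formula.

```python
# Hypothetical sketch of a population-weighted allocation: each state's
# share of total funds is proportional to its population weighted by
# relative per capita income, so lower-income states receive a larger
# weight per resident. The weighting scheme and figures are illustrative.

def allocate(total_funds, states):
    """states: dict of name -> (population, per_capita_income)."""
    avg_income = sum(pci for _, pci in states.values()) / len(states)
    # Weight population by relative per capita income: a state with income
    # below the average gets a weight greater than its raw population.
    weights = {
        name: pop * (avg_income / pci)
        for name, (pop, pci) in states.items()
    }
    total_weight = sum(weights.values())
    return {name: total_funds * w / total_weight for name, w in weights.items()}

# Example: split the fiscal year 2003 total of $36.3 million between two
# hypothetical states; state B has half the population but lower income.
shares = allocate(36_300_000, {
    "A": (10_000_000, 50_000),
    "B": (5_000_000, 40_000),
})
```

Under this sketch, the lower-income state receives more funding per resident than the higher-income state, which is the qualitative effect the report attributes to the formula's income weighting.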
Although state developmental disabilities services agencies are primarily responsible for arranging for the provision of services and oversight of quality for services received by individuals with developmental disabilities, the DD Act authorizes P&As to play an important role in monitoring these services. The DD Act authorizes P&As to investigate allegations of abuse and neglect when reported or if there is probable cause to believe that incidents occurred and to pursue legal, administrative, and other appropriate remedies or approaches on behalf of individuals with developmental disabilities. The act grants P&As access to individuals with developmental disabilities and to their records, including reports prepared by agencies or staff on injuries or deaths. Under this authority, P&As typically undertake monitoring efforts to review the adequacy of services that individuals receive in institutions and in community settings and to examine state oversight of quality assurance and regulatory compliance for residential services providers. Many individuals with developmental disabilities for whom P&As advocate are eligible to receive publicly financed residential services through Medicaid, which is the largest source of funds for services for individuals with developmental disabilities. State developmental disabilities services agencies have primary responsibility for monitoring the quality of services provided to individuals with developmental disabilities, including those services funded by Medicaid. In 2002, Medicaid financed 77 percent ($26.8 billion) of the $34.7 billion in total long-term care spending on individuals with developmental disabilities. 
Medicaid spending was about $10.9 billion for ICF/MR residents including those living in large institutions; about $12.9 billion for individuals with developmental disabilities receiving home and community-based services (HCBS) under Medicaid waivers; and an additional $2.9 billion for other services provided in community settings, such as personal care. Residential choices for individuals with developmental disabilities vary by state since states choose whether to offer these individuals services in ICF/MRs, which is an optional rather than a mandatory benefit in Medicaid, and whether to provide services in community settings through HCBS waivers. States may apply to the Centers for Medicare & Medicaid Services (CMS) for waivers under section 1915(c) of the Social Security Act to provide HCBS services as an alternative to institutional care in ICF/MRs and waive certain Medicaid requirements that would otherwise apply, such as statewideness, which requires that services be available throughout the state, and comparability, which requires that all services be available to all eligible individuals. For both the ICF/MR and waiver programs, protecting the health and welfare of Medicaid-covered individuals receiving services is a shared federal-state responsibility. Under the ICF/MR optional benefit program, states annually inspect institutions to ensure that they meet federal quality standards. Under Medicaid waivers, states must include assurances to CMS that necessary safeguards are in place to protect beneficiaries. In pursuing legal remedies on behalf of individuals with developmental disabilities, P&As have represented individuals as well as groups or classes of individuals in lawsuits. All such lawsuits are subject to rules of procedure that govern proceedings in the relevant court. Many of these cases take place in federal court, where the Federal Rules of Civil Procedure (FRCP) apply. 
FRCP Rule 23 establishes procedural requirements for class action lawsuits in federal district court, including the circumstances under which individuals must be notified of their inclusion in a class prior to class formation, referred to as certification by the court, and notified of proposed settlements of lawsuits on their behalf. The requirements vary depending upon whether the suit is for injunctive relief or monetary damages. Lawsuits for injunctive relief seek a court order requiring another party to do or refrain from doing a specified act. For suits seeking injunctive relief, the type of class action suit P&As generally bring, the rule does not require notification of individuals’ inclusion in a class prior to class formation. The rule does, however, require notification of class members at the time of proposed settlement. By contrast, for class action suits seeking monetary relief, the rule requires that individuals be notified of their inclusion in a class prior to its formation. Nationwide and for the three states reviewed, lawsuits related to deinstitutionalization on behalf of individuals with developmental disabilities constitute a small part of overall P&A activities. We identified 24 lawsuits nationwide that P&As filed, joined, or intervened in related to deinstitutionalization from 1975 through 2002. P&As filed or intervened in six of these suits in the three states we examined—California, Maryland, and Pennsylvania—during this same period. Three of the six suits were settled as class actions. The three other suits were intended but not settled as class action lawsuits. P&As in these three states reported that they used litigation of all types, including litigation related to deinstitutionalization, in 1.5 percent of client problems they addressed from fiscal years 1999 through 2001. 
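The notice requirements just described reduce to a simple decision rule. The sketch below is an illustrative simplification of FRCP Rule 23 as characterized in this report, not legal guidance; the function name and string labels are hypothetical.

```python
# Illustrative decision rule for Rule 23 notice as described here:
# suits seeking monetary damages require notice of class membership
# before certification, while all class actions (including those
# seeking injunctive relief) require notice of a proposed settlement.

def notice_required(relief: str, stage: str) -> bool:
    """relief: 'injunctive' or 'monetary'; stage: 'certification' or 'settlement'."""
    if stage == "settlement":
        # Notice of a proposed settlement is required in all class actions.
        return True
    if stage == "certification":
        # Pre-certification notice is required only for damages suits.
        return relief == "monetary"
    raise ValueError(f"unknown stage: {stage}")
```

This captures why, in the injunctive-relief class actions P&As generally bring, class members first receive required notice when a settlement is proposed rather than when the class is certified.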
National data sources indicate that, from 1975 through 2002, P&As filed, joined, or intervened in approximately 24 lawsuits related to deinstitutionalization on behalf of individuals with developmental disabilities. (See app. II.) Most but not all of these lawsuits were intended to be class actions against large public institutions for persons with mental retardation and other developmental disabilities. Moreover, P&As reported that, relative to other activities, they spent a small proportion of staff time on filing class action lawsuits on behalf of individuals with developmental disabilities. Nationally, P&As reported spending about 2 percent of their staff time for this purpose in 2001. From 1975 through 2002, P&As in the three states we reviewed filed or intervened in six lawsuits related to deinstitutionalization on behalf of individuals with developmental disabilities. (See table 1.) Of the six lawsuits, four were brought in federal court and two were brought in state court. Three of these suits were settled as class action lawsuits. The other three suits were intended as class actions but not certified as such by their respective courts. Of these three, one in Maryland was dismissed by mutual agreement of the parties, one in California was settled by a multiparty agreement, and another in California is pending. Although most of the suits were settled a number of years ago, the impact of the suits can be ongoing. For example, the Nelson v. Snider suit in Pennsylvania was settled in 1994 but was part of the impetus for closing the Embreeville Center in 1998. Complaints brought in these lawsuits included allegations of inappropriate care and treatment in state institutions, including abuse and neglect, and violations of constitutional due process rights as well as rights under the Rehabilitation Act of 1973 and the Americans with Disabilities Act. 
The three class action suits resulted in court-ordered settlements requiring state officials to take a variety of actions, including placing of individuals with developmental disabilities in community settings, downsizing or closing of state institutions, and establishing and overseeing of certain quality assurance standards. P&As in California, Maryland, and Pennsylvania used litigation infrequently to address client problems according to available data from fiscal years 1999 to 2001. In their annual reports to ADD, P&As in these states reported using litigation to address 272 client problems over the 3-year period, or about 1.5 percent of all problems addressed. (See table 2.) This included litigation on behalf of named plaintiffs in deinstitutionalization litigation, such as class action lawsuits, and other litigation, such as litigation filed on behalf of individuals. By contrast, P&As reported using other services to address 17,947 client problems, more than 98 percent of all problems addressed. These services include contacting state officials for individuals in need of services such as health care, negotiation and mediation help, technical assistance in self-advocacy, and representation at administrative hearings. P&As in the three states communicated with parents and guardians as required by federal rules in the lawsuits we reviewed. In the three cases settled as class actions, P&As provided notice to all class members at the time settlement was proposed to the court, as required by federal rules. Such notice was not required in the other three cases we reviewed, which were not class actions. Even though P&As provided the notice required by federal rules in the lawsuits we examined, representatives of some parent groups told us they believed that P&As should have communicated with parents and guardians before filing or intervening in these lawsuits and prior to class certification by the court. 
P&As in the three states reviewed indicated that they did not try to communicate with all individuals potentially affected by the six lawsuits, including parents and guardians, but did communicate with organizations representing some parents and guardians during these stages of the lawsuits. However, even if P&As had provided notification during the stages specified by the parents and guardians, under the applicable federal rule of civil procedure an individual has no explicit right to opt out of a class in this type of case. In the three class action lawsuits we reviewed, P&As complied with FRCP Rule 23, which requires communication with all class members prior to settlement. Two of these lawsuits were filed and settled in federal district court, where the FRCP applied directly, and one lawsuit was filed and settled in California superior court, where, under prevailing law at that time, the judge applied the FRCP. FRCP Rule 23 does not require notification of class members prior to class certification in lawsuits seeking injunctive relief, the type of lawsuits generally brought by P&As, although such notice is required in class action lawsuits seeking monetary damages. However, FRCP Rule 23 does require notification at the time of proposed settlement for all class action lawsuits—including those seeking injunctive relief. It specifies that such notice “shall be given to all members of the class in such manner as the court directs.” This notice guarantees that unnamed class members will receive notice of any proposed settlement and have an opportunity to register objections with the court, thereby assisting the court in determining whether the proposed settlement is fair, adequate, and reasonable. We confirmed that such notice was provided in each of the three cases. Such notice was not required in the other three cases we reviewed, which were not class action lawsuits. 
P&As’ communication before a settlement was proposed to the court was not as comprehensive as some parents desired in the lawsuits we reviewed. Representatives of some parent groups told us they were not satisfied with the extent of P&A communication because they believed that P&As should have communicated with parents and guardians in the six lawsuits we examined before filing or intervening in the suits and prior to class certification by the court. P&A officials in California, Maryland, and Pennsylvania told us that they did not try to communicate with all individuals, including parents and guardians, potentially affected by the six lawsuits until a settlement was proposed to the court. However, P&As were not required to provide such communication. In a discussion with NAPAS, the national organization representing P&As, an official told us that for P&As to attempt to contact all such individuals would require considerable time and expense, which would make providing such notice extremely difficult. Furthermore, he said that P&As would not generally wish to provide such notice unless required to do so because this could provide defendants with information they might use to oppose litigation. Nevertheless, P&A officials said that they met or attempted to meet with organizations representing some parents and guardians of affected individuals during the lawsuits. The context of the meetings varied with the circumstances of the six lawsuits. For example, a California P&A official indicated that, both before and after filing the Coffelt lawsuit in 1990, the P&A met with organizations representing the parents and guardians of residents of at least three of the institutions affected. In the other two California lawsuits, Richard S. 
(1997) and Capitol People First (2002), a California P&A official indicated that the P&A met with and represented organizations whose members included the families of institutional residents, and met with individual family members before and during the litigation. The P&A did not, however, meet with parent organizations specifically associated with the institutions. In both of those lawsuits, the organizations specifically associated with the institutions were or are involved as parties, thus complicating direct communication between the P&A and parents and guardians who might belong to these organizations. A Maryland P&A official told us that, before filing the Hunt v. Meszaros litigation in 1991, the P&A met with an organization representing parents and guardians of residents of the affected facility—the Great Oaks Center. A Pennsylvania P&A official told us that the P&A met with a parent group representing Embreeville Center residents during the Nelson v. Snider litigation (1994)—both before filing the lawsuit and after the court’s certification of a class action. These efforts were complicated by the fact that this organization had already filed another lawsuit against the state. A Pennsylvania P&A official said that the P&A tried unsuccessfully to meet with an organization representing parents and guardians of Western Center residents prior to filing the Richard C. v. Snider lawsuit (1989) and that such efforts were complicated by another lawsuit filed against the P&A by that organization. Representatives of some parent groups, however, told us that P&A communication concerning the lawsuits with parents and guardians of affected individuals was limited. Three of the six lawsuits we examined—Nelson v. Snider, Richard C. v. Snider, and Coffelt v. California Department of Developmental Services—were certified by the courts as class actions. 
The P&As indicated that they did not attempt to notify all prospective class members prior to certification of their classes by the court for the reasons discussed above. P&As told us they maintained regular contact with all named plaintiffs in the lawsuits. Representatives of some parent groups said that parents and guardians of individuals affected as unnamed class members in the lawsuits had insufficient opportunity to express their views about the inclusion of their adult children in the class and were not notified that their children might be included until the settlement was proposed to the court. As a result, some individuals may have been included in class actions even though they or their parents or guardians opposed their inclusion. As a matter of law, however, these individuals would have had limited influence even if they had been able to express their views. In class action suits seeking injunctive relief, such as the three we examined, the court focuses on the circumstances of the class as a whole as opposed to those affecting individual members. In such suits, under the rules governing such litigation, an individual has no explicit right to opt out of a class as certified by the court. By contrast, there is an explicit right to opt out of a class in class action lawsuits that seek monetary compensation. P&As assumed various roles in monitoring the health and well-being of individuals with developmental disabilities transferred from institutions to community settings in four of five lawsuits we reviewed in California, Maryland, and Pennsylvania that had been resolved. (See table 3.) No P&A monitoring role has been established in the sixth suit we reviewed, in which litigation is ongoing. In these three states, P&A roles and responsibilities varied with the circumstances of the lawsuits and initiatives P&As undertook as part of their general role to protect and advocate the rights of individuals with developmental disabilities. 
State developmental disabilities services agencies, however, continue to have the primary responsibility for ensuring the health and well-being of individuals, including monitoring these individuals when they receive services in the community. Representatives of some parent groups told us that parents and guardians have been dissatisfied with the adequacy of P&As’ monitoring role in community placements, while representatives of other parent groups told us they generally supported the P&A monitoring role. With respect to the three lawsuits filed and settled as class actions, the settlement agreements did not specify a monitoring role for the P&As, but the P&As assumed specific roles in monitoring individuals transferred to the community. Regarding the other three lawsuits not settled as class actions, the P&A also undertook a role in monitoring affected individuals in one of these suits. P&As are not playing a monitoring role in the other two suits—in one because of the nature of the suit, and in the other because litigation is ongoing. For the three lawsuits settled as class actions—Coffelt (California), Richard C. (Pennsylvania), and Nelson (Pennsylvania)—the P&As assumed the role of monitoring some or all class members transferred to community settings. As a result of the Coffelt settlement in 1994, the California P&A has undertaken the role of monitoring individuals using information that the state was required to provide, such as annual reports about quality of life in community settings, based on consumer and family surveys. P&A monitoring responsibilities for Coffelt’s 11 named plaintiffs involved regular communication with these individuals. For Richard C., a Pennsylvania P&A official told us that the P&A role included hiring an advocate to monitor services provided to all class members while they were still living at the Western Center and after their placement in community settings. 
This advocate was expected to visit each class member discharged from the Western Center after 1994 at least once. A P&A official said that monitoring included face-to-face interaction with class members living at the Western Center or in the community. The P&A has ongoing responsibility for monitoring several individuals who were moved from the Western Center to the Ebensburg Center, another state facility for individuals with mental retardation. For the Nelson lawsuit settled in 1994, the P&A undertook the responsibility to follow 50 class members who did not have involved family members, in addition to monitoring six named plaintiffs. P&As have assumed a role in monitoring state development and implementation of quality assurance mechanisms established by all three settlement agreements to improve services provided in community settings and evaluate services delivered in the community. Thus, these agreements have long-lasting implications for state and P&A monitoring activities because implementation of the settlement agreements may take years to complete. Of the three other lawsuits we reviewed, one was settled, one was dismissed, and the third is ongoing litigation. In the settled suit, Richard S. (California), the P&A did not undertake a monitoring role as a result of this lawsuit. In this suit, the P&A intervention was intended to overturn California state policy permitting family member or guardian veto of community placement decisions, an outcome that did not lead to a P&A role in monitoring individuals affected by this suit. However, California P&A officials reported that the P&A had the role of monitoring the well-being of all individuals who moved from institutions to the community, including individuals affected by the Richard S. suit, based on the role assumed by the P&A in the Coffelt case. In the dismissed suit Hunt (Maryland), the P&A undertook a role to monitor plaintiffs and other affected individuals. 
The Hunt lawsuit was dismissed in 1999 following closure of the Great Oaks Center in 1996. However, the P&A and Arc of Maryland officials reported having a role in assisting families of individuals who had problems with community placements. Finally, California’s Capitol People First (filed in 2002) is in the early stages of litigation and has not yet addressed a P&A monitoring role. Parent groups we interviewed had differing views about the role P&As played in monitoring individuals in the five resolved lawsuits we reviewed. Representatives of some parent groups were generally dissatisfied with the adequacy of P&As’ efforts to monitor the health and well-being of individuals transferred to community settings, while representatives of other parent groups, who were generally in favor of these lawsuits, supported P&As’ monitoring approaches. Those parent groups that were dissatisfied said that in supporting states’ “rapid” deinstitutionalization efforts, P&As disregarded parents’ concerns about service quality deficiencies in community settings and the needs of individuals with severe developmental disabilities, who tend to be medically fragile. They also stated that P&A staff did not adequately monitor individuals who were moved to community settings. In contrast, representatives of other parent groups generally supported the P&A role in monitoring community placements. For example, a representative of one parent group said that the Maryland P&A collaborated with this group in developing a family guide to community programs for people affected by the Hunt lawsuit. Other parent groups said the Pennsylvania P&A was instrumental in establishing consumer and family satisfaction teams to monitor the quality of services provided to individuals and families affected by the Nelson lawsuit. We provided a draft of this report to ACF and to the California, Maryland, and Pennsylvania P&As for their review. 
ACF said the report was a thorough analysis of the three P&As’ involvement in deinstitutionalization lawsuits for the population examined. ACF’s written comments are in appendix III. The three P&As stated that the report is accurate and provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Assistant Secretary for Children and Families and the Commissioner of the Administration on Developmental Disabilities in the Department of Health and Human Services, interested congressional committees, and other parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7118. Another contact and key contributors are listed in appendix IV. We examined (1) the extent to which Protection and Advocacy agencies (P&As) engage in litigation related to deinstitutionalization on behalf of individuals with developmental disabilities, (2) how P&As have communicated with parents and legal guardians in deinstitutionalization lawsuits, and (3) the role, if any, that P&As have played in monitoring the health and well-being of individuals transferred from institutions to community settings within the context of these lawsuits. To determine the extent to which P&As engage in litigation related to deinstitutionalization on behalf of individuals with developmental disabilities, we compared data from several sources and consulted with national and state organizations because there is no single, national source of information on P&A litigation activities.
We analyzed information from two key studies that provide extensive information on deinstitutionalization lawsuits, interviewed the authors of these studies, and examined information on lawsuits provided by the National Association of Protection & Advocacy Systems, Inc. (NAPAS) and Voice of the Retarded (VOR). We also interviewed officials from the Administration on Developmental Disabilities (ADD) in the Administration for Children and Families in the Department of Health and Human Services (HHS), NAPAS, the National Association of State Directors of Developmental Disabilities Services, and the VOR; representatives of several other family advocacy organizations, including the Arc of the United States; and P&A officials in the three states. From these sources, we compiled a national list of 24 deinstitutionalization lawsuits confirmed by NAPAS or state P&As that P&As filed, joined, or intervened in on behalf of individuals with developmental disabilities from 1975 through 2002. (See app. II for a list of all 24 cases identified.) From the national list we identified six lawsuits in three states—California, Maryland, and Pennsylvania—to study in more detail. National organizations that we consulted indicated that these states’ P&As are more active in deinstitutionalization litigation. In addition, we analyzed research on national trends in litigation for institutionalized individuals with developmental disabilities, consulted individuals knowledgeable about P&A deinstitutionalization lawsuits, and examined aggregate and state-specific ADD data from 1999 through 2001 on P&A litigation services provided to this population. To determine how P&As communicated with parents and legal guardians of individuals with developmental disabilities in deinstitutionalization lawsuits, we focused on the six lawsuits in California, Maryland, and Pennsylvania. 
We reviewed class action notification requirements for plaintiffs in federal and state courts and analyzed settlement agreements and other documents related to the six lawsuits. We also discussed the extent of P&A communication with individuals potentially affected by class action litigation with P&A officials and parent representatives in these states. Finally, to determine the role P&As play in monitoring individuals who have been moved from institutions to community settings, we reviewed the authority P&As have under the Developmental Disabilities Assistance and Bill of Rights Act of 2000 to protect and advocate the rights of individuals with developmental disabilities. We interviewed P&A officials in the three states about their roles and responsibilities and reviewed applicable deinstitutionalization settlement agreements and related documentation that they provided. We also interviewed officials from these states’ developmental disabilities services agencies who have primary responsibility for ensuring the quality of services provided to individuals with developmental disabilities. We did not attempt to assess the effectiveness of P&A and state agencies’ quality monitoring efforts or to generalize our study findings to P&As nationwide. We did our work from October 2002 through September 2003 in accordance with generally accepted government auditing standards. In addition to the person named above, key contributors to this report were Anne Montgomery, Carmen Rivera-Lowitt, George Bogart, and Elizabeth T. Morrison. Long-Term Care: Federal Oversight of Growing Medicaid Home and Community-Based Waivers Should Be Strengthened. GAO-03-576. Washington, D.C.: June 20, 2003. Children with Disabilities: Medicaid Can Offer Important Benefits and Services. GAO/T-HEHS-00-152. Washington, D.C.: July 12, 2000. Mental Health: Improper Restraint or Seclusion Use Places People at Risk. GAO/HEHS-99-176. Washington, D.C.: September 7, 1999.
Adults with Severe Disabilities: Federal and State Approaches for Personal Care and Other Services. GAO/HEHS-99-101. Washington, D.C.: May 14, 1999. Medicaid: Oversight of Institutions for the Mentally Retarded Should Be Strengthened. GAO/HEHS-96-131. Washington, D.C.: September 6, 1996. Medicaid: Waiver Program for Developmentally Disabled Is Promising but Poses Some Risks. GAO/HEHS-96-120. Washington, D.C.: July 22, 1996.
Congress established the Protection and Advocacy system in 1975 to protect the rights of individuals with developmental disabilities, most of whom have mental retardation. Protection and Advocacy agencies (P&A) use investigative and legal activities to advocate on behalf of these individuals. Deinstitutionalization has refocused delivery of care to this population over the last several decades from large public institutions to community settings. Refocusing service delivery resulted from (1) the desire to deliver care in the most integrated setting and to control costs and (2) the outcomes of deinstitutionalization lawsuits brought by P&As and others. Some parents have raised concerns that P&As emphasize these suits over other activities, inadequately inform them of family members' inclusion in the suits, and do not adequately monitor individuals after their transfer to the community. GAO was asked to review the extent to which P&As engage in lawsuits related to deinstitutionalization of these individuals, how P&As communicate with affected parents and guardians in these suits, and the role P&As have played in monitoring the well-being of individuals transferred to the community. GAO compiled a national list of lawsuits related to deinstitutionalization involving P&As and reviewed the suits and related activities in three states--California, Maryland, and Pennsylvania. Lawsuits related to deinstitutionalization brought on behalf of persons with developmental disabilities are a small part of P&As' overall activities for this population. GAO identified 24 such lawsuits that P&As filed, joined, or intervened in from 1975 through 2002. During the same period, P&As filed or intervened in 6 of these lawsuits in the three states GAO reviewed--California, Maryland, and Pennsylvania. Three of the 6 were settled as class actions; the other 3 were intended, but not settled, as class actions. One is ongoing, one was dismissed, and one was settled by multiparty agreement. 
P&As' communications with parents and guardians regarding the lawsuits in the three states were consistent with federal rules. For the three suits settled as class actions, P&As complied with the requirement to provide notice to all class members when a settlement agreement is proposed to the court. Such notice was not required in the other three cases, which were not class actions. Representatives of some parent groups told GAO that parents and guardians were dissatisfied with the extent of P&A communication with them before a settlement was proposed, citing problems such as not receiving notice of a family member's inclusion in the class, which the parent or guardian opposed. P&As in the three states told GAO they did not communicate with every person potentially affected by the six lawsuits before a proposed settlement agreement, although they did communicate with organizations representing some parents and guardians during that time. However, even if P&As had made such notification, under the applicable federal rule of civil procedure, an individual has no explicit right to opt out of the class in this type of case. P&As in the three states assumed various roles in monitoring the health and well-being of individuals transferred to community settings in four of the five resolved lawsuits we reviewed, although state developmental disabilities services agencies have the primary responsibility for ensuring the quality of services provided to these individuals. P&As' roles varied with the circumstances of the lawsuits and the initiatives P&As in the three states undertook using their authority to protect and advocate the rights of individuals with developmental disabilities. 
For example, although the three class action settlement agreements did not specify monitoring roles, the P&As assumed roles, such as reviewing information about the quality of community services that the settlement agreements required the states to develop and reviewing care plans of individuals who had been transferred. Representatives of some parent groups told GAO that parents and guardians have been dissatisfied with the adequacy of the P&As' monitoring role in community placements, while representatives of other parent groups said they generally supported the P&A monitoring role. The Administration for Children and Families said GAO's analysis of the three P&As' involvement in deinstitutionalization lawsuits is thorough and the P&As GAO reviewed said that the report is accurate.
For fiscal year 2007, IHS projected a user population of about 1.5 million individuals, or about 35 percent of the population who identified themselves as American Indian or Alaska Native in the 2000 U.S. Census. Not all persons self-identifying as American Indians and Alaska Natives in the U.S. Census are members of federally recognized tribes or descendants of such members, and therefore not all are eligible for IHS services. However, more than half of the federally recognized American Indian and Alaska Native population does not permanently reside on a reservation and therefore may have limited or no access to IHS services because of their distance from IHS-funded facilities. In addition to its headquarters in Rockville, Maryland, IHS consists of a system of IHS-funded facilities organized into 12 geographic areas of various sizes and containing different types of facilities. Each of the 12 areas has an area office, an administrative body that may include an area director, a chief medical officer, and other staff who oversee the area’s budget and programs. See figure 1 for a map of the counties included in the 12 IHS areas. IHS areas include more than 650 IHS-funded health care facilities, including hospitals, health centers, health stations, and UIHP facilities. These facilities are IHS-operated, tribally operated, or overseen by the UIHP, and they mainly offer primary care to small, rural populations, with a limited number of larger health care facilities providing specialty care, such as treatment of HIV/AIDS. The types of facilities in each area vary. For example, the California area has no IHS-funded hospitals, while the Aberdeen area has nine small hospitals. The estimated IHS user population in each of the 12 areas ranges from about 24,000 to about 310,000 (see table 1).
For fiscal year 2006, Congress appropriated approximately $2.7 billion to IHS to primarily provide direct care at IHS-funded facilities and to purchase care outside of IHS through contracts. From this appropriation, IHS also funds public health nursing, health education, and other functions. In addition, in fiscal year 2006, IHS received reimbursements of $681 million from Medicare, Medicaid, and private health insurance, with Medicare and Medicaid contributing almost 90 percent of those reimbursements. These reimbursements were for treatment at IHS-funded facilities of patients who were eligible for Medicare and Medicaid, in addition to IHS health care. More than 50 percent of IHS’s budget supports tribally operated facilities and around 1 percent supports UIHP facilities. Out of the total appropriated for services, approximately $500 million was designated for contract health services. For services that IHS-funded facilities cannot provide, the contract health services funding is used to purchase care for eligible American Indians and Alaska Natives through contracts with outside providers. For example, contract health services money has been used to purchase specialty care that may not be available at a patient’s local IHS-funded facility, such as behavioral health care. While IHS tracks the overall costs of providing health services, it does not itemize those costs by disease; therefore the agency does not track the cost for its facilities to provide HIV/AIDS prevention and treatment services. According to HHS, efforts are under way—primarily by CDC—to fund prevention programs to educate people at highest risk, as well as the general public, about HIV/AIDS and preventing or reducing their risk. CDC reports that HIV prevention programs can include strategies such as the following. HIV testing and counseling. 
According to CDC, individuals at risk for HIV should be offered testing and counseling so that they can be aware of their status and take steps to protect their own health and that of their partners. Testing is a key HIV prevention strategy because, as CDC estimates, more than half of HIV infections are transmitted by individuals who are unaware of their infection. Results from recently available rapid HIV tests are ready the same day, in contrast to results from traditional lab-based tests, which can take up to 2 weeks. Thus rapid testing can help ensure that individuals receive their test results. Moreover, because rapid tests do not require lab facilities or highly trained staff, this type of test can expand access to testing in both clinical and nonclinical settings; however, rapid tests are more expensive than lab-based tests. In addition to testing, counseling services offer patients ways to eliminate or reduce their risk for HIV infection. Partner notification. Sexual or needle-sharing partners of HIV-positive individuals have been exposed to HIV and may be infected. Partner notification services attempt to locate these individuals based on information provided by the patient to provide counseling, education, and other services to prevent infection or, if the individual is infected, provide referrals to care. Health education and risk reduction. Health education provides individuals with the skills and information necessary to avoid or reduce behaviors that put them at risk for HIV infection. Health education services can include individual, group, school, and community interventions, as well as outreach to HIV-positive individuals and HIV-negative individuals at high risk. These services can also include health communication and public information programs for individuals at high risk and the general public. Risk reduction activities can include condom distribution and needle exchange programs.
HHS issues guidelines for the medical management of HIV and for issues surrounding HIV infection. The guideline documents are periodically reviewed and updated by panels of HIV experts, because concepts relevant to management of HIV change rapidly. The recommended treatment for HIV is a combination of three or more drugs, called Highly Active Antiretroviral Therapy (HAART). HAART is used to slow the progression of HIV/AIDS and has reduced the number of HIV/AIDS deaths, but it may have side effects and requires adherence to complicated drug regimens. Additionally, although these drugs can treat HIV infection, HIV cannot be cured. A 2004 Kaiser Family Foundation report estimated the annual cost for providing these drugs was between $10,000 and $12,000 per patient. Beyond drug regimens, patients with HIV/AIDS may require additional specialized care. According to CDC, proper management of HIV/AIDS involves a complex array of behavioral, psychosocial, and medical services, and therefore referral to a health care provider or facility experienced in caring for HIV-infected patients is advised. Treatment must be tailored to the patient’s needs and may include mental health services, substance abuse services, and medical case management, including treatment adherence services. Patients with HIV/AIDS may also require support services, such as housing or transportation assistance. Patients with HIV/AIDS may face barriers to care. A 1998 study reported that patients with HIV/AIDS in both rural and urban areas experienced barriers to treatment services, including a lack of knowledge about the disease, insufficient financial resources, and a lack of employment opportunities.
Moreover, the study found that patients with HIV/AIDS in rural areas—compared to their urban counterparts—reported significantly greater need to travel long distances to medical facilities and personnel; a shortage of adequately trained medical and mental health professionals; a lack of personal or public transportation; and community stigma toward people living with HIV. American Indians and Alaska Natives suffer from HIV/AIDS at higher rates than whites and from a range of other medical conditions at higher rates than the general population. CDC estimated that in 2005, a total of 1,581 American Indians and Alaska Natives were living with AIDS in the 50 states and the District of Columbia. CDC’s 2005 surveillance data also showed that of individuals diagnosed with AIDS from 1997 through 2004, American Indians and Alaska Natives died sooner after diagnosis than did individuals of all other races and ethnicities except blacks. In addition, women accounted for 24 percent of the estimated numbers of American Indians and Alaska Natives living with AIDS in 2005, compared with 12.5 percent for whites. The data also showed that the 10 states with the highest number of American Indians and Alaska Natives living with AIDS in 2005 were: (1) California, (2) Arizona, (3) Oklahoma, (4) Washington, (5) New York, (6) Alaska, (7) North Carolina, (8) New Mexico, (9) Minnesota, and (10) Texas. CDC’s estimate of the number of American Indians and Alaska Natives living with AIDS in the 12 IHS areas, which do not cover the entire United States, was 872 in 2005 (see table 2). HIV/AIDS is one of many health concerns facing American Indians and Alaska Natives. While American Indians and Alaska Natives have the third highest rate of HIV/AIDS after blacks and Hispanics, the disease is not one of the top 10 leading causes of death for this population. 
Some of the major health concerns facing the population include diabetes; heart, liver, and cardiovascular diseases; cancer; unintentional injuries; obesity; substance abuse; and suicide. Given these numerous health concerns, as well as challenges related to poverty and unemployment, the National Alliance of State & Territorial AIDS Directors reports that making HIV/AIDS a priority is often difficult for many American Indian and Alaska Native communities. Although HIV/AIDS is not among the major health concerns for the population, American Indians and Alaska Natives experience high rates of risk factors for HIV infection, such as sexually transmitted diseases and poverty-related conditions. According to 2005 CDC surveillance data by race or ethnicity, American Indians and Alaska Natives had the second highest rates of gonorrhea and chlamydia and the third highest rate of syphilis. CDC notes that these rates suggest that the sexual behaviors that facilitate the spread of HIV are relatively common among American Indians and Alaska Natives. In addition to sexually transmitted diseases, alcohol and drug abuse—which are prevalent in the American Indian and Alaska Native community—are risk factors for HIV transmission. Moreover, conditions related to poverty, such as lower levels of education and poorer access to health care, may increase the risk for HIV infection. During 2002 through 2004, approximately one quarter of American Indians and Alaska Natives—about twice the national average—were living in poverty. American Indians and Alaska Natives also have poorer access to health care than other racial and ethnic groups, with 21 percent of American Indians and Alaska Natives lacking a usual source of medical care, compared to 18 percent of whites in 2004. Furthermore, American Indians and Alaska Natives may be less likely to be tested for HIV than persons of other racial and ethnic groups because of location and confidentiality concerns.
For example, those who live in rural areas may be less likely to be tested for HIV because of limited access to testing. While access to preventive services, such as testing, is a problem for rural populations in general, more American Indians and Alaska Natives, compared with persons of other races and ethnicities, resided in rural areas at the time of their AIDS diagnosis. Also, American Indians and Alaska Natives may be less likely to seek testing because of concerns about confidentiality in close-knit communities, where someone who seeks testing is likely to encounter a friend, relative, or acquaintance at the local health care facility. Many American Indians and Alaska Natives have health insurance coverage and may choose to access services outside of IHS. According to IHS, about 55 percent of the IHS user population has some form of public or private coverage. Of this, about 43 percent are eligible for Medicaid or Medicare. Depending on their eligibility and resources, American Indians and Alaska Natives may have access to health care at facilities available to the general population, such as public or private hospitals and community health centers. For HIV/AIDS care, American Indians and Alaska Natives may also access services at Ryan White-funded facilities. The Ryan White Program provides funding to states, territories, metropolitan areas, and other public or private nonprofit entities to provide health care, medications, and support services to more than 500,000 medically underserved individuals and families affected by HIV or AIDS, including American Indians and Alaska Natives. Specifically, services include outpatient medical and dental care, prescription drugs, case management, home health care, and hospice care. IHS area officials reported that HIV/AIDS prevention services were generally available in all 12 areas. HIV/AIDS education was available in every IHS area. 
Testing services were also available in every IHS area, though the type and extent of the services varied. In addition to education and testing services, officials in some areas mentioned that some facilities provided other services as part of their HIV/AIDS prevention activities, such as condom distribution and partner notification. Officials from IHS area offices reported that HIV/AIDS education services were offered in all 12 areas. Education was provided by a variety of staff, including practitioners, such as physicians and nurses, during medical appointments; tribal health educators; and community health representatives, in various settings, including IHS-funded facilities, tribal health departments, schools, health fairs, and prisons. For example, one provider said that she held bingo nights at a UIHP facility, beginning the evening with an HIV education speaker or presentation. Two tribal health educators and a UIHP official said that they played quiz show games with youth to teach them about HIV/AIDS. IHS officials and tribal health educators noted that HIV/AIDS education materials were available; however, there were challenges with using these materials. Officials in four areas—Albuquerque, Oklahoma City, Portland, and Tucson—noted concerns with the cultural appropriateness of HIV/AIDS education materials. Two tribal health educators reported using materials from sources outside of IHS, such as the American Red Cross and Advocates for Youth; however, they modified their presentations to make them more appropriate and easy to understand. For example, the tribal educators mentioned that they modified the wording of an HIV prevention curriculum’s activity to make it more relevant to their groups. Additionally, one area official said that educators had to revise the materials to a reading level where they could be understood by the target audience.
Despite these education efforts, some IHS officials and advocacy groups noted that misconceptions about HIV/AIDS remained among some in the American Indian and Alaska Native community—for example, that the disease could be contracted from a toilet seat or that only men who have sex with men could become infected. According to IHS officials and service providers, HIV testing services were offered in all 12 IHS areas, but some officials said that services were not available at all facilities. Additionally, the type of testing that was available varied. IHS officials reported that HIV testing was offered primarily to pregnant women and those at high risk for HIV/AIDS. IHS HIV testing services included both lab-based and rapid tests, with officials in 9 IHS areas—Aberdeen, Alaska, Albuquerque, Billings, California, Nashville, Oklahoma City, Phoenix, and Portland—reporting that rapid testing was available in one or more of their facilities. IHS officials reported advantages to rapid testing, including the ability to test pregnant women who were in labor or patients presenting in emergency rooms, and to provide quick results to patients at high risk who are unlikely to return to the facility to receive the results from lab-based tests. Officials in three areas—Aberdeen, Phoenix, and Tucson—reported that some patients do not return to pick up their lab-based HIV test results. However, some IHS officials reported that cost was a barrier to adopting the more expensive rapid testing and that staff required additional training to administer the tests. To address this concern, one area reported providing funding for training on rapid HIV testing for clinical staff. Although testing services were available to some extent in all areas, some IHS officials and advocacy groups expressed concern that some American Indians and Alaska Natives were not being tested for HIV. 
Officials in one area reported that some IHS health care providers may not feel comfortable discussing sexuality, and as a result they may not offer testing to patients in groups at high risk. An official in another area reported that, given more prevalent health concerns, providers did not always discuss HIV/AIDS. An official in a third area said that, while IHS-funded facilities offer testing, there was still a segment of the population who were not tested until they showed symptoms of HIV. In addition, according to IHS officials and advocacy groups, some American Indians and Alaska Natives did not seek or declined testing within IHS due to lack of awareness about the disease, confidentiality concerns, and stigma surrounding the disease. For example, one UIHP facility staff member said that she usually referred individuals to the county health department for HIV testing because the facility’s clients were afraid that their test results would be revealed to IHS staff, many of whom the patients know. An official at one organization that provides case management to American Indians and Alaska Natives reported that some patients did not seek testing because there was a local belief that by being tested one was wishing the disease on oneself. In addition to HIV testing and education services, IHS officials described some other services that were provided as part of their HIV/AIDS prevention activities. Some IHS officials mentioned that IHS facilities were involved in partner notification. For example, an official from one area said that public health nurses notified partners of patients with HIV and other sexually transmitted diseases, followed up with the partners about their testing needs, and provided additional counseling. In addition, officials in some areas mentioned that facilities in their areas distributed condoms as part of their HIV/AIDS prevention activities. 
For example, a provider in one area made condoms available in every exam room at the IHS facilities in the area so that patients were no longer ashamed or embarrassed about seeing, taking, or asking about condoms. Finally, officials in two areas mentioned that tribes in their area had a needle exchange program. While some IHS facilities offered HIV/AIDS treatment services, area officials reported that most patients received treatment from providers at facilities outside of IHS. Five IHS-funded hospitals regularly treated patients and had staff dedicated to providing HIV/AIDS treatment. While other facilities provided limited HIV/AIDS treatment, most relied on outside providers, such as Ryan White-funded facilities or local hospitals. Area officials reported that some patients with HIV/AIDS may not access or continue treatment due to a variety of reasons, including lack of transportation. Of the more than 45 IHS-funded hospitals, officials from IHS headquarters and facilities identified 5 hospitals that regularly treated patients with HIV/AIDS. According to IHS headquarters, 3 facilities have committed the most resources to sustaining HIV/AIDS treatment services: the Alaska Native Medical Center in the Alaska area, the Gallup Indian Medical Center in the Navajo area, and the HIV Center of Excellence at the Phoenix Indian Medical Center in the Phoenix area. For example, the Phoenix Indian Medical Center had staff such as a physician experienced in treating HIV/AIDS and an HIV clinical pharmacist providing HIV/AIDS treatment services. IHS officials reported that treatment services were also regularly provided at 2 other IHS facilities: the Albuquerque Indian Hospital in the Albuquerque area and the W.W. Hastings Indian Medical Center in the Oklahoma City area. These 2 facilities each relied on one physician who regularly treated patients with HIV/AIDS. 
Both physicians reported seeing patients with HIV/AIDS for over 15 years and continued to provide services to patients. Officials from all five of the facilities that regularly treated patients with HIV/AIDS said that some patients received HIV/AIDS services from outside providers. In some cases, the IHS facilities coordinated with outside providers for some HIV/AIDS services. For example, patients at the Gallup Indian Medical Center and the Albuquerque Indian Hospital received case management services outside of IHS. The Gallup Indian Medical Center worked with staff from the Navajo AIDS Network, an organization that provides case management services—including in the Navajo language—to American Indians and Alaska Natives with HIV/AIDS. In addition, the Phoenix Indian Medical Center’s HIV pharmacist arranged for Medicaid-eligible patients to receive their HIV drugs by mail through a pharmacy outside of IHS. Several area officials reported that some of the other IHS facilities provided limited HIV/AIDS treatment services, but most facilities referred patients to outside providers. For example, some facilities had physicians with experience treating HIV/AIDS or provided case management services to patients with HIV/AIDS. According to officials from five areas—Aberdeen, Alaska, Bemidji, Nashville, and Oklahoma City—the facilities that provided HIV/AIDS treatment services were generally larger IHS-funded facilities, particularly hospitals. For example, IHS reported that at least 13 physicians with experience treating HIV/AIDS worked at IHS hospitals other than the five facilities that regularly provided care. At some facilities that did not regularly offer HIV/AIDS treatment services, staff made efforts to provide care when needed. For example, officials in two areas—Albuquerque and Bemidji—reported that staff at a facility in their area had used a hotline to obtain HIV/AIDS treatment information. 
In addition, one UIHP facility in the California area, which has no IHS-funded hospitals, contracted with an HIV/AIDS specialist outside of IHS to provide treatment services at the facility once a week. However, officials reported that none of the other facilities in the area provided HIV/AIDS treatment services. Officials from all 12 areas reported that some patients with HIV/AIDS were treated outside of IHS, citing a variety of settings. Officials in 8 areas— Aberdeen, Alaska, Albuquerque, Billings, Nashville, Oklahoma City, Phoenix, and Tucson—reported that patients in their areas received care from Ryan White-funded facilities. According to HRSA, in 2005 more than 950 of the 2,463 Ryan White-funded facilities across the United States provided services to one or more American Indians or Alaska Natives with HIV/AIDS. In addition, IHS officials noted that American Indians and Alaska Natives may receive HIV/AIDS treatment services from local hospitals or from physicians in private practice. Some patients who receive HIV/AIDS treatment outside of IHS may continue to receive other types of health care from IHS-funded facilities. For example, one IHS official reported that these patients might see a specialist quarterly or once a year for their HIV/AIDS treatment services and an IHS provider for routine care. An official for another area reported that of those patients referred to other providers for HIV/AIDS services, most stay with their IHS-funded facility for their other health care services. IHS area officials noted several reasons why IHS-funded facilities in their areas did not provide HIV/AIDS treatment services. Too few patients and limited experience. Officials for six areas— Albuquerque, Bemidji, California, Nashville, Oklahoma City, and Portland—reported that some facilities did not provide treatment because they did not have any patients known to have the disease. 
Officials for eight areas—Aberdeen, Albuquerque, Bemidji, Billings, Oklahoma City, Phoenix, Portland, and Tucson—reported that providers’ lack of training or experience related to HIV/AIDS was a reason why HIV/AIDS treatment was not provided at some facilities. Chief medical officers from four of the eight areas cited frequently changing HIV/AIDS treatment protocols as a reason why providers might not feel comfortable treating the disease. Allocation of limited resources. Officials for 10 areas—Aberdeen, Albuquerque, Bemidji, Billings, California, Nashville, Oklahoma City, Phoenix, Portland, and Tucson—cited limited IHS resources, such as funding or staff, as a reason for referring patients outside IHS. Officials for 4 of the 10 areas said that, given IHS’s limited resources, including limited staff, and the availability of HIV/AIDS services outside of IHS, they preferred to refer patients to outside providers rather than provide HIV/AIDS treatment services in-house. In addition, officials in 4 of the 10 areas reported that their pharmacies do not provide HAART because of the high cost of the HIV/AIDS drugs or because too few patients seek those drugs from IHS. Other health concerns. Officials in six areas—Alaska, Bemidji, Billings, Oklahoma City, Portland, and Tucson—mentioned that their areas have other health concerns that take precedence over HIV/AIDS. Among the other more prevalent health concerns mentioned were unintentional injuries and diabetes. Moreover, while area officials listed diabetes, accidents, and heart disease as some of the 10 leading causes of death in their areas, only the California area officials listed HIV/AIDS as one of the 10 leading causes of death in their area. See appendix I for the reasons why IHS-funded facilities did not provide HIV/AIDS treatment services, by area. 
IHS area officials and facility providers noted that some American Indians and Alaska Natives with HIV/AIDS may not access or continue care, even if treatment is available, for reasons such as concerns about confidentiality and lack of transportation. Officials in the 12 IHS areas reported that patients’ concerns with confidentiality and stigma in close-knit communities were reasons why some patients did not access care from IHS. Officials from 7 areas—Aberdeen, Alaska, Bemidji, California, Navajo, Oklahoma City, and Portland—reported that some patients with HIV/AIDS were concerned that their friends or relatives who work or access services at IHS would learn about their HIV status. For example, an official for one rural area said that in villages many people are related to IHS community health aides and other service providers, which increases patients’ reluctance to disclose their HIV status and seek HIV/AIDS treatment services. Officials in 7 areas—Alaska, Albuquerque, Bemidji, Billings, Oklahoma City, Phoenix, and Tucson—mentioned that distance to HIV/AIDS treatment services or lack of transportation may affect American Indians’ and Alaska Natives’ ability to access care. Officials in one area reported knowing of an isolated region in one state in the area that had “clear unmet needs” because it was located 300 miles from any facilities—IHS or otherwise—with HIV/AIDS treatment services. In one urban area, an official reported that relying on public transportation was a barrier to treatment because it can be unreliable and unaffordable for many clients. Area officials in Albuquerque, Phoenix, and Navajo said that patients may not access treatment because of cultural reasons. One official noted that traditional healing practices may take priority over western medicine. In addition, this official noted that, in some communities, family obligations may also take priority over treatment. 
For example, he said that a patient may miss an appointment because he or she chose to be with a sick family member in another state. Some area officials reported that there were other factors that could affect a patient’s continuation of HIV/AIDS treatment, such as alcohol or drug abuse or lack of housing. Officials for five areas—Alaska, Albuquerque, Navajo, Phoenix, and Tucson—cited concerns with patients with HIV/AIDS adhering to their treatment programs, partly due to substance abuse. In addition, officials for two IHS-funded facilities noted that housing can be of concern. For example, one of the facility officials said that an HIV-positive patient from a small community moved to a nearby city because the patient’s home lacked both heat and water, compromising the patient’s health. See appendix I for the reasons why American Indians and Alaska Natives with HIV/AIDS did not access or continue HIV/AIDS treatment services, by area. IHS has undertaken outreach and planning, capacity building, and surveillance initiatives related to HIV/AIDS. These initiatives are overseen by national and area-level officials. IHS’s outreach and planning initiatives include an HIV/AIDS program Web site, an HIV listserv, and a national HIV/AIDS administrative work plan. IHS has also carried out several initiatives aimed at building the capacity of its providers to offer HIV/AIDS-related prevention and treatment services, such as training of health care providers and implementation of an HIV-related data system. Additionally, IHS has undertaken initiatives related to improving the surveillance of HIV/AIDS in the American Indian and Alaska Native population by developing a prenatal HIV screening measure and an early detection surveillance system. IHS initiatives related to HIV/AIDS are overseen by a national IHS HIV/AIDS program official or by officials at the area level. 
The national program is coordinated by an HIV/AIDS principal consultant, the only full-time staff member dedicated to these initiatives. Program initiatives are often conducted in collaboration with other IHS personnel and are supported by IHS and outside funding sources, such as the Minority AIDS Initiative. These additional IHS personnel who support IHS’s HIV/AIDS initiatives do so in addition to other full-time duties. At the area level, HIV/AIDS initiatives are often conducted as part of broader health promotion and disease prevention programs. Officials in five areas reported having staff who acted as area HIV/AIDS coordinators, but few of those staff worked full-time on HIV/AIDS and all had other duties, such as providing behavioral health education or acting as a consultant for other diseases. IHS has undertaken several outreach and planning initiatives, including an HIV/AIDS program Web site, an HIV listserv, and a national HIV/AIDS administrative work plan. Web site. A public Web site, www.ihs.gov/MedicalPrograms/HIVAIDS, contains information on American Indian and Alaska Native-related HIV/AIDS research, HIV/AIDS clinical treatment guidelines, and links to other relevant Web sites, including grant and funding resources. It was launched March 21, 2007, on the first National Native HIV/AIDS Awareness Day. As of July 2007, the Web site had more than 3,500 unique visitors, an average of 36 visits a day, according to an IHS official. Listserv. The HIV/AIDS principal consultant operates an HIV listserv, which e-mails information of general interest to those working with American Indians and Alaska Natives with HIV/AIDS, such as HIV/AIDS-related news, recent research, and funding opportunities. An IHS official reported that the listserv included about 650 individuals, including American Indian and Alaska Native community members and officials from IHS, tribes, and American Indian and Alaska Native advocacy groups. HIV/AIDS administrative work plan. 
According to IHS, as of September 2007, a national IHS HIV/AIDS administrative work plan was nearing completion. The plan is intended to integrate multiple activities to help improve IHS surveillance, information sharing, and data collection. The plan will determine HIV/AIDS intervention priority areas, describe the activities to be conducted within each priority area, and identify key personnel and organizations with responsibility for each activity. The plan is also intended to be a 3-year administrative blueprint for further development and progression of the HIV/AIDS program. As of September 2007, the plan was in draft form and being circulated both within and outside of IHS for comment. The HIV/AIDS principal consultant said that the work plan would be finalized and issued in the fall of 2007. Collaboration with other organizations. IHS had signed or was developing memoranda of understanding with other organizations, including HRSA and the Substance Abuse and Mental Health Services Administration (SAMHSA), on various HIV/AIDS activities. IHS and HRSA have signed a 3-year memorandum of understanding to collaborate on multiple HIV/AIDS initiatives in an effort to decrease duplication of services, increase awareness of common resources, and improve coordination and quality of services to American Indians and Alaska Natives. IHS and SAMHSA were developing a memorandum of understanding to train IHS staff to conduct HIV/AIDS rapid testing. The memorandum was expected to be implemented in early 2008. In addition, six areas reported working with local organizations on HIV/AIDS initiatives. For example, an official in the Aberdeen area reported that the area has an HIV/AIDS task force consisting of clinical providers, community health representatives, and HIV coordinators from state health departments in the Aberdeen area. The task force is initiating an HIV strategic plan for the area. 
IHS also has carried out several initiatives aimed at building the capacity of providers to offer HIV/AIDS-related prevention and treatment services. HIV/AIDS collaborative training. IHS provides HIV/AIDS training for IHS-funded staff in 2-and-1/2-day sessions funded by HHS’s Minority AIDS Initiative. Since fiscal year 2005, the sessions have focused on HIV/AIDS behavioral health issues, capacity and partnership building, and related intervention strategies. Topics for training to be conducted during 2007 and 2008 include: reporting, data collection, best practice models, clinical practice issues, prevention policies and procedures, and culturally appropriate pre- and posttest counseling interview techniques. IHS also plans to use this funding to conduct a 1-day Traditional Healers Summit to discuss HIV/AIDS with traditional healers. IHS officials noted this would be the first training of this kind for any disease. Training IHS community health representatives. IHS also received funding from the Minority AIDS Initiative to provide community health representatives with HIV/AIDS-related training. These training sessions will be presented by health care professionals and will teach community health representatives about facts, fears, and public perceptions about sexually transmitted diseases, including HIV/AIDS. Community health representatives will also be coached on how to present this information on their reservations. These sessions were scheduled to take place in November 2007. Area-organized training and conferences. In addition to training overseen by the IHS HIV/AIDS Program, officials from eight area offices reported offering HIV/AIDS regional training sessions or conferences to tribal leaders, clinical providers, and community members. For example, the Aberdeen area holds an annual conference on HIV/AIDS where attendees learn about local resources, funding resources, and possible partnership opportunities with IHS, the state, and tribes. 
HIV/AIDS telemedicine support network. With Minority AIDS Initiative funding, the HIV Center of Excellence in the Phoenix Indian Medical Center created an HIV/AIDS telemedicine support network for health care providers in IHS-operated, tribally operated, and UIHP facilities to expand the quality and availability of HIV/AIDS communication, training, support, and expert consultation. An IHS official said that the goal of this network is to increase the availability of HIV/AIDS treatment by providing facilities with access to HIV/AIDS experts and consultants. The network is still in the developmental stages and, according to IHS, is initially being targeted to 16 IHS-funded facilities. HIV Management System. In September 2006, IHS implemented its HIV Management System (HMS), a data system intended to help clinical providers and case managers provide quality care to HIV/AIDS patients and those at risk for the disease. When a facility enters its data into HMS, the system can generate quality-of-care audit reports or send reminders to providers when patients with HIV/AIDS need care. IHS officials could not estimate how many facilities will use HMS, noting that participation is voluntary. As of October 2007, staff from 12 facilities had been trained in how to use the system. HMS originally was funded by the Minority AIDS Initiative; however, IHS did not receive funding for fiscal year 2007 to continue this system. IHS officials said that despite the loss of funding they will continue to support HMS with IHS resources, but that some of their efforts, such as the evaluation of the program, will have to be curtailed. Officials said they plan to reapply for funding for fiscal year 2008. Increased HIV testing. IHS also received funding in fiscal year 2007 from the Minority AIDS Initiative to continue to increase HIV screening at UIHP facilities. 
Seven awards of approximately $45,000 will be issued to urban facilities in order to enhance HIV testing, including rapid testing and standard lab-based testing, and to provide a more targeted effort to address HIV/AIDS prevention in some of the largest urban American Indian and Alaska Native populations in the United States. This initiative is expected to expand services to patients, build IHS’s testing capacity, and collect data about barriers to testing services. IHS has undertaken two initiatives to improve surveillance of HIV/AIDS in the American Indian and Alaska Native population. Prenatal HIV screening. In 2005, IHS implemented a new Government Performance and Results Act measure that examines the percentage of pregnant IHS patients screened for HIV in a year. The 2006 target for this measure was 55 percent of IHS’s pregnant patients screened for HIV within the last year; the actual percentage of patients screened was 65 percent. For 2007, IHS’s target was to ensure that the proportion of pregnant female patients screened for HIV did not decrease more than 1 percent from the 2006 level. For 2007, the percentages of pregnant women screened by IHS ranged from 48 percent to 88 percent among the areas, with an overall screening rate of 74 percent. Early detection surveillance system. With funding from the Minority AIDS Initiative, IHS is developing a national early warning system to detect increases in the rate of HIV infection for American Indian and Alaska Native populations at high risk. This initiative aims to enhance and improve screening for HIV in prenatal populations by examining a sample of IHS facilities from which data are collected electronically. From this sample, IHS wants to be able to detect any changes in the rates of HIV infection among pregnant women. 
In addition, the initiative includes conducting a knowledge, attitude, and practice survey of health care professionals on CDC’s new, broader HIV screening guidelines to identify misunderstandings and obstacles and accelerate the adoption of the new guidelines in IHS-funded facilities. An IHS official said that the survey was being developed and was expected to be completed by December 2007. The early surveillance initiative also seeks to analyze the rate of HIV screening among patients who have tested positive for a sexually transmitted disease, patients who have tested positive for other diseases that typically coexist with HIV/AIDS, and unique individuals screened for HIV in order to estimate the proportion of the IHS user population who are aware of their HIV status. We provided a draft of this report to HHS for comments from IHS, CDC, and HRSA. We received written comments from HHS. HHS substantially agreed with the findings of our report and offered technical comments to provide additional information or clarify specific findings, which we incorporated as appropriate. The letter included with HHS’s comments is reprinted in appendix II. Generally, HHS’s technical comments requested that we provide additional context about IHS’s capacity to provide HIV/AIDS prevention and treatment services. HHS commented that IHS is mainly a primary care system and generally relies on providers outside of IHS for HIV/AIDS treatment services. HHS stated that IHS generally refers patients with HIV/AIDS to outside providers as it does for other complex conditions, such as cancer and heart disease. In addition, HHS noted that the barriers to HIV/AIDS testing and misconceptions about the disease mentioned in this report are not unique to the American Indian and Alaska Native communities. We are sending copies of this report to the Secretary of Health and Human Services. We will also make copies available to others on request. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7114 or ekstrandl@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the contact named above, Karen Doran, Assistant Director; Catina Bradley; Adrienne Griffin; Christina Ritchie; Eden Savino; and Timothy Walker made key contributions to this report. Ryan White CARE Act: Changes Needed to Improve the Distribution of Funding. GAO-06-703T. Washington, D.C.: April 27, 2006. Ryan White CARE Act: AIDS Drug Assistance Programs, Perinatal HIV Transmission, and Partner Notification. GAO-06-681T. Washington, D.C.: April 26, 2006. Ryan White CARE Act: Improved Oversight Needed to Ensure AIDS Drug Assistance Programs Obtain Best Prices for Drugs. GAO-06-646. Washington, D.C.: April 26, 2006. HIV/AIDS: Changes Needed to Improve the Distribution of Ryan White CARE Act and Housing Funds. GAO-06-332. Washington, D.C.: February 28, 2006. Indian Health Service: Health Care Services Are Not Always Available to Native Americans. GAO-05-789. Washington, D.C.: August 31, 2005. Ryan White CARE Act: Factors that Impact HIV and AIDS Funding and Client Coverage. GAO-05-841T. Washington, D.C.: June 23, 2005.
American Indians and Alaska Natives have the third highest rate of HIV/AIDS diagnosis in the United States. They are also more likely than individuals with HIV/AIDS from other racial and ethnic groups to receive treatment at later stages of the disease and have shorter life spans. The Indian Health Service (IHS), located within the Department of Health and Human Services (HHS), provides health care services, including HIV/AIDS treatment, to eligible American Indians and Alaska Natives. IHS patients with HIV/AIDS may also receive care through other sources depending on their access to private health insurance or their eligibility for other federal health care programs, such as Medicare and Medicaid. GAO examined the extent to which IHS provided (1) HIV/AIDS prevention services and (2) HIV/AIDS treatment services. GAO also examined (3) what other HIV/AIDS-related initiatives IHS has undertaken. GAO reviewed documents and interviewed officials from IHS headquarters, area offices, and IHS-funded facilities, as well as advocacy groups. GAO also conducted site visits in two IHS areas. HIV/AIDS prevention services were generally available from IHS, but these services varied across the 12 IHS areas. HIV/AIDS education was provided in all areas in a variety of settings, such as IHS-funded facilities, schools, and health fairs. In addition to education, IHS offered HIV testing services in all areas; however, the type and extent of services varied. In addition, some IHS officials described other services that were provided as part of their HIV/AIDS prevention activities, such as condom distribution. According to IHS officials, HIV/AIDS treatment services, while offered at some IHS facilities, were generally received outside of IHS. Five IHS-funded hospitals, such as the Phoenix Indian Medical Center in Arizona, regularly treated patients. Although some other IHS facilities provided limited treatment services, most relied on outside providers. 
For example, IHS patients with HIV/AIDS might see a specialist outside of IHS every 3 months for their HIV/AIDS treatment services and an IHS provider for other routine care. IHS officials reported that most IHS facilities did not provide treatment services because they had few American Indian or Alaska Native patients known to have HIV/AIDS, had limited resources, focused on other health concerns, or their providers had limited training or experience treating the disease. Additionally, some patients may not access or continue treatment from IHS or outside providers due to concerns about confidentiality and lack of transportation to distant facilities. IHS has undertaken outreach and planning, capacity building, and surveillance initiatives related to HIV/AIDS. These initiatives are overseen by national and area-level IHS officials. The outreach and planning initiatives include an HIV/AIDS Web site and the development of a national HIV/AIDS administrative work plan. IHS has also undertaken several initiatives aimed at building the capacity of providers to offer HIV/AIDS-related prevention and treatment services, such as training of health care providers and implementation of an HIV/AIDS-related data system that can send providers reminders when patients with HIV/AIDS need care. Finally, IHS has undertaken initiatives related to improving the surveillance of HIV/AIDS in the American Indian and Alaska Native population by developing a prenatal HIV screening measure and an early detection surveillance system. GAO received written comments from HHS on a draft of this report. HHS substantially agreed with the findings of this report. HHS also offered technical comments to provide additional information or clarify specific findings, which we incorporated as appropriate.
A domestic bioterrorist attack is considered to be a low-probability event, in part because of the various difficulties involved in successfully delivering biological agents to achieve large-scale casualties. However, a number of cases involving biological agents, including at least one completed bioterrorist act and numerous threats and hoaxes, have occurred domestically. In 1984, a group intentionally contaminated salad bars in restaurants in Oregon with salmonella bacteria. Although no one died, 751 people were diagnosed with foodborne illness. Some experts predict that more domestic bioterrorist attacks are likely to occur. The burden of responding to such an attack would fall initially on personnel in state and local emergency response agencies. These “first responders” include firefighters, emergency medical service personnel, law enforcement officers, public health officials, health care workers (including doctors, nurses, and other medical professionals), and public works personnel. If the emergency required federal disaster assistance, federal departments and agencies would respond according to responsibilities outlined in the Federal Response Plan. Several groups, including the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction (known as the Gilmore Panel), have assessed the capabilities at the federal, state, and local levels to respond to a domestic terrorist incident involving a weapon of mass destruction (WMD), that is, a chemical, biological, radiological, or nuclear agent or weapon. While many aspects of an effective response to a bioterrorist attack are the same as those for any disaster, there are some unique features. For example, if a biological agent is released covertly, it may not be recognized for a week or more because symptoms may not appear for several days after the initial exposure and may be misdiagnosed at first. 
In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These differences require a type of response that is unique to bioterrorism, including infectious disease surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics to large segments of the population to prevent the spread of an infectious disease. However, some aspects of an effective response to bioterrorism are also important in responding to any type of large-scale disaster, such as providing emergency medical services, continuing health care services delivery, and managing mass fatalities. Federal spending on domestic preparedness for terrorist attacks involving WMDs has risen 310 percent since fiscal year 1998, to approximately $1.7 billion in fiscal year 2001, and may increase significantly after the events of September 11, 2001. However, only a portion of these funds was used to conduct a variety of activities related to research on and preparedness for the public health and medical consequences of a bioterrorist attack. We cannot measure the total investment in such activities because departments and agencies provided funding information in various forms—as appropriations, obligations, or expenditures. Because the funding information provided is not equivalent, we summarized funding by department or agency, but not across the federal government (see apps. I and II). Research is currently being done to enable the rapid identification of biological agents in a variety of settings; develop new or improved vaccines, antibiotics, and antivirals to improve treatment and vaccination for infectious diseases caused by biological agents; and develop and test emergency response equipment such as respiratory and other personal protective equipment. 
Appendix I provides information on the total reported funding for all the departments and agencies carrying out research, along with examples of this research. The Department of Agriculture (USDA), Department of Defense (DOD), Department of Energy, Department of Health and Human Services (HHS), Department of Justice (DOJ), Department of the Treasury, and the Environmental Protection Agency (EPA) have all sponsored or conducted projects to improve the detection and characterization of biological agents in a variety of different settings, from water to clinical samples (such as blood). For example, EPA is sponsoring research to improve its ability to detect biological agents in the water supply. Some of these projects, such as those conducted or sponsored by DOD and DOJ, are not primarily for the public health and medical consequences of a bioterrorist attack against the civilian population, but could eventually benefit research for those purposes. Departments and agencies are also conducting or sponsoring studies to improve treatment and vaccination for diseases caused by biological agents. For example, HHS’ projects include basic research sponsored by the National Institutes of Health to develop drugs and diagnostics and applied research sponsored by the Agency for Healthcare Research and Quality to improve health care delivery systems by studying the use of information systems and decision support systems to enhance preparedness for the delivery of medical care in an emergency. In addition, several agencies, including the Department of Commerce’s National Institute of Standards and Technology and DOJ’s National Institute of Justice, are conducting research that focuses on developing performance standards and methods for testing the performance of emergency response equipment, such as respirators and personal protective equipment. 
Federal departments’ and agencies’ preparedness efforts have included increasing federal, state, and local response capabilities, developing response teams of medical professionals, increasing the availability of medical treatments, participating in and sponsoring terrorism response exercises, planning to aid victims, and providing support during special events such as presidential inaugurations, major political party conventions, and the Super Bowl. Appendix II contains information on total reported funding for all the departments and agencies with bioterrorism preparedness activities, along with examples of these activities. Several federal departments and agencies, such as the Federal Emergency Management Agency (FEMA) and the Centers for Disease Control and Prevention (CDC), have programs to increase the ability of state and local authorities to successfully respond to an emergency, including a bioterrorist attack. These departments and agencies contribute to state and local jurisdictions by helping them pay for equipment and develop emergency response plans, providing technical assistance, increasing communications capabilities, and conducting training courses. Federal departments and agencies have also been increasing their own capacity to identify and deal with a bioterrorist incident. For example, CDC, USDA, and the Food and Drug Administration (FDA) are improving surveillance methods for detecting disease outbreaks in humans and animals. They have also established laboratory response networks to maintain state-of-the-art capabilities for biological agent identification and characterization of human clinical samples. Some federal departments and agencies have developed teams to directly respond to terrorist events and other emergencies. For example, HHS’ Office of Emergency Preparedness (OEP) created Disaster Medical Assistance Teams to provide medical treatment and assistance in the event of an emergency. 
Four of these teams, known as National Medical Response Teams, are specially trained and equipped to provide medical care to victims of WMD events, such as bioterrorist attacks. Several agencies are involved in increasing the availability of medical supplies that could be used in an emergency, including a bioterrorist attack. CDC’s National Pharmaceutical Stockpile contains pharmaceuticals, antidotes, and medical supplies that can be delivered anywhere in the United States within 12 hours of the decision to deploy. The stockpile was deployed for the first time on September 11, 2001, in response to the terrorist attacks on New York City. Federally initiated bioterrorism response exercises have been conducted across the country. For example, in May 2000, many departments and agencies took part in the Top Officials 2000 exercise (TOPOFF 2000) in Denver, Colorado, which featured the simulated release of a biological agent. Participants included local fire departments, police, hospitals, the Colorado Department of Public Health and the Environment, the Colorado Office of Emergency Management, the Colorado National Guard, the American Red Cross, the Salvation Army, HHS, DOD, FEMA, the Federal Bureau of Investigation (FBI), and EPA. Several agencies also provide assistance to victims of terrorism. FEMA can provide supplemental funds to state and local mental health agencies for crisis counseling to eligible survivors of presidentially declared emergencies. In the aftermath of the recent terrorist attacks, HHS released $1 million in funding to New York State to support mental health services and strategic planning for comprehensive and long-term support to address the mental health needs of the community. DOJ’s Office of Justice Programs (OJP) also manages a program that provides funds for victims of terrorist attacks that can be used to provide a variety of services, including mental health treatment and financial assistance to attend related criminal proceedings. 
Federal departments and agencies also provide support at special events to improve response in case of an emergency. For example, CDC has deployed a system to provide increased surveillance and epidemiological capacity before, during, and after special events. Besides improving emergency response at the events, participation by departments and agencies gives them valuable experience working together to develop and practice plans to combat terrorism. Federal departments and agencies are using a variety of interagency plans, work groups, and agreements to coordinate their activities to combat terrorism. However, we found evidence that coordination remains fragmented. For example, several different agencies are responsible for various coordination functions, which limits accountability and hinders unity of effort; several key agencies have not been included in bioterrorism-related policy and response planning; and the programs that agencies have developed to provide assistance to state and local governments are similar and potentially duplicative. The President recently took steps to improve oversight and coordination, including the creation of the Office of Homeland Security. Over 40 federal departments and agencies have some role in combating terrorism, and coordinating their activities is a significant challenge. We identified over 20 departments and agencies as having a role in preparing for or responding to the public health and medical consequences of a bioterrorist attack. Appendix III, which is based on the framework given in the Terrorism Incident Annex of the Federal Response Plan, shows a sample of the coordination efforts by federal departments and agencies with responsibilities for the public health and medical consequences of a bioterrorist attack, as they existed prior to the recent creation of the Office of Homeland Security. This figure illustrates the complex relationships among the many federal departments and agencies involved. 
Departments and agencies use several approaches to coordinate their activities on terrorism, including interagency response plans, work groups, and formal agreements. Interagency plans for responding to a terrorist incident help outline agency responsibilities and identify resources that could be used during a response. For example, the Federal Response Plan provides a broad framework for coordinating the delivery of federal disaster assistance to state and local governments when an emergency overwhelms their ability to respond effectively. The Federal Response Plan also designates primary and supporting federal agencies for a variety of emergency support operations. For example, HHS is the primary agency for coordinating federal assistance in response to public health and medical care needs in an emergency. HHS could receive support from other agencies and organizations, such as DOD, USDA, and FEMA, to assist state and local jurisdictions. Interagency work groups are being used to minimize duplication of funding and effort in federal activities to combat terrorism. For example, the Technical Support Working Group is chartered to coordinate interagency research and development requirements across the federal government in order to prevent duplication of effort between agencies. The Technical Support Working Group, among other projects, helped to identify research needs and fund a project to detect biological agents in food that can be used by both DOD and USDA. Formal agreements between departments and agencies are being used to share resources and knowledge. For example, CDC contracts with the Department of Veterans Affairs (VA) to purchase drugs and medical supplies for the National Pharmaceutical Stockpile because of VA’s purchasing power and ability to negotiate large discounts. Overall coordination of federal programs to combat terrorism is fragmented. 
For example, several agencies have coordination functions, including DOJ, the FBI, FEMA, and the Office of Management and Budget. Officials from a number of the agencies that combat terrorism told us that the coordination roles of these various agencies are not always clear and sometimes overlap, leading to a fragmented approach. We have found that the overall coordination of federal research and development efforts to combat terrorism is still limited by a number of factors, including the compartmentalization or security classification of some research efforts. The Gilmore Panel also concluded that the current coordination structure does not provide for the requisite authority or accountability to impose the discipline necessary among the federal agencies involved. The multiplicity of federal assistance programs requires focus and attention to minimize redundancy of effort. Table 1 shows some of the federal programs providing assistance to state and local governments for emergency planning that would be relevant to responding to a bioterrorist attack. While the programs vary somewhat in their target audiences, the potential redundancy of these federal efforts highlights the need for scrutiny. In our report on combating terrorism, issued on September 20, 2001, we recommended that the President, working closely with the Congress, consolidate some of the activities of DOJ’s OJP under FEMA. We have also recommended that the federal government conduct multidisciplinary and analytically sound threat and risk assessments to define and prioritize requirements and properly focus programs and investments in combating terrorism. Such assessments would be useful in addressing the fragmentation that is evident in the different threat lists of biological agents developed by federal departments and agencies. 
Understanding which biological agents are considered most likely to be used in an act of domestic terrorism is necessary to focus the investment in new technologies, equipment, training, and planning. Several different agencies have or are in the process of developing biological agent threat lists, which differ based on the agencies’ focus. For example, CDC collaborated with law enforcement, intelligence, and defense agencies to develop a critical agent list that focuses on the biological agents that would have the greatest impact on public health. The FBI, the National Institute of Justice, and the Technical Support Working Group are completing a report that lists biological agents that may be more likely to be used by a terrorist group working in the United States that is not sponsored by a foreign government. In addition, an official at USDA’s Animal and Plant Health Inspection Service told us that it uses two lists of agents of concern for a potential bioterrorist attack developed through an international process (although only some of these agents are capable of making both animals and humans sick). According to agency officials, separate threat lists are appropriate because of the different focuses of these agencies. In our view, the existence of competing lists makes the assignment of priorities difficult for state and local officials. Fragmentation has also hindered unity of effort. Officials at the Department of Transportation (DOT) told us that the department has been overlooked in bioterrorism-related planning and policy. DOT officials noted that even though the nation’s transportation centers account for a significant percentage of the nation’s potential terrorist targets, DOT was not part of the founding group of agencies that worked on bioterrorism issues and has not been included in bioterrorism response plans. 
DOT officials also told us that the department is supposed to deliver supplies for FEMA under the Federal Response Plan, but it was not brought into the planning early enough to understand the extent of its responsibilities in the transportation process. The department learned what its responsibilities would be during TOPOFF 2000. In May 2001, the President asked the Vice President to oversee the development of a coordinated national effort dealing with WMDs. At the same time, the President asked the Director of FEMA to establish an Office of National Preparedness to implement the results of the Vice President’s effort that relate to programs within federal agencies that address consequence management resulting from the use of WMDs. The purpose of this effort is to better focus policies and ensure that programs and activities are fully coordinated in support of building the needed preparedness and response capabilities. In addition, on September 20, 2001, the President announced the creation of the Office of Homeland Security to lead, oversee, and coordinate a comprehensive national strategy to protect the country from terrorism and respond to any attacks that may occur. These actions represent potentially significant steps toward improved coordination of federal activities. In a recent report, we listed a number of important characteristics and responsibilities necessary for a single focal point, such as the proposed Office of Homeland Security, to improve coordination and accountability. Nonprofit research organizations, congressionally chartered advisory panels, government documents, and articles in peer-reviewed literature have identified concerns about the preparedness of states and local areas to respond to a bioterrorist attack. 
These concerns include insufficient state and local planning for response to terrorist events, inadequacies in the public health infrastructure, a lack of hospital participation in training on terrorism and emergency response planning, insufficient capacity for treating mass casualties from a terrorist act, and questions regarding the timely availability of medical teams and resources in an emergency. Questions exist regarding how effectively federal programs have prepared state and local governments to respond to terrorism. All 50 states and approximately 255 local jurisdictions have received or are scheduled to receive at least some federal assistance, including training and equipment grants, to help them prepare for a terrorist WMD incident. In 1997, FEMA identified planning and equipment for response to nuclear, biological, and chemical incidents as an area in need of significant improvement at the state level. However, an October 2000 report concluded that even those cities receiving federal aid are still not adequately prepared to respond to a bioterrorist attack. Components of the nation’s infectious disease surveillance system are also not well prepared to detect or respond to a bioterrorist attack. Reductions in public health laboratory staffing and training have affected the ability of state and local authorities to identify biological agents. Even the initial West Nile virus outbreak in 1999, which was relatively small and occurred in an area with one of the nation’s largest local public health agencies, taxed the federal, state, and local laboratory resources. Both the New York State and the CDC laboratories were inundated with requests for tests, and the CDC laboratory handled the bulk of the testing because of the limited capacity at the New York laboratories. Officials indicated that the CDC laboratory would have been unable to respond to another outbreak, had one occurred at the same time. 
In fiscal year 2000, CDC awarded approximately $11 million to 48 states and four major urban health departments to improve and upgrade their surveillance and epidemiological capabilities. Inadequate training and planning for bioterrorism response by hospitals is a major problem. The Gilmore Panel concluded that the level of expertise in recognizing and dealing with a terrorist attack involving a biological or chemical agent is problematic in many hospitals. A recent research report concluded that hospitals need to improve their preparedness for mass casualty incidents. Local officials told us that it has been difficult to get hospitals and medical personnel to participate in local training, planning, and exercises to improve their preparedness. Several federal and local officials reported that there is little excess capacity in the health care system for treating mass casualty patients. Studies have reported that emergency rooms in some areas are routinely filled and unable to accept patients in need of urgent care. According to one local official, the health care system might not be able to handle the aftermath of a disaster because of the problems caused by overcrowding and the lack of excess capacity. Local officials are also concerned about whether the federal government could quickly deliver enough medical teams and resources to help after a biological attack. Agency officials say that federal response teams, such as Disaster Medical Assistance Teams, could be on site within 12 to 24 hours. However, local officials who have deployed with such teams say that the federal assistance probably would not arrive for 24 to 72 hours. Local officials also told us that they were concerned about the time and resources required to prepare and distribute drugs from the National Pharmaceutical Stockpile during an emergency. 
Partially in response to these concerns, CDC has developed training for state and local officials on using the stockpile and will deploy a small staff with the supplies to assist the local jurisdiction with distribution. We found that federal departments and agencies are participating in a variety of research and preparedness activities that are important steps in improving our readiness. Although federal departments and agencies have engaged in a number of efforts to coordinate these activities on a formal and informal basis, we found that coordination between departments and agencies is fragmented, as illustrated by the many and complex relationships between federal departments and agencies shown in Appendix III. In addition, we found concerns about the preparedness of state and local jurisdictions, including the level of state and local planning for response to terrorist events, inadequacies in the public health infrastructure, a lack of hospital participation in training on terrorism and emergency response planning, capabilities for treating mass casualties, and the timely availability of medical teams and resources in an emergency. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-7118. Barbara Chapman, Robert Copeland, Marcia Crosse, Greg Ferrante, Deborah Miller, and Roseanne Price also made key contributions to this statement. We identified the following federal departments and agencies as having responsibilities related to the public health and medical consequences of a bioterrorist attack:

USDA – U.S. Department of Agriculture
APHIS – Animal and Plant Health Inspection Service
ARS – Agricultural Research Service
FSIS – Food Safety Inspection Service
OCPM – Office of Crisis Planning and Management
DOC – Department of Commerce
NIST – National Institute of Standards and Technology
DOD – Department of Defense
DARPA – Defense Advanced Research Projects Agency
JTFCS – Joint Task Force for Civil Support
National Guard
U.S. Army
DOE – Department of Energy
HHS – Department of Health and Human Services
AHRQ – Agency for Healthcare Research and Quality
CDC – Centers for Disease Control and Prevention
FDA – Food and Drug Administration
NIH – National Institutes of Health
OEP – Office of Emergency Preparedness
DOJ – Department of Justice
FBI – Federal Bureau of Investigation
OJP – Office of Justice Programs
DOT – Department of Transportation
USCG – U.S. Coast Guard
Treasury – Department of the Treasury
USSS – U.S. Secret Service
VA – Department of Veterans Affairs
EPA – Environmental Protection Agency
FEMA – Federal Emergency Management Agency

Figure 1, which is based on the framework given in the Terrorism Incident Annex of the Federal Response Plan, shows a sample of the coordination activities by these federal departments and agencies, as they existed prior to the recent creation of the Office of Homeland Security. This figure illustrates the complex relationships among the many federal departments and agencies involved. The following coordination activities are represented on the figure:

OMB Oversight of Terrorism Funding. The Office of Management and Budget established a reporting system on the budgeting and expenditure of funds to combat terrorism, with goals to reduce overlap and improve coordination as part of the annual budget cycle.

Federal Response Plan – Health and Medical Services Annex. 
This annex in the Federal Response Plan states that HHS is the primary agency for coordinating federal assistance to supplement state and local resources in response to public health and medical care needs in an emergency, including a bioterrorist attack.

Informal Working Group – Equipment Request Review. This group meets as necessary to review equipment requests of state and local jurisdictions to ensure that duplicative funding is not being given for the same activities.

Agreement on Tracking Diseases in Animals That Can Be Transmitted to Humans. This group is negotiating an agreement to share information and expertise on tracking diseases that can be transmitted from animals to people and could be used in a bioterrorist attack.

National Medical Response Team Caches. These caches form a stockpile of drugs for OEP’s National Medical Response Teams.

Domestic Preparedness Program. This program was formed in response to the National Defense Authorization Act of Fiscal Year 1997 (P.L. 104-201), which required DOD to enhance the capability of federal, state, and local emergency responders regarding terrorist incidents involving WMDs and high-yield explosives. As of October 1, 2000, DOD and DOJ share responsibilities under this program.

Office of National Preparedness – Consequence Management of WMD Attack. In May 2001, the President asked the Director of FEMA to establish this office to coordinate activities of the listed agencies that address consequence management resulting from the use of WMDs.

Food Safety Surveillance Systems. These systems are FoodNet and PulseNet, two surveillance systems for identifying and characterizing contaminated food.

National Disaster Medical System. This system, a partnership between federal agencies, state and local governments, and the private sector, is intended to ensure that resources are available to provide medical services following a disaster that overwhelms the local health care resources. 
Collaborative Funding of Smallpox Research. These agencies conduct research on vaccines for smallpox.

National Pharmaceutical Stockpile Program. This program maintains repositories of life-saving pharmaceuticals, antidotes, and medical supplies that can be delivered to the site of a biological (or other) attack.

National Response Teams. The teams constitute a national planning, policy, and coordinating body to provide guidance before and assistance during an incident.

Interagency Group for Equipment Standards. This group develops and maintains a standardized equipment list of essential items for responding to a terrorist WMD attack. (The complete name of this group is the Interagency Board for Equipment Standardization and Interoperability.)

Force Packages Response Team. This is a grouping of military units that are designated to respond to an incident.

Cooperative Work on Rapid Detection of Biological Agents in Animals, Plants, and Food. This cooperative group is developing a system to improve on-site rapid detection of biological agents in animals, plants, and food.

Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, Sept. 28, 2001).
Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001).
Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Terrorism Preparedness (GAO-01-555T, May 9, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-666T, May 1, 2001).
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001).
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, Mar. 27, 2001).
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001). 
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000).
West Nile Virus Outbreak: Lessons for Public Health Preparedness (GAO/HEHS-00-180, Sept. 11, 2000).
Combating Terrorism: Linking Threats to Strategies and Resources (GAO/T-NSIAD-00-218, July 26, 2000).
Chemical and Biological Defense: Observations on Nonmedical Chemical and Biological R&D Programs (GAO/T-NSIAD-00-130, Mar. 22, 2000).
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, Mar. 21, 2000).
Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed (GAO/T-HEHS/AIMD-00-59, Mar. 8, 2000).
This testimony discusses the efforts of federal agencies to prepare for the consequences of a bioterrorist attack. GAO found that federal agencies are participating in research and preparedness activities, from improving the detection of biological agents to developing a national stockpile of pharmaceuticals to treat victims of disasters. Federal agencies also have several efforts underway to coordinate these activities on a formal and informal basis, such as interagency work groups. Despite these efforts, however, coordination between agencies remains fragmented. GAO also found emerging concerns about the preparedness of state and local jurisdictions, including insufficient state and local planning for response to terrorist events, inadequate public health infrastructure, a lack of hospital participation in training on terrorism and emergency response planning, insufficient capabilities for treating mass casualties, and questions about the timely availability of medical teams and resources in an emergency. This testimony summarizes a September 2001 report (GAO-01-915).
In 1990, the Congress enacted the Global Change Research Act. This act, among other things, required the administration to (1) prepare and at least every 3 years revise and submit to the Congress a national global change research plan, including an estimate of federal funding for global change research activities to be conducted under the plan; (2) in each annual budget submission to the Congress, identify the items in each agency’s budget that are elements of the United States Global Change Research Program (USGCRP), an interagency long-term climate change science research program; and (3) report annually on climate change “expenditures required” for the USGCRP. In 1992, the United States signed and ratified the United Nations Framework Convention on Climate Change, which was intended to stabilize the buildup of greenhouse gases in the earth’s atmosphere, but did not impose binding limits on emissions. In response to the requirements of the 1990 act, the administration reported annually from 1990 through 2004 on funding for climate change science in reports titled Our Changing Planet. From 1990 through 2001, the reports presented detailed science funding data for the USGCRP. Federal climate change science programs were reorganized in 2001 and 2002. In 2001, the Climate Change Research Initiative (CCRI) was created to coordinate short-term climate change research focused on reducing uncertainty, and in 2002, the Climate Change Science Program (CCSP) was created to coordinate and integrate USGCRP and CCRI activities. CCSP is a collaborative interagency program designed to improve the governmentwide management of climate science and research. Since 2002, CCSP has been responsible for meeting the reporting requirement and has published the Our Changing Planet reports. The most recent report in this series was published in November 2005. The Climate Change Technology Program (CCTP) is a multiagency technology research and development coordinating structure similar to CCSP. 
Its overall goal is to attain, on a global scale and in partnership with other entities, a technological capability that can provide abundant, clean, secure, and affordable energy and related services needed to encourage and sustain economic growth, while achieving substantial reductions in emissions of greenhouse gases and mitigating the risks of potential climate change. In March 1998, OMB, in response to a congressional requirement for a detailed account of climate change expenditures and obligations, issued a brief report summarizing federal agency programs related to global climate change. OMB produced another climate change expenditures report in March 1999 and, in response to a request at a 1999 hearing, OMB provided climate change funding data for 1993 through 1998 for the hearing record. Each year since 1999, the Congress has included a provision in annual appropriations laws requiring OMB to report in detail all federal agency obligations and expenditures, domestic and international, for climate change programs and activities. As a result of these reporting requirements, OMB annually publishes the Federal Climate Change Expenditures Report to Congress, which presents federal climate change funding for the technology, science, and international assistance categories, and tax expenditures. The climate change activities and associated costs presented in OMB reports must be identified by line item as presented in the President’s budget appendix. OMB has interpreted this to mean that the data in the reports must be shown by budget account. For the last 3 years for which we reviewed data, the Congress had required that the administration produce reports for climate change expenditures and obligations for the current fiscal year within 45 days after the submission of the President’s budget request for the upcoming fiscal year. OMB’s most recent report was released in April 2006. OMB reports include a wide range of federal climate-related programs and activities. 
Some activities, like scientific research on global environmental change by USGCRP, are explicitly climate change programs, whereas others, such as many technology initiatives, are not solely for climate change purposes. For example, OMB reports included some programs that were started after the United States ratified the Framework Convention in 1992 and were specifically designed to encourage businesses and others to reduce their greenhouse gas emissions, for example, by installing more efficient lighting. OMB reports also included programs that were expanded or initiated in the wake of the 1973 oil embargo to support such activities as energy conservation (to use energy more efficiently), renewable energy (to substitute for fossil fuels), and fossil energy (to make more efficient use of fossil fuels), all of which can help to reduce greenhouse gas emissions, but were not initially developed as climate change programs. Federal climate change funding, as reported by OMB, increased from $2.35 billion in 1993 to $5.09 billion in 2004 (116 percent), or from $3.28 billion to $5.09 billion (55 percent) after adjusting for inflation. Funding also increased for technology, science, and international assistance between 1993 and 2004, as shown in table 1. However, changes in reporting methods have limited the comparability of funding data over time; therefore it is unclear whether funding increased as much as reported by OMB. OMB did not report estimates for existing climate-related tax expenditures during this period, although climate-related tax expenditures amounted to hundreds of millions of dollars in revenue forgone by the federal government in fiscal year 2004. 
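Because the comparability caveat turns on these percentage figures, it may help to see the underlying arithmetic. The sketch below simply recomputes the reported increases from the dollar totals quoted above; the small gap between the computed 116.6 percent and the reported 116 percent reflects rounding in the published figures.

```python
def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100.0

# OMB-reported totals, in billions of dollars
nominal_1993, nominal_2004 = 2.35, 5.09  # current dollars
real_1993 = 3.28                         # 1993 total restated in constant dollars

print(f"Nominal increase: {pct_increase(nominal_1993, nominal_2004):.1f}%")  # 116.6% (reported as 116%)
print(f"Real increase:    {pct_increase(real_1993, nominal_2004):.1f}%")     # 55.2% (reported as 55%)
```

The same arithmetic reproduces the technology-category figures cited below: $845 million to $2.87 billion is a 239.6 percent nominal increase, and $1.18 billion to $2.87 billion is a 143.2 percent increase in constant dollars.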
OMB officials told us that changes in reporting methods were due to such reasons as the short amount of time available to prepare the report, the fact that the reporting requirement is not permanent law, but appears each year in their appropriations legislation, and changes in administration policy and priorities. As a result of our recommendations, however, OMB made changes in its report on climate change funding for fiscal year 2007, which was published in April 2006. For example, OMB more clearly labeled data throughout the report and added information on existing tax provisions that can contribute to reducing greenhouse gas emissions. From 1993 through 2004, technology funding increased as a share of total federal climate funding from 36 percent to 56 percent, as reported by OMB. Over this period, technology funding increased from $845 million to $2.87 billion (239 percent), or adjusted for inflation, from $1.18 billion to $2.87 billion (143 percent). For example, funding for energy conservation increased from $346 million to $868 million, and funding for renewable energy increased from $249 million to $352 million. Table 2 presents funding data for selected years for the seven largest accounts, which accounted for 92 percent of technology funding in 2004. We identified three ways that the data on technology funding presented in three of OMB’s recent reports may not be comparable to the data presented in previous reports. First, OMB added accounts that were not previously presented. For example, OMB reported that NASA had $152 million in funding for technology-related activities, which included research to reduce emissions associated with aircraft operations in 2003. OMB did not report this account in the technology category in 2002. In addition, OMB included and removed some accounts, without explanation, from reports in years other than 2003. 
For example, OMB reported combined funding of $195 million in 1999 and $200 million in 2000 for bio-based products and bio-energy at the Departments of Energy and of Agriculture. No funding for these accounts was reported from 1993 through 1998 or from 2001 through 2004. In each of these cases, OMB did not explain whether the new accounts reflected the creation of new programs, a decision to count an existing program for the first time, or a decision to reclassify funding from different categories as technology funding. According to OMB officials, these changes in report structure and content for technology funding, as well as similar changes in science and international assistance funding, were the result of time constraints and other factors. They told us that the short timeline required by the Congress for completing the report (within 45 days of submitting the upcoming year’s budget) limited OMB’s ability to analyze data submitted by agencies. They said that they must rely on funding estimates quickly developed by agencies in order to produce the report within the specified timeframe, and that the reports are often compilations of agency activities and programs, some of which may or may not have been presented separately in prior years. Moreover, these officials told us that the presentation of data has changed over time for a variety of reasons other than short time limits, including changes in administration priorities and policy, changes in congressional direction, changes to budget and account structures, and attempts to more accurately reflect the reporting requirement as specified in the annual appropriations language. The officials also stated that in each report they ensured consistency for the 3 years covered (prior year, current year, and budget year). 
Furthermore, OMB officials told us that the presentation of new accounts in the technology category, as well as the international assistance category, was due to the establishment of new programs and the inclusion of existing programs. They told us that the account-by-account display in the reports has been changed over time as the CCSP and the Climate Change Technology Program (CCTP), a multiagency technology research and development coordinating structure similar to the CCSP, have become better defined. Second, OMB reported that it expanded the definitions of some accounts to include more activities but did not specify how the definitions were changed. We found that over 50 percent of the increase in technology funding from 2002 to 2003 was due to increases in two existing DOE accounts: nuclear energy supply and science (fusion, sequestration, and hydrogen). OMB reported funding of $32 million in 2002 and $257 million in 2003, for the nuclear energy supply account and reported funding of $35 million in 2002, and $298 million in 2003, for the science (fusion, sequestration, and hydrogen) account. Although OMB stated in its May 2004 report that 2003 funding data included more activities within certain accounts, including the research and development of nuclear and fusion energy, the report was unclear about whether the funding increases for these two existing accounts were due to the addition of more programs to the accounts or increased funding for existing programs already counted in the accounts. Finally, if new programs were counted in these accounts, OMB did not specify what programs were added and why. OMB officials told us that the definitions of some accounts were changed to include more nuclear programs because, while the prior administration did not consider nuclear programs to be part of its activities relating to climate change, the current administration does consider them to be a key part of the CCTP. 
Third, OMB did not maintain the distinction that it had made in previous reports between funding for programs whose primary focus is climate change and programs where climate change is not the primary focus. As a result, certain accounts in the technology category were consolidated into larger accounts. From 1993 through 2001, OMB presented funding data as directly or indirectly related to climate change. The former programs are those for which climate change is a primary purpose, such as renewable energy research and development. The latter are programs that have another primary purpose, but which also support climate change goals. For example, grants to help low-income people weatherize their dwellings are intended primarily to reduce heating costs, but may also help reduce the consumption of fossil fuels. OMB did not maintain the distinction between the two kinds of programs for 2002, 2003, and 2004 funding data. For example, OMB presented energy conservation funding of $810 million in 2001, including $619 million in direct research and development funding, and $191 million in indirect funding for weatherization and state energy grants. In contrast, 2002 funding data presented by OMB reflected energy conservation funding of $897 million, including $622 million in research and development, $230 million for weatherization, and $45 million for state energy grants, but did not distinguish between direct and indirect funding. OMB presented energy conservation funding of $880 million in 2003 and $868 million in 2004 as single accounts without any additional detail. OMB officials stated that they had adopted a different approach to reporting climate change funding to reflect the new program structures as the CCSP and CCTP were being established. They stated that the result was, in some cases, an aggregation of activities that may have previously been reported on separate accounts. 
According to the officials, the 2003 and 2004 data more accurately reflect the range of climate change-related programs as they are now organized. OMB included a crosswalk in its May 2004 report that showed 2003 funding levels as they would have been presented using the methodology of previous reports. While the crosswalk identified funding for accounts that were presented in previous reports, it did not identify new funding reported by OMB or specify whether such funding was the result of counting new programs, a decision to start counting existing programs as climate change-related, or shifts between categories. OMB officials told us that the reporting methodology has changed since the initial reports and that it may be difficult to resolve the differences because of changes in budget and account structure. Finally, they noted that each report has been prepared in response to a one-time requirement and that there has been no requirement for a consistent reporting format from one year to the next or for explaining differences in methodology from one report to another. However, in its fiscal year 2007 report to the Congress, OMB responded to our recommendations by labeling the data more clearly and footnoting changes in its reporting. According to both OMB and CCSP, the share of total climate change funding devoted to science decreased from 56 percent in 1993 to 39 percent in 2004, even though science funding increased from $1.31 billion to $1.98 billion (51 percent), or from $1.82 billion to $1.98 billion (9 percent) after adjusting for inflation. For example, according to OMB, funding for NASA activities such as the satellite measurement of atmospheric ozone concentrations increased from $888 million to $1.26 billion. OMB reported new science funding for 2003 and 2004 to reflect the creation of CCRI.
Funding for CCRI increased from $41 million in 2003, the first year funding for CCRI was presented, to $173 million in 2004, and included funding by most of the agencies presented in table 3. We present funding for CCRI as a separate program to illustrate the new organization’s role in increasing reported climate change funding. Table 3 presents funding as reported by OMB for the eight largest agencies and programs in the science category, which accounted for 99 percent of the science total for 2004. Science funding data from 1993 through 2004, as reported by OMB and CCSP, were generally comparable, although there were more discrepancies in earlier years than in later years. Science funding totals reported by CCSP from 1993 through 1997 were within 3 percent of the OMB totals for all years except 1996 and 1997. Science funding totals reported by CCSP in 1996 and 1997 were $156 million (9 percent) and $162 million (10 percent) higher than those reported by OMB. Over 90 percent of the difference for those years occurred because CCSP reported greater funding for NASA than OMB reported. CCSP stated in its fiscal year 1998 report that it increased its 1996 and 1997 budget figures to reflect the reclassification of certain programs and activities in some agencies that were not previously included in the science funding total. Total science funding reported by OMB and CCSP from 1998 through 2004 was identical for 4 of the 7 years. The largest difference for the 3 years that were not identical was $8 million in 2001, which represented less than 1 percent of the science funding total reported by OMB for that year. The other differences in total science funding were $3 million in 2002, and $1 million in 1999, and each represented less than 1 percent of the OMB science total for those years. 
Science funding by agency, as presented by OMB and CCSP from 1993 through 1997, differed in many cases, with the exception of funding for the National Science Foundation (NSF), which was nearly identical over that time period. For example, CCSP reported $143 million more funding for NASA in 1996 than OMB reported, and OMB reported $24.9 million more funding for DOE in 1994 than CCSP reported. The greatest dollar difference related to NASA’s funding in 1997. Whereas OMB reported funding of $1.22 billion, CCSP reported funding of $1.37 billion—$151 million, or 12 percent more than the OMB amount. The greatest percentage difference related to the Department of the Interior’s funding in 1993. Whereas OMB reported funding of $22 million, CCSP reported funding of $37.7 million—$15.7 million, or 71 percent more than reported by OMB. Further, from 1993 through 1997, OMB did not report science funding for some agencies that CCSP reported. For example, CCSP reported that DOD’s funding ranged from $5.7 million to $6.6 million from 1993 through 1995, and that the Tennessee Valley Authority received funding of $1 million or less per year from 1993 through 1997, but OMB did not report any such funding. OMB officials told us that data used for the 1993 to 1997 science funding comparison with CCSP were collected too long ago for them to identify the differences. However, they stated that the data from early years were produced in a very short period for use in testimony or questions for the record. According to OMB, this quick turnaround did not allow time for a thorough consistency check with other data sources. From 1998 through 2004, OMB and CCSP data on funding by agency were nearly identical. Both OMB and CCSP reported science funding for nine agencies over the entire 7-year period, for a total of 63 agency funding amounts. Of these, 52, or 83 percent, matched exactly.
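The kind of side-by-side reconciliation described above can be sketched as a simple comparison of two funding tables. In the example below, the NASA figures are the 1997 amounts quoted in the text; the DOE and NSF entries are hypothetical placeholders added only so the sketch has more than one agency.

```python
# Funding totals in millions of dollars. NASA figures are from the text above;
# DOE and NSF entries are hypothetical placeholders.
omb_1997  = {"NASA": 1220, "DOE": 300, "NSF": 170}
ccsp_1997 = {"NASA": 1371, "DOE": 300, "NSF": 170}

def reconcile(a, b):
    """Return (agency, dollar difference, percent difference relative to a)
    for every agency where the two sources disagree."""
    diffs = []
    for agency in sorted(set(a) | set(b)):
        x, y = a.get(agency, 0), b.get(agency, 0)
        if x != y:
            pct = round((y - x) / x * 100) if x else None
            diffs.append((agency, y - x, pct))
    return diffs

print(reconcile(omb_1997, ccsp_1997))  # [('NASA', 151, 12)]
```

This reproduces the $151 million (12 percent) NASA discrepancy for 1997 noted above.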
Of the 11 differences, there was one difference of $8 million, one of $2 million, and nine of $1 million or less. The greatest difference from 1998 through 2004 was $8 million in funding for the Department of Commerce in 2001, which was 9 percent of the Department of Commerce total, or less than 1 percent of total science funding as reported by OMB for that year. The director of CCSP told us that changes to reports, such as the creation and deletion of different categorization methods, were made because CCSP is moving toward a goals-oriented budget, and that categorization methods changed as the program evolved. The director also said that future reports will explicitly present budget data as they were reported in prior reports to retain continuity, even if new methods are introduced. Another CCSP official told us that CCSP now works with OMB to ensure that consistent funding information is presented in Our Changing Planet reports and OMB reports, and that, beginning with the fiscal year 2006 report (which was published in late 2005), CCSP would attempt to explain when and why changes are made to reporting methods. In its fiscal year 2006 report, CCSP did explain changes to its reporting. From 1993 through 2004, international assistance funding decreased from 9 percent to 5 percent of total federal funding on climate change, as reported by OMB. Over the same time period, international assistance funding increased from $201 million to $252 million (an increase of 25 percent), but after adjusting for inflation, decreased from $280 million to $252 million (a decrease of 10 percent). For example, reported funding for the Department of the Treasury to help developing countries invest in energy efficiency, renewable energy, and the development of clean energy technologies, such as fuel cells, increased from zero in 1993 to $32 million in 2004. Table 4 presents funding as reported by OMB for the three largest accounts in the international assistance category.
International assistance funding reported by OMB was generally comparable over time, although some new accounts were added without explanation. In its reports, OMB did not provide an explanation of whether such new accounts reflected the creation of new programs or a decision to count existing programs as climate change-related for the first time. OMB officials told us that the presentation of new accounts in the international assistance category was due to the establishment of new programs and the inclusion of existing programs. They told us that the account-by-account display in the reports has been changed over time as climate change programs have become better defined. Although not required to provide information on tax expenditures related to climate change, OMB reported certain information related to climate- related tax expenditures for each year. Specifically, it listed proposed climate-related tax expenditures appearing in the President’s budget, but it did not report revenue loss estimates for existing climate-related tax expenditures from 1993 through 2004. Based on the Department of the Treasury’s tax expenditure list published in the 2006 budget, we identified four existing tax expenditures that have purposes similar to programs reported by OMB in its climate change reports. In 2004, estimated revenue losses amounted to hundreds of millions of dollars for the following tax expenditures: $330 million in revenue losses was estimated for new technology tax credits to reduce the cost of generating electricity from renewable resources. A credit of 10 percent was available for investment in solar and geothermal energy facilities. In addition, a credit of 1.5 cents was available per kilowatt hour of electricity produced from renewable resources such as biomass, poultry waste, and wind facilities. 
$100 million in revenue losses was estimated for excluded interest on energy facility bonds to reduce the cost of investing in certain hydroelectric and solid waste disposal facilities. The interest earned on state and local bonds used to finance the construction of certain hydroelectric generating facilities was tax exempt. Some solid waste disposal facilities that produced electricity also qualified for this exemption. $100 million in revenue losses was estimated for excluded income from conservation subsidies provided by public utilities to reduce the cost of purchasing energy-efficient technologies. Residential utility customers could exclude from their taxable income energy conservation subsidies provided by public utilities. Customers could exclude subsidies used for installing or modifying certain equipment that reduced energy consumption or improved the management of energy demand. $70 million in revenue losses was estimated for tax incentives for the purchase of clean fueled vehicles to reduce automobile emissions. A tax credit of 10 percent, not to exceed $4,000, was available to purchasers of electric vehicles. Purchasers of vehicles powered by compressed natural gas, hydrogen, alcohol, and other clean fuels could deduct up to $50,000 of the vehicle purchase costs from their taxable income, depending upon the weight and cost of the vehicle. Similarly, owners of refueling properties could deduct up to $100,000 for the purchase of re-fueling equipment for clean fueled vehicles. OMB officials said that they consistently reported proposed tax expenditures where a key purpose was specifically to reduce greenhouse gas emissions. They also stated that they did not include existing tax expenditures that may have greenhouse gas benefits but were enacted for other purposes, and that the Congress had provided no guidance to suggest additional tax expenditure data should be included in the annual reports. 
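The cap structure of the vehicle incentives summarized above reduces to simple arithmetic. The sketch below illustrates the caps as described in the text; it is an illustration, not actual tax code logic, and it omits details such as the weight-based limits on the clean-fuel deduction.

```python
def electric_vehicle_credit(price):
    """10 percent credit, not to exceed $4,000 (as summarized in the text)."""
    return min(0.10 * price, 4000)

def clean_fuel_deduction(cost):
    """Deduction of up to $50,000 of the vehicle purchase cost. The actual
    limit also depended on vehicle weight, which this sketch ignores."""
    return min(cost, 50000)

print(electric_vehicle_credit(30000))  # 3000.0 -- cap not binding
print(electric_vehicle_credit(55000))  # 4000   -- cap binds above $40,000
print(clean_fuel_deduction(62000))     # 50000
```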
OMB’s decision criteria for determining which tax expenditures to include differed in two key respects from its criteria for determining which accounts to include. First, OMB presented funding for existing as well as proposed accounts, but presented information only on proposed, but not existing, tax expenditures. Second, OMB presented funding for programs where a key purpose was specifically to reduce greenhouse gas emissions, as well as for programs that may have greenhouse gas benefits but were enacted for other purposes. However, OMB presented information only on proposed tax expenditures where a key purpose was specifically to reduce greenhouse gas emissions. In response to GAO’s recommendation to report existing climate-related tax expenditures, OMB’s fiscal year 2007 report to the Congress includes existing tax expenditures that contribute to reducing global warming. OMB reported that 12 of the 14 agencies that received funding for climate change programs in 2004 received more funding in that year than they had in 1993. However, it is unclear whether funding changed as much as reported by OMB because unexplained modifications in the reports’ contents limit the comparability of agencies’ funding data. From 1993 through 2004, climate change funding for DOE increased more than any other agency, from $963 million to $2.52 billion, for an increase of $1.56 billion (162 percent). Adjusted for inflation, such funding increased from $1.34 billion to $2.52 billion, for an increase of $1.18 billion (88 percent). The second largest increase in agency funding was for NASA, which received a $660 million (74 percent) increase in funding over the same time period. NASA’s funding increased $310 million (25 percent) over this period after adjusting for inflation. The funding increases for these two agencies accounted for 81 percent of the reported total increase in federal climate change funding from 1993 through 2004. 
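The 81 percent share quoted above can be reproduced from the nominal category totals reported elsewhere in this statement (technology: $845 million to $2.87 billion; science: $1.31 billion to $1.98 billion; international assistance: $201 million to $252 million). The sketch below uses those rounded figures, so it is an approximation of OMB's computation rather than the exact reported arithmetic.

```python
# Nominal funding (millions of dollars) as reported by OMB for 1993 and 2004.
totals_1993 = {"technology": 845, "science": 1310, "international": 201}
totals_2004 = {"technology": 2870, "science": 1980, "international": 252}

total_increase = sum(totals_2004.values()) - sum(totals_1993.values())
doe_increase = 2520 - 963   # DOE, 1993 -> 2004, millions
nasa_increase = 660         # NASA increase, as reported

share = round((doe_increase + nasa_increase) / total_increase * 100)
print(share)  # 81 -- DOE and NASA's share of the total reported increase
```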
Conversely, USAID experienced the largest decrease in funding—from $200 million in 1993 to $195 million in 2004 (3 percent), or, in inflation-adjusted terms, from $279 million to $195 million (30 percent). Table 5 shows OMB’s reports on climate change funding by agency for selected years. Unexplained changes in the content of OMB reports make it difficult to determine whether funding changed as much as was reported by OMB. Because agency funding totals are composed of individual accounts, the changes in the reports’ contents discussed earlier, such as the unexplained addition of accounts to the technology category, limit the comparability of agencies’ funding data over time. For example, OMB reported Army, Navy, Air Force, and Defense-wide funding totaling $83 million in 2003, and $51 million in 2004, in accounts titled Research, Development, Test, and Evaluation, but did not report these accounts for prior years. OMB did not explain whether these accounts reflected the creation of new programs or a decision to count existing programs for the first time. OMB officials told us that agencies can be included in reports for the first time when new initiatives or programs are started, such as the CCTP. In some cases, those initiatives or programs are made up of entirely new funding but in other cases they may be additions on top of a small amount of base funding. These officials told us that agencies sometimes include data that were not previously reported when they requested funding for those initiatives, but they assured us that the data are reported consistently for the 3 years presented in each report. The federal budget process is complex, and there are numerous steps that culminate in the outlay of federal funds. Among the key steps in this process are the following, as defined by OMB: Budget authority means the authority provided in law to incur financial obligations that will result in outlays. 
Obligations are binding agreements that will result in outlays, immediately or in the future. Expenditures are payments to liquidate an obligation. The Congress, in the Congressional Budget and Impoundment Control Act of 1974, as amended, has defined outlays as the expenditures and net lending of funds under budget authority. In simplified terms, budget authority precedes obligations, which precede outlays in the process of spending federal funds. As noted above, since 1999, the Congress has required the President to submit a report each year to the Senate and House Committees on Appropriations describing in detail all federal agency obligations and expenditures, domestic and international, for climate change programs and activities. In response, OMB had annually published the Federal Climate Change Expenditures Report to Congress, which presented budget authority information in summary data tables rather than the obligations and expenditures that the report’s title and table titles suggested. The only indication that the table presented budget authority information, rather than expenditures, was a parenthetical statement to that effect in a significantly smaller font. OMB officials told us that the term “expenditures” was used in the report title and text because that was the term used most often in the legislative language. They also said that the reports presented data in terms of budget authority because OMB had always interpreted the bill and report language to request the budget authority levels for each activity in a particular year. They stated further that, from a technical budget standpoint, expenditures are usually synonymous with outlays, and that one way to think of budget authority is that it is the level of expenditures (over a period of 1 or more years) that is made available in a particular appropriations bill.
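The sequencing described above (budget authority, then obligations, then outlays) can be modeled as a toy sketch. The class and amounts below are hypothetical illustrations of the ordering constraints only; real federal accounting is far more complex.

```python
class Account:
    """Toy model of the spending pipeline: outlays cannot exceed obligations,
    which cannot exceed budget authority."""

    def __init__(self, budget_authority):
        self.budget_authority = budget_authority
        self.obligated = 0
        self.outlaid = 0

    def obligate(self, amount):
        # A binding agreement may not exceed the authority provided in law.
        if self.obligated + amount > self.budget_authority:
            raise ValueError("obligations cannot exceed budget authority")
        self.obligated += amount

    def outlay(self, amount):
        # Payments liquidate obligations, so they may not exceed them.
        if self.outlaid + amount > self.obligated:
            raise ValueError("outlays cannot exceed obligations")
        self.outlaid += amount

acct = Account(budget_authority=100)
acct.obligate(60)  # binding agreement signed
acct.outlay(40)    # payments lag obligations, often across fiscal years
print(acct.budget_authority, acct.obligated, acct.outlaid)  # 100 60 40
```

The sketch also illustrates why budget authority reported for a year is not the same number as that year's expenditures: authority is granted first, and payments trail behind.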
OMB viewed this as an appropriate interpretation of the congressional requirements since the committees on appropriations work with budget authority and not outlays. Moreover, OMB told us that these committees had never objected to its interpretation of “obligations and expenditures” as budget authority and that OMB had always identified the data provided in the table as budget authority. In our August 2005 report, we expressed several concerns with OMB’s approach. First, OMB’s approach of reporting budget authority did not comply with the language of the annual legal requirements to report on climate change “obligations and expenditures.” Second, in reviewing the legislative history of these reporting requirements, we found no support for OMB’s interpretation that when the Congress called for “obligations and expenditures” information, it actually meant “budget authority” information. Third, OMB’s interpretation was not consistent with its own Circular A-11, which defines budget authority as stated above, not actual obligations and expenditures. Nonetheless, we recognize that it is not possible for OMB to meet the most recent reporting requirements because it must provide a report on climate change obligations and expenditures for the current fiscal year within 45 days of submitting the President’s budget for the following fiscal year (which must be submitted the first Monday of February). For example, the President submitted the fiscal year 2006 budget on February 7, 2005, so OMB’s report on fiscal year 2005 climate change expenditures and obligations had to be submitted in March 2005—approximately halfway through the 2005 fiscal year. However, complete expenditures data are available only after the end of each fiscal year. Thus, OMB could not meet both the timing requirement and report all actual expenditures and obligations in fiscal year 2005. CCSP has also reported budget authority data in its Our Changing Planet reports. 
As noted above, CCSP, or its predecessor organization, initially was required to report annually on certain climate change “amounts spent,” “amounts expected to be spent,” and “amounts requested,” but this reporting requirement was terminated in 2000. Currently, CCSP is responsible for reporting information relating to the federal budget and federal funding for climate change science, not climate change expenditure information. Since 2000, CCSP has fulfilled these reporting requirements by providing budget authority information in its Our Changing Planet reports. In conclusion, we found that the lack of clarity in OMB’s and CCSP’s reports made it difficult to comprehensively understand the federal government’s climate change expenditures. A better understanding of these expenditures is needed before it is possible to assess CCSP’s and other federal agencies’ progress towards their climate change goals. We therefore made seven recommendations to OMB and three to CCSP to clarify how they present climate change funding information. OMB agreed with most of our recommendations and has also implemented several of them. CCSP agreed with all of our recommendations and has implemented our recommendation about explaining changes in report content or format. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any question you or other Members of the Committee may have. For further information regarding this testimony, please contact me at (202) 512-3841. John Healey, Anne K. Johnson, and Vincent P. Price made key contributions to this testimony. Richard Johnson, Carol Kolarik, Carol Herrnstadt Shulman, and Anne Stevens also made important contributions. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Congress has required annual reports on federal climate change spending. The Office of Management and Budget (OMB) reports funding for: technology (to reduce greenhouse gas emissions), science (to better understand the climate), international assistance (to help developing countries), and tax expenditures (to encourage emissions reduction). The Climate Change Science Program (CCSP), which coordinates many agencies' activities, also reports on science funding. This testimony is based on GAO's August 2005 report Climate Change: Federal Reports on Climate Change Should Be Clearer and More Complete (GAO-05-461). GAO examined federal climate change funding for 1993 through 2004, including (1) how total funding and funding by category changed and whether funding data are comparable over time and (2) how funding by individual agencies changed and whether funding data are comparable over time. According to OMB, from 1993 to 2004, federal funding for climate change increased from $3.3 billion to $5.1 billion (55 percent) after adjusting for inflation. During this period, reported inflation-adjusted funding increased for technology and science, but decreased for international assistance. However, it is unclear whether funding changed as much as reported because changes in the format and content of OMB and CCSP reports make it difficult to compare funding data over time. For example, over time, OMB expanded the definitions of some accounts to include more activities, but did not specify how it changed the definitions. OMB officials stated that OMB is not required to follow a consistent reporting format from year to year. Further, CCSP's science funding reports were difficult to compare over time because CCSP introduced new methods for categorizing funding without explaining how they related to previous methods. The Director of CCSP said that its reports changed as the program evolved.
These and other limitations make it difficult to determine actual changes in climate change funding. Similarly, OMB reported that 12 of the 14 agencies that funded climate change programs in 2004 increased such funding between 1993 and 2004, but unexplained changes in the reports' contents limit the comparability of data on funding by agency. For example, reported funding for the Department of Energy (DOE), the agency with the most reported climate-related funding in 2004, increased from $1.34 billion to $2.52 billion (88 percent) after adjusting for inflation. DOE and the National Aeronautics and Space Administration accounted for 81 percent of the reported increase in funding from 1993 through 2004. However, because agency funding totals are composed of individual accounts, changes in the reports' contents, such as the unexplained addition of accounts to the technology category, make it difficult to compare agencies' funding data over time and, therefore, to determine if this is a real or a definitional increase. Furthermore, GAO found that OMB reported funding for certain agencies in some years but not in others, without explanation. OMB told GAO that it relied on agency budget offices to submit accurate data. These data and reporting limitations make determining agencies' actual levels of climate change funding difficult.
We found that the agencies responsible for rebuilding Iraq generally complied with applicable requirements governing competition when awarding new reconstruction contracts in fiscal year 2003. While the Competition in Contracting Act of 1984 requires that federal contracts be awarded on the basis of full and open competition, the law and implementing regulations recognize that there may be circumstances under which full and open competition would be impracticable, such as when contracts need to be awarded quickly to respond to unforeseen and urgent needs or when there is only one source for the required product or service. In such cases, agencies are given authority by law to award contracts under limited competition or on a sole-source basis, provided that the proposed actions are appropriately justified and approved. We reviewed 14 new contracts that were awarded in fiscal year 2003 using other than full and open competition: a total of 5 sole-source contracts awarded by the Army Corps of Engineers, the Army Field Support Command, and USAID; and 9 limited competition contracts awarded by the Department of State, the Army Contracting Agency, and USAID. For 13 of these new contracts, agency officials adequately justified their decisions and complied with the statutory and regulatory competition requirements. For example, USAID officials awarded seven contracts under limited competition and two sole-source contracts citing an exception to the competition requirements that was provided for under the Federal Property and Administrative Services Act. USAID concluded that the use of standard competitive procedures would not enable it to put in place foreign aid programs and activities for Iraq in a timely manner. We found that USAID’s justification and approval documentation supporting the award of these contracts complied with applicable requirements.
As I will shortly discuss in more detail, we also found that the Army Corps of Engineers properly justified the award of a sole-source contract to restore Iraq’s oil infrastructure. In one case, however, the Department of State justified and approved the use of limited competition under a unique authority that, in our opinion, may not be a recognized exception to the competition requirements. At the same time, State took steps to obtain some competition by inviting offers from four firms. In addition, it is likely that State could have justified and approved its limited competition under recognized exceptions to the competition requirements. With respect to issuing a task order under an existing contract, the competition law does not require competition beyond that obtained for the initial contract award, provided the task order does not increase the scope of the work, period of performance, or maximum value of the contract under which the order is issued. The scope, period, or maximum value may be increased only by modification of the contract, and competitive procedures are required to be used for any such increase unless an authorized exception applies. As we noted in our report released yesterday, determining whether work is within the scope of an existing task order contract is primarily an issue of contract interpretation and judgment by the contracting officer. We found several compliance problems when agencies issued task orders under existing contracts. Specifically, of the 11 task orders we reviewed, 7 were, in whole or part, not within scope. For example, the Defense Contracting Command-Washington (DCC-W) improperly used a General Services Administration (GSA) schedule contract to issue two task orders to the Science Applications International Corporation with a combined value of over $107 million for work that was outside the scope of the schedule contract. 
One order involved developing a news media capability—including radio and television programming and broadcasting—in Iraq. The other required the contractor to recruit people identified by DOD as subject matter experts, enter into subcontracts with them, and provide them with travel and logistical support within the United States and Iraq. The GSA schedule contract, however, was for management, organizational, and business improvement services for federal agencies. In our view, the statements of work for both task orders were outside the scope of the schedule contract. Another example of an agency issuing a task order that was outside the scope of the underlying contract involved the Army Field Support Command’s $1.9 million task order for contingency planning for the Iraqi oil infrastructure mission under the LOGCAP contract with Kellogg Brown & Root. This task order, issued in November 2002, required the contractor to develop a plan to repair and restore Iraq’s oil infrastructure should Iraqi forces damage or destroy it. Because the contractor was knowledgeable about the U.S. Central Command’s planning for conducting military operations, DOD officials determined that the contractor was uniquely positioned to develop the contingency support plan. DOD also determined that developing the contingency plan was within the scope of the overall LOGCAP contract. We have concluded, however, that preparation of the contingency support plan for this specific mission (i.e. restoring Iraq’s oil infrastructure) was beyond the scope of the contract. Specifically, we read the LOGCAP statement of work as providing for contingency planning only when the execution of the mission involved is within the scope of the contract. In this regard, all parties—including GAO and DOD—agree that repairing Iraq’s oil infrastructure would not have been within the scope of the LOGCAP contract. 
Consequently, we concluded that planning the oil infrastructure restoration was also not within the scope of the contract. The Army Field Support Command should have prepared a written justification to authorize the work without competition. In light of the exigent circumstances, such a justification was likely possible but needed to be made and documented to comply with the law and protect the taxpayer’s interests. DOD planners believed early on that issuance of this task order would result in Kellogg Brown & Root being uniquely qualified to initially execute the plan for restoring the Iraqi oil infrastructure, the so-called “RIO contract.” Subsequently, the RIO contract was awarded in March 2003 to Kellogg Brown & Root. The contracting officer’s written justification for the sole-source contract outlined the rationale for the decision. The justification was approved by the Army’s senior procurement executive, as required. We reviewed the justification and approval documentation and determined that it generally complied with applicable legal standards. We made several recommendations to the Secretary of the Army to review out-of-scope task orders to address outstanding issues and take appropriate actions, as necessary. DOD generally concurred with the recommendations and noted that it was in the process of taking corrective actions. DOD also agreed with our recommendation that the Secretary of Defense evaluate the lessons learned in Iraq and develop a strategy for assuring that adequate acquisition staff and other resources can be made available in a timely manner. I will now turn to discussing our ongoing work on DOD’s use of global logistics support contracts. As I previously noted, we looked at four such contracts, which have been used by all the military services to provide a wide array of services, including operating dining facilities and providing housing, in more than half a dozen countries, including Iraq, Kuwait, and Afghanistan. 
In total, the estimated value of the work under the current contracts is $12 billion, including $5.6 billion for work in Iraq through May 2004. Before summarizing our preliminary findings, let me first make an overall observation about the vital services that these types of contracts provide. The contractors and the military services have, for the most part, worked together to meet military commanders’ needs, sometimes in very hazardous or difficult circumstances. For example, the LOGCAP contract is providing life and logistics support to more than 165,000 soldiers and civilians under difficult security circumstances in Iraq, Afghanistan, Kuwait, and Djibouti, and customers told us they are generally pleased with the service the contractor is providing. The AFCAP contractor is providing air traffic management at air bases throughout central Asia, supplementing scarce Air Force assets and providing needed rest for Air Force service members who also perform this function. Using the CONCAP contract, the Navy has constructed detainee facilities (including a maximum security prison) at Guantanamo Bay on time and within budget. Projects at Guantanamo have increased the safety of both the detainees and the U.S. forces guarding them and resulted in real savings in reduced personnel tempo. Finally, the BSC continues to provide a myriad of high-quality services to troops in Kosovo and Bosnia, and the customer works with the contractor to identify cost savings. Within this overall context, we found mixed results in each of the four areas we reviewed—planning, oversight, efficiency, and personnel—with variations occurring among the four contracts and among the various commands using them. Our report, which will be issued later this year, will make a number of recommendations to address the shortcomings we identified in these areas. 
In assessing DOD’s planning, we found that some customers planned quite well for the use of the contracts, following service guidance and including the contractor early in planning. For example, in planning for Operation Iraqi Freedom, U.S. Army, Europe, was tasked with supporting the anticipated movement of troops through Turkey into Iraq, and our review of that planning showed that the command followed applicable Army guidance to good effect. In October 2002, the command brought contractor personnel to its headquarters in Europe to help plan and develop the statement of work. According to a briefing provided by U.S. Army, Europe, contractor planners brought considerable knowledge of contractor capabilities, limitations, and operations, and their involvement early in the planning efforts increased understanding of the requirements and capabilities, facilitated communication regarding the statement of work, and enhanced mission completion. Conversely, we found that the use of LOGCAP in Kuwait and Iraq was not adequately planned, nor was it planned in accordance with applicable Army guidance. As a result, two key ingredients needed to maximize LOGCAP support and minimize cost—a comprehensive statement of work and early contractor involvement—were missing. Specifically:

- A plan to support the troops in Iraq was developed in May 2003, but it was not comprehensive because the contractor was not involved in the early planning and the plan did not include all of the dining facilities, troop housing, and other services that the Army has since added to the task order.

- According to an official from the 101st Airborne Division, there was a lack of detailed planning for the use of LOGCAP at the theater and division levels for the sustainment phase of the operation. He added that Army planners should develop a closer working relationship with the divisions and the contractor.

- Task orders were frequently revised. 
These revisions generated a significant amount of rework for the contractor and the contracting officers. Additionally, time spent reviewing revisions to the task orders is time that is not available for other oversight activities. While operational considerations may have driven some of these changes, we believe others were more likely to have resulted from ineffective planning. For example, the task order supporting the troops in Iraq was revised 7 times in less than 1 year. Frequent revisions have not been limited to this task order. Task order 27, which provides support to U.S. troops in Kuwait (estimated value of $426 million as of May 2004), was changed 18 times between September 2002 and December 2003, including 5 changes in one month, some on consecutive days. As of May 11, 2004, the contracting office, the Defense Contract Management Agency (DCMA), and the contractor had processed more than 176 modifications to LOGCAP task orders. In some cases, we found that contract oversight processes were in place and functioning well. For example, DCMA had principal oversight responsibility for the LOGCAP and AFCAP contracts and the BSC, and DCMA generally provided good overall contract oversight, although we found some examples where it could have improved its performance. For instance, effective oversight of the diverse functions performed under the contracts requires government personnel with knowledge and expertise in these specific areas. DCMA contract administrators are contracting professionals, but many have limited knowledge of field operations. In these situations, DCMA normally uses contracting officer’s technical representatives. Contracting officer’s technical representatives are customers who have been designated by their units and appointed and trained by the administrative contracting officer. They provide technical oversight of the contractor’s performance. We found that DCMA had not appointed these representatives at all major sites in Iraq. 
Officials at the 101st Airborne Division, for example, told us that they had no contracting officer’s technical representatives during their year in Iraq, even though the division used LOGCAP services extensively. For task orders executed in southwest Asia, the AFCAP procuring contracting officer delegated the property administration responsibility to DCMA administrative contracting officers. However, contract administrators in southwest Asia did not ensure that the contractor had established and maintained a property control system to track items acquired under the contract. In addition, DCMA contracting officers in southwest Asia did not have a system in place to document what the contractor was procuring in support of AFCAP task orders and what was being turned over to the Air Force. As a result, as of April 2004, neither DCMA nor the Air Force could account for approximately $2 million worth of tools and construction equipment purchased through the AFCAP contract. An important element of contract administration is the definitizing of task orders, that is, reaching agreement with the contractor on the terms, specifications, or price of services to be delivered. All of the contracts included in our review were cost-plus award fee contracts. These contracts allow the contractor to be reimbursed for reasonable, allowable, and allocable costs incurred to the extent prescribed by the contract and provide financial incentives based on performance. Cost-plus award fee contracts allow the government to evaluate a contractor’s performance according to specified criteria and to grant an award amount within designated parameters. Award fees can serve as a valuable tool to help control program risk and encourage excellence in contract performance. To reap the advantages that cost-plus award fee contracts offer, the government must implement an effective award fee process. 
Any delays in definitizing task orders, however, make the cost-control incentives in these award fee contracts less effective, since there is less work remaining to be accomplished and therefore less cost for the contractor to control. While we found that AFCAP and BSC task orders were definitized quickly, and CONCAP task orders do not require definitization since the terms, specifications, and price are agreed to before work begins, we also found that many LOGCAP task orders remained undefinitized for months, and sometimes more than a year, after they were due to be completed and after billions of dollars of work had been performed. Because task orders have not been definitized, LOGCAP contracting personnel have not conducted an award fee board. I would like to note, however, that this condition is not limited to the LOGCAP contract. We stated in our report released yesterday that the Army Corps of Engineers has yet to definitize its March 2003 contract to rebuild Iraq’s oil infrastructure or one of its contracts to rebuild Iraq’s electrical infrastructure, and we recommended that the undefinitized contracts and task orders be definitized as soon as possible. DOD agreed with this recommendation and identified a number of steps being taken to do so. We again found mixed results in evaluating the attention to economy and efficiency in the use of contracts. In some cases, we saw military commands actively looking for ways to save money in the contracts. For example, U.S. Army, Europe, reported savings of approximately $200 million under the BSC by reducing labor costs, by reducing services, and by closing or downsizing camps that were no longer needed. The $200 million is almost 10 percent of the current contract ceiling price of $2.098 billion. In addition to these savings, U.S. Army, Europe, routinely sends in teams of auditors from its internal review group to review practices and to make recommendations to improve economy and efficiency. 
In others, however, most notably the LOGCAP contract in Iraq and Kuwait, we saw very little concern for cost considerations. It was not until December 2003, for example, that the Army instructed commands to look for ways to economize on the use of this contract. Similarly, we found that the Air Force did not always select the most economical and efficient method to obtain services. It used the AFCAP contract to supply commodities for its heavy construction squadrons, although use of the contract to procure and deliver commodity supplies required that the Air Force pay the contractor’s costs plus an additional award fee. Air Force officials said that they used AFCAP because not enough contracting and finance personnel were deployed to buy materials quickly or in large quantities. AFCAP program managers have recognized that the use of a cost-plus award fee contract to buy commodities may not be the most cost-effective method and said that the next version of the contract may allow for either firm-fixed prices or cost-plus fixed fee procurements for commodity purchases. We found that shortages of personnel have also made contract oversight difficult. For example, while DCMA has deployed contracting officers to several countries throughout southwest and central Asia and the Balkans to provide on-site contract administration, DCMA officials believe that additional resources are needed to effectively support the LOGCAP and AFCAP contracts. Administrative contracting officers in Iraq, for example, have been overwhelmed with their duties as a result of the expanding scope of some of the task orders. Additionally, some Army and Air Force personnel with oversight responsibilities did not receive the training necessary to effectively accomplish their jobs. Finally, we found that military units receiving services from the contracts generally lacked a comprehensive understanding of their contract roles and responsibilities. 
For example, commanders did not understand the part they played in establishing task order requirements, nor did they fully understand the level of support required by the contractors. In conclusion, Mr. Chairman, the United States, along with its coalition partners and various international organizations and donors, has undertaken an enormously complex, costly, and challenging effort to rebuild Iraq in an unstable security environment. At the early stages of these efforts, agency procurement officials were confronted with little advance warning on which to plan and execute competitive procurement actions, an urgent need to begin reconstruction efforts quickly, and uncertainty as to the magnitude and term of work required. Their actions, in large part, reflected proper use of the flexibilities provided under procurement laws and regulations to award new contracts using other than full and open competitive procedures. With respect to several task orders issued under existing contracts, however, some agency officials overstepped the latitude provided by competition laws by ordering work outside the scope of the underlying contracts. This work should have been separately competed, or justified and approved at the required official level for performance by the existing contractor. Importantly, given the war in Iraq, the urgent need for reconstruction efforts, and the latitude allowed by the competition law, these task orders reasonably could have been supported by justifications for other than full and open competition. Logistics support contracts have developed into a useful tool for the military services to quickly obtain needed support for troops deployed to trouble spots around the world. Because of the nature of these contracts, however—that is, cost-plus award fee contracts—they require significant government oversight to make sure they are meeting needs in the most economic and efficient way possible in each circumstance. 
While the military services are learning how to use these contracts well, in many cases the services are still not achieving the most cost-effective performance and are not adequately learning and applying the lessons of previous deployments. Because of the military’s continuing and growing reliance on these contracting vehicles, it is important that improvements be made and that oversight be strengthened. Mr. Chairman and Members of the committee, this concludes my statement. I will be happy to answer any question you may have. For further information, please contact Neal P. Curtin at (757) 552-8111 or curtinn@gao.gov or William T. Woods at (202) 512-4841 or woodsw@gao.gov. Individuals making key contributions to this statement include Robert Ackley, Ridge Bowman, Carole Coffey, Laura G. Czohara, Gary Delaney, Timothy J. DiNapoli, George M. Duncan, Glenn D. Furbish, C. David Groves, John Heere, Chad Holmes, Oscar W. Mardis, Kenneth E. Patton, Ron Salo, Steven Sternlieb, Matthew W. Ullengren, John Van Schaik, Adam Vodraska, Cheryl A. Weissman, and Tim Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The General Accounting Office (GAO) discussed some of the work it is undertaking to address various operations and rebuilding efforts in Iraq. Specifically, GAO has a body of ongoing work looking at a range of issues involving Iraq, including Iraq's transitional administrative law, efforts to restore essential services to the Iraqi people, and the effectiveness of logistics activities during Operation Iraqi Freedom, among others. Importantly, given the challenging security environment in Iraq and the various other accountability organizations involved in the oversight process, GAO is attempting to coordinate its engagement planning and execution with other organizations as appropriate. In this testimony GAO discussed (1) its report (GAO-04-605) that was released yesterday on the contract award procedures for contracts awarded in fiscal year 2003 to help rebuild Iraq and (2) its preliminary findings on the military's use of global logistics support contracts. These support contracts have emerged as important tools in providing deployed military services with a wide range of logistics services. With regard to the award of fiscal year 2003 Iraq reconstruction contracts, GAO found that agencies generally complied with applicable laws and regulations governing competition when using sole-source or limited competition approaches to award new contracts. However, they did not always do so when issuing task orders under existing contracts. In several instances, GAO found that contracting officers issued task orders for work that was not within the scope of the underlying contracts and which should have been awarded using competitive procedures or, because of the exigent circumstances involved, supported by a justification for other than full and open competition in accordance with legal requirements. With regard to DOD's use of global logistics support contracts, GAO found mixed results in each of the four areas it reviewed: planning, oversight, efficiency, and personnel. 
GAO also found that while some military commands actively looked for ways to save money, others exhibited little concern for cost considerations. Finally, shortages of personnel trained in contract management and oversight also need to be addressed. The report will make a number of recommendations to address these shortcomings.
Since fiscal year 2000, DOD has significantly increased the number of major defense acquisition programs and its overall investment in them. During this same time period, acquisition outcomes have not improved. For example, in last year’s assessment of selected DOD weapon programs, we found that total acquisition costs for the fiscal year 2007 portfolio of major defense acquisition programs increased 26 percent and development costs increased by 40 percent from first estimates—both of which are higher than the corresponding increases in DOD’s fiscal year 2000 portfolio. In most cases, the programs we assessed failed to deliver capabilities when promised—often forcing warfighters to spend additional funds on maintaining legacy systems. Our analysis showed that current programs experienced, on average, a 21-month delay in delivering initial capabilities to the warfighter, a 5-month increase over fiscal year 2000 programs, as shown in table 1. Continued cost growth results in less funding being available for other DOD priorities and programs, while continued failure to deliver weapon systems on time delays providing critical capabilities to the warfighter. We are currently updating our analysis and intend to issue our assessment of DOD’s current portfolio in March. Several underlying systemic problems at the strategic level and at the program level continue to contribute to poor weapon system program outcomes. At the strategic level, DOD does not prioritize weapon system investments, and the department’s processes for matching warfighter needs with resources are fragmented and broken. DOD largely continues to define warfighting needs and make investment decisions on a service-by-service basis and assess these requirements and their funding implications under separate decision-making processes. 
Ultimately, the process produces more demand for new programs than available resources can support, promoting an unhealthy competition for funds that encourages programs to pursue overly ambitious capabilities, develop unrealistically low cost estimates and optimistic schedules, and suppress bad news. Similarly, DOD’s funding process does little to prevent programs from going forward with unreliable cost estimates and lengthy development cycles, which is not a sound basis for allocating resources and ensuring program stability. Invariably, DOD and the Congress end up continually shifting funds to and from programs—undermining well-performing programs to pay for poorly performing ones. At the program level, programs are started without knowing what resources will truly be needed and are managed with lower levels of product knowledge at critical junctures than expected under best practices standards. For example, in our March 2008 assessment, we found that only 12 percent of the 41 programs we reviewed had matured all critical technologies at the start of the development effort. None of the 26 programs we reviewed that were at or had passed their production decisions had obtained adequate levels of knowledge. In the absence of such knowledge, managers rely heavily on assumptions about system requirements, technology, and design maturity, which are consistently too optimistic. These gaps are largely the result of a lack of disciplined systems engineering analysis prior to beginning system development, as well as DOD’s tendency to allow new requirements to be added well into the acquisition cycle. This exposes programs to significant and unnecessary technology, design, and production risks, and ultimately to damaging cost growth and schedule delays. With high levels of uncertainty about technologies, design, and requirements, program cost estimates and related funding needs are often understated, effectively setting programs up for failure. 
When DOD consistently allows unsound, unexecutable programs to pass through the requirements, funding, and acquisition processes, accountability suffers. Program managers cannot be held accountable when the programs they are handed already have a low probability of success. Moreover, program managers are not empowered to make go or no-go decisions, have little control over funding, cannot veto new requirements, have little authority over staffing, and are frequently changed during a program’s development. Consequently, DOD officials are rarely held accountable for these poor outcomes, and the acquisition environment does not provide the appropriate incentives for contractors to stay within cost and schedule targets, conditions that together reinforce the status quo. With regard to improving its acquisition of weapon systems, DOD has made changes consistent with the knowledge-based approach to weapons development that GAO has recommended in its work. In December 2008, DOD revised DOD Instruction 5000.02, which provides procedures for managing major defense acquisition programs in ways that aim to provide key department leaders with the knowledge needed to make informed decisions before a program starts and to maintain discipline once it begins. For example, the revised instruction includes procedures for the completion of key systems engineering activities before the start of system development, a requirement for more prototyping early in programs, and the establishment of review boards to monitor weapon system configuration changes. We have previously raised concerns, however, with DOD’s implementation of guidance on weapon systems acquisition. At the same time, DOD must begin making better choices that reflect joint capability needs and match requirements with resources. 
Given the nation’s ongoing financial and economic crisis, DOD’s investment decisions cannot continue to be driven by the military services that propose programs that overpromise capabilities and underestimate costs simply to start and sustain development programs. DOD has increasingly relied on contractors to support its missions and operations, due in part to such factors as the reductions in DOD’s civilian and military personnel following the collapse of the Soviet Union, the increasing complexity of weapons systems, and, more recently, the increased demands related to the global war on terrorism, such as the need for large numbers of Arabic speakers. DOD officials have stated that without a significant increase in its civilian and military workforce, the department is likely to continue to rely on contractors both in the United States and overseas in support of future deployments. For example, in October 2008, the then-Under Secretary of the Army stated that the Army has more requirements than available force structure and that much of the Army’s mission would be impossible without the support provided by contractors. Similarly, the Deputy Under Secretary of Defense for Logistics and Materiel Readiness testified in 2008 that the structure of the U.S. military has been adapted to an environment in which contractors are an indispensable part of the force. In that regard, DOD estimated that more than 230,000 contractor personnel were supporting operations in Iraq and Afghanistan as of October 2008. This reliance on contractors to support DOD’s current mission was not the result of a strategic or deliberate process but resulted from thousands of individual decisions to use contractors to provide specific capabilities. As the Secretary of Defense testified last month, DOD has not thought holistically or coherently about the department’s use of contractors, particularly when it comes to combat environments. 
DOD has long-standing guidance for determining the appropriate mix of manpower—military, civilian, and contractors—necessary to accomplish the department’s mission. This guidance, however, is primarily focused on individual decisions about whether to use contractors to provide specific capabilities and not the overarching question of what the appropriate role of contractors should be. In October 2008, the Under Secretary of the Army acknowledged that DOD has not made much progress in assessing the appropriate role of contractors on the battlefield and stated that any serious or purposeful discussion about the future size of the Army must include the role of contractors. We have increasingly called for DOD to be more strategic in how it uses contractors. For example, in November 2006, we reported that DOD lacked a proactive strategic approach to managing services acquisitions and needed to determine, among other things, areas of specific risks that were inherent when acquiring services and that should be managed with greater attention. Indeed, we have called on DOD to conduct a fundamental reexamination of when and under what circumstances DOD should use contractors as opposed to civil servants or military personnel. Similarly, in January 2008, we testified that DOD needs to determine the appropriate balance between contractors and military personnel in deployed locations. Without a fundamental understanding of its reliance on contractors and the capabilities they should provide, DOD’s ability to mitigate the risks associated with using contractors is limited. Our previous work has highlighted several examples of the risks inherent to using contractors, including ethics concerns, diminished institutional capacity, potentially greater costs, and mission risks. 
Examples include: Certain contractor employees often work side-by-side with government employees, performing such tasks as studying alternative ways to acquire desired capabilities, developing contract requirements, and advising or assisting on source selection, budget planning, and award-fee determinations. Contractor employees are generally not subject, however, to the same laws and regulations that are designed to prevent conflicts of interest among federal employees. The Army Contracting Agency’s Contracting Center of Excellence relied on contractors to support acquisition and contracting decisions, which raised concerns about the Army’s efforts to mitigate the risks of conflicts of interest or losing control over decision making. Similarly, for 11 Air Force space program offices, contractors accounted for 64 percent of cost-estimating personnel, raising questions from the cost-estimating community about whether the numbers and qualifications of government personnel are sufficient to provide oversight of and insight into contractor cost estimates. One underlying premise of using contractors is that doing so will be more cost-effective than using government personnel. This may not always be the case. In one instance, we found that the Army Contracting Agency’s Contracting Center of Excellence was paying up to 27 percent more for contractor-provided contract specialists than it would have for similarly graded government employees. Reliance on contractors can create mission risks when contractors are supporting deployed forces. For example, because contractors cannot be ordered to serve in contingency environments, the possibility that they will not deploy can create risks that the mission they support may not be effectively carried out. 
Further, if commanders are unaware of their reliance on contractors, they may not realize that substantial numbers of military personnel may be redirected from their primary responsibilities to provide force protection or to assume functions anticipated to be performed by contractors, and they may therefore fail to plan accordingly. The Chairman of the Joint Chiefs of Staff has directed the Joint Staff to examine the use of DOD service contracts (contractors) in Iraq and Afghanistan in order to better understand the range and depth of contractor capabilities necessary to support the Joint Force.

In assessing the appropriate role of contractors, it is important to recognize that contractors can provide important benefits, such as the flexibility to fulfill immediate needs. In some cases, DOD’s specific needs may be too limited or too technical, or may have other characteristics that make it cost-ineffective for DOD to develop an organic capability. For example, we reported in 2008 that the repair of battle-damaged Stryker vehicles was contracted out because DOD did not have people with the specific welding skills required to perform this type of repair. In other cases, contractors are used because they are cheaper. For example, we reported in 2007 that the Army’s decision to contract for the operation and maintenance of the firing range at Fort Hood resulted in an estimated $6 million in savings. In addition, both DOD and others have stated that the department has limited capacity to pick up some or all of the capabilities currently provided by contractors. For example, DOD has reported that replacing the 13,000 armed private security contractors currently supporting the department in Iraq and Afghanistan would require at least an additional 40,000 military personnel, given DOD’s current rotation policies.
Once the decision has been made to use contractors to support DOD’s missions or operations, it is essential that DOD clearly define its requirements and employ sound business practices, such as using appropriate contracting vehicles and collecting and distributing critical information. Our work on DOD’s use of time-and-materials contracts and undefinitized contract actions—two contracting practices that are often used when requirements are uncertain or changing—has identified weaknesses in DOD’s management and oversight, however, increasing the government’s risk. Examples include:

In June 2007, we found numerous issues with DOD’s use of time-and-materials contracts. DOD reported that it obligated nearly $10 billion under time-and-materials contracts in fiscal year 2005, acquiring, among other services, professional, administrative, and management support services. Specific examples of the services DOD acquired included subject matter experts in the intelligence field and systems engineering support. These contracts are appropriate when specific circumstances justify the risks, but our findings indicate that they are often used as a default for reasons of ease, speed, and flexibility when requirements or funding are uncertain. Time-and-materials contracts are considered high risk for the government because they provide no positive profit incentive to the contractor for cost control or labor efficiency, and their use is supposed to be limited to cases where no other contract type is suitable. We found, however, that DOD underreported its use of time-and-materials contracts; frequently did not justify why a time-and-materials contract was the only contract type suitable for the procurement; made few attempts to convert follow-on work to less risky contract types; and was inconsistent in the rigor with which contract monitoring occurred.
In that same month, we reported that DOD needed to improve its management and oversight of undefinitized contract actions (UCAs), under which DOD can authorize contractors to begin work and incur costs before reaching a final agreement on contract terms and conditions, including price. The contractor has little incentive to control costs during this period, creating a potential for wasted taxpayer dollars. We found that DOD did not know the full extent to which it used UCAs because the government’s federal procurement data system did not track UCAs awarded under certain contract actions, such as task or delivery order contracts. Moreover, we found that (1) the use of some UCAs could have been avoided with better acquisition planning; (2) DOD frequently did not definitize the UCAs within the required time frames, thereby increasing the cost risk to the government; and (3) contracting officers were not documenting the basis for the profit or fee negotiated, as required. We called on DOD to strengthen management controls and oversight of UCAs to reduce the risk of DOD paying unnecessary costs and potentially excessive profit rates.

In a separate report, issued in July 2007, we found that DOD’s failure to adhere to key contracting principles on a multibillion-dollar contract to restore Iraq’s oil infrastructure increased the government’s risk. In this case, we found that the lack of timely negotiations on task orders that were issued as UCAs contributed significantly to DOD’s decision to pay nearly all of the $221 million in costs questioned by the Defense Contract Audit Agency (DCAA). All 10 task orders we reviewed were negotiated more than 180 days after the work commenced, and the contractor had incurred almost all of its costs by the time of negotiations. The negotiation delays were caused in part by changing requirements, funding challenges, and inadequate contractor proposals.
Our previous work has also identified cost and oversight risks associated with inconsistent or limited collection and distribution of information. Examples include:

Our 2008 review of several Army service contracts found that the Army’s oversight of some of the contracts was inadequate, due in part to contracting offices not maintaining complete contract files documenting the contract administration and oversight actions taken, as required by DOD policy and guidance. As a result, incoming contract administration personnel did not know whether the contractors were meeting their contract requirements effectively and efficiently and were therefore limited in their ability to make informed decisions related to award fees, which can run into the millions of dollars.

In addition, several GAO reports and testimonies have noted that despite years of experience using contractors to support deployed forces in the Balkans, Southwest Asia, Iraq, and Afghanistan, DOD has made few efforts to systematically collect and share lessons learned regarding the oversight and management of contractors supporting deployed forces. As a result, many of the management and oversight problems we identified in earlier operations have recurred in current operations. Moreover, without the sharing of lessons learned, substantial increases in forces in Afghanistan are likely to exacerbate the contract management and oversight challenges already present there.

Properly managing the acquisition of services requires a workforce with the right skills and capabilities. In that regard, a number of individuals and organizations are involved in the acquisition process, including contracting officers who award contracts, as well as the individuals who define requirements, receive or benefit from the services provided, and oversee contractor performance, including DCAA and the Defense Contract Management Agency (DCMA).
We and others have raised questions about whether DOD has a sufficient number of trained acquisition and contract oversight personnel to meet its needs. For example, the increased volume of contracting far exceeds the growth in DOD contracting personnel. Between fiscal years 2001 and 2008, DOD obligations on contracts, when measured in real terms, more than doubled to over $387 billion in total, and to more than $200 billion for services alone. Over the same period, however, DOD reports that its contracting career field grew by only about 1 percent, as shown in figure 1. In 2008, DOD completed an assessment of its contracting workforce, in which more than 87 percent of its contracting workforce participated. DOD reports that this assessment provides a foundation for understanding the skills and capabilities its workforce currently has, and the department is in the process of determining how to close any gaps, such as through training or hiring additional personnel. DOD, however, lacks information on the competencies and skills needed across its entire workforce, particularly for those who provide oversight or play other key roles in the acquisition process. We are currently assessing DOD’s ability to determine the sufficiency of its acquisition workforce and its efforts to improve its workforce management and oversight, and we will be issuing a report in the spring.

Having too few contract oversight personnel presents unique difficulties at deployed locations, where the operational environment is more demanding than in the United States because of increased operational tempo, security considerations, and other factors. We and others have found significant deficiencies in DOD’s oversight of contractors because of an inadequate number of trained personnel to carry out these duties.
Examples include:

We noted in January and September 2008 that the lack of qualified personnel hindered oversight of contracts to maintain military equipment in Kuwait and to provide linguist services in Iraq and Afghanistan. Without adequate levels of qualified oversight personnel, DOD’s ability to perform the various tasks needed to monitor contractor performance may be hindered. For example, we found that poor contractor performance can result in the warfighter not receiving equipment in a timely manner.

In addition, the Army Inspector General reported in October 2007 that shortages of contracting officers, quality assurance personnel, and technically proficient contracting officer’s representatives were noticeable at all levels, while the 2007 Commission on Army Acquisition and Program Management in Expeditionary Operations (the Gansler Commission) noted that personnel shortages contributed to fraud, waste, and abuse in theater. If left unaddressed, the problems posed by personnel shortages in Iraq and elsewhere are likely to become more significant in Afghanistan as we increase the number of forces and the contractors who support them there.

An additional, long-standing challenge hindering management and oversight of contractors supporting deployed forces is the lack of training for military commanders and oversight personnel. As we testified in 2008, limited or no pre-deployment training on the use of contractor support can cause a variety of problems for military commanders in a deployed location, such as being unable to adequately plan for the use of those contractors and confusion regarding the commanders’ roles and responsibilities in managing and overseeing contractors. Lack of training also affects the ability of contract oversight personnel to perform their duties.
The customer (e.g., a military unit) for contractor-provided services at deployed locations is responsible for evaluating the contractor’s performance and ensuring that contractor-provided services are used in an economical and efficient manner. Often this involves the use of contracting officer’s representatives—individuals typically drawn from the units receiving contractor-provided services, who are not normally contracting specialists and for whom contract monitoring is an additional duty. We have repeatedly found that contract oversight personnel received little or no pre-deployment training on their roles and responsibilities in monitoring contractor performance, hindering their ability to effectively manage and oversee contractors.

While performing oversight is often the responsibility of military service contracting officers or their representatives, DCAA and DCMA play key roles in the oversight process. DCAA provides a critical internal control function on behalf of DOD and other federal agencies by performing a range of contract audit services, including reviewing contractors’ cost accounting systems, conducting audits of contractor cost proposals and payment invoices, and providing contract advisory services to help assure that the government pays fair and reasonable prices. To be an effective control, DCAA must perform reliable audits. In a report we issued in July 2008, however, we identified serious noncompliance with generally accepted government auditing standards at three field audit offices responsible for billions of dollars of contracting. For example, we found that workpapers did not support reported opinions and that sufficient audit work was not performed to support audit opinions and conclusions.
As a result, DCAA cannot assure that these audits provided reliable information to support sound contract management business decisions or that contract payments are not vulnerable to significant amounts of fraud, waste, abuse, and mismanagement. The DCAA Director subsequently acknowledged agencywide problems and initiated a number of corrective actions. In addition, DOD included DCAA’s failure to meet professional standards as a material internal control weakness in its fiscal year 2008 agency financial report. We are currently assessing DCAA’s corrective actions and anticipate issuing a report later this spring.

Similarly, DCMA provides oversight at more than 900 contractor facilities in the United States and around the world, providing contract administration services such as monitoring contractors’ performance and management systems to ensure that cost, performance, and delivery schedules comply with the terms and conditions of the contracts. DCMA has also assumed additional responsibility for overseeing service contracts in Iraq, Afghanistan, and other deployed locations, including contracts that provide logistical support and private security services. In a July 2008 report, we noted that DCMA had increased staffing in these locations only by shifting resources from other locations and had asked the services to provide additional staff, since DCMA did not have the resources to meet the requirement. As a result, it is uncertain whether DCMA has the resources to meet its commitments at home and abroad.

GAO’s body of work on contract management and the use of contractors to support deployed forces has resulted in numerous recommendations over the last several years. In response, DOD has issued guidance to address contracting weaknesses and promote the use of sound business arrangements.
For example, in response to congressional direction and GAO recommendations, DOD has established a framework for reviewing major services acquisitions; promulgated regulations to better manage its use of contracting arrangements that can pose additional risks for the government, including time-and-materials contracts and undefinitized contract actions; and begun efforts to identify and improve the skills and capabilities of its workforce. Notably, in response to recommendations from the Gansler Commission, the Army has proposed increasing its acquisition workforce by over 2,000 personnel. The Army has acknowledged, however, that this process will take at least 3 to 5 years to complete.

DOD has also taken specific steps to address contingency contracting issues. GAO has made numerous recommendations over the past 10 years aimed at improving DOD’s management and oversight of contractors supporting deployed forces, including the need for (1) DOD-wide guidance on how to manage contractors that support deployed forces, (2) improved training for military commanders and contract oversight personnel, and (3) a focal point within DOD dedicated to leading DOD’s efforts to improve the management and oversight of contractors supporting deployed forces. As we reported in November 2008, DOD has been developing, revising, and finalizing new joint policies and guidance on the department’s use of contractors to support deployed forces (which DOD now refers to as operational contract support). Examples include:

In October 2008, DOD finalized Joint Publication 4-10, “Operational Contract Support,” which establishes doctrine and provides standardized guidance for planning, conducting, and assessing operational contract support integration and contractor management functions in support of joint operations.
DOD is revising DOD Instruction 3020.41, “Program Management for the Preparation and Execution of Acquisitions for Contingency Operations,” a revision that will strengthen the department’s joint policies and guidance on program management, including the oversight of contractor personnel supporting a contingency operation.

DOD has also taken steps to improve the training of military commanders and contract oversight personnel. As we reported in November 2008, the Deputy Secretary of Defense issued a policy memorandum in August 2008 directing the appointment of trained contracting officer’s representatives prior to the award of contracts. U.S. Joint Forces Command is developing two training programs for non-acquisition personnel to provide the information necessary to operate effectively on contingency contracting matters and to work with contractors on the battlefield. In addition, the Army has a number of training programs available that provide information on contract management and oversight to operational field commanders and their staffs. The Army is also providing similar training to units as they prepare to deploy, and DOD, the Army, and the Marine Corps have begun to incorporate contractors and contract operations in mission rehearsal exercises.

In October 2006, the Deputy Under Secretary of Defense for Logistics and Materiel Readiness established the office of the Assistant Deputy Under Secretary of Defense (Program Support) to act as the focal point for DOD’s efforts to improve the management and oversight of contractors supporting deployed forces. This office has taken several steps to help formalize and coordinate efforts to address issues related to contractor support to deployed forces.
For example, the office took a leading role in establishing a community of practice for operational contract support—comprising subject matter experts from the Office of the Secretary of Defense, the Joint Staff, and the services—that may be called upon to work on a specific task or project. Additionally, the office helped establish a Council of Colonels, which serves as a “gatekeeper” for initiatives, issues, or concepts, as well as a Joint Policy Development General Officer Steering Committee, which includes senior commissioned officers or civilians designated by the services. The committee’s objective is to guide the development of Office of the Secretary of Defense, Joint Staff, and service policy, doctrine, and procedures to adequately reflect situational and legislative changes as they occur within operational contract support.

DOD has recognized that it faces challenges with weapon systems acquisition and contract management, and the department has taken steps to address these challenges, including those outlined in this testimony. The current economic crisis presents both an opportunity and an imperative for DOD to act forcefully to implement new procedures and processes in a sustained, consistent, and effective manner across the department. To overcome these issues, the department needs to take additional actions. In the near term, DOD needs to ensure that existing and future guidance is fully complied with and implemented. Doing so will require continued, sustained commitment by senior DOD leadership to translate policy into practice and to hold decision makers accountable. At the same time, the department and its components have taken or plan to take actions to further address weapon systems acquisition and contract management challenges.
However, many of these actions, such as the Army’s efforts to increase its acquisition workforce, will not be fully implemented for several years, and progress will need to be closely monitored to ensure that the steps undertaken achieve their intended outcomes.

Risk is inherent when relying on contractors to support DOD missions. At the departmentwide level, DOD has yet to conduct the type of fundamental reexamination of its reliance on contractors that we called for in 2008. Without understanding the depth and breadth of contractor support, the department will be unable to determine whether it has the appropriate mix of military personnel, DOD civilians, and contractors. As a result, DOD may not be fully aware of the risks it faces and will therefore be unable to mitigate those risks in the most cost-effective and efficient manner. Implementing existing and emerging policy, monitoring the department’s actions, and comprehensively assessing what should and should not be contracted for are not easy tasks, but they are essential if DOD is to place itself in a better position to deliver goods and services to the warfighters. Moreover, with an expected increase of forces in Afghanistan, the urgency for action is heightened to help the department avoid the same risks of fraud, waste, and abuse it has experienced using contractors in support of Operation Iraqi Freedom.

Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions you or members of the subcommittee may have at this time.

For further information about this testimony, please contact Janet St. Laurent, Managing Director, Defense Capabilities and Management, at (202) 512-4402 or stlaurentj@gao.gov, or Katherine V. Schinasi, Managing Director, Acquisition and Sourcing Management, at (202) 512-4841 or schinasik@gao.gov.
Other key contributors to this testimony include Karyn Angulo, Carole Coffey, Grace Coleman, Timothy DiNapoli, Gayle Fischer, Dayna Foster, Angie Nichols-Friedman, John Hutton, Julia Kennon, James A. Reynolds, William M. Solis, and Karen Thornton.

Modernizing the Outdated U.S. Financial Regulatory System (New)

Protecting Public Health through Enhanced Oversight of Medical Products (New)

High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.

Defense Acquisitions: Fundamental Changes Are Needed to Improve Weapon Program Outcomes. GAO-08-1159T. Washington, D.C.: September 25, 2008.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.

Defense Acquisitions: A Knowledge-Based Funding Approach Could Improve Major Weapon System Program Outcomes. GAO-08-619. Washington, D.C.: July 2, 2008.

Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD’s Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008.

Space Acquisitions: Actions Needed to Expand and Sustain Use of Best Practices. GAO-07-730T. Washington, D.C.: April 19, 2007.

Defense Acquisitions: DOD’s Requirements Determination Process Has Not Been Effective in Prioritizing Joint Capabilities. GAO-08-1060. Washington, D.C.: September 25, 2008.

Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007.

Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD’s Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007.

Defense Acquisitions: Cost to Deliver Zumwalt-Class Destroyers Likely to Exceed Budget. GAO-08-804. Washington, D.C.: July 31, 2008.

Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program Is Short of Meeting Goals. GAO-08-448. Washington, D.C.: March 14, 2008.

Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008.

Defense Acquisitions: 2009 Is a Critical Juncture for the Army’s Future Combat System. GAO-08-408. Washington, D.C.: March 7, 2008.

DCAA Audits: Allegations That Certain Audits at Three Locations Did Not Meet Professional Standards Were Substantiated. GAO-08-857. Washington, D.C.: July 22, 2008.

Defense Contracting: Post-Government Employment of Former DOD Officials Needs Greater Transparency. GAO-08-485. Washington, D.C.: May 21, 2008.

Defense Contracting: Army Case Study Delineates Concerns with Use of Contractors as Contract Specialists. GAO-08-360. Washington, D.C.: March 26, 2008.

Defense Contracting: Additional Personal Conflict of Interest Safeguards Needed for Certain DOD Contractor Employees. GAO-08-169. Washington, D.C.: March 7, 2008.

Defense Contract Management: DOD’s Lack of Adherence to Key Contracting Principles on Iraq Oil Contract Put Government Interests at Risk. GAO-07-839. Washington, D.C.: July 31, 2007.

Defense Contracting: Improved Insight and Controls Needed over DOD’s Time-and-Materials Contracts. GAO-07-273. Washington, D.C.: June 29, 2007.

Defense Contracting: Use of Undefinitized Contract Actions Understated and Definitization Time Frames Often Not Met. GAO-07-559. Washington, D.C.: June 19, 2007.

Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD’s Acquisition of Services. GAO-07-832T. Washington, D.C.: May 10, 2007.

Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes. GAO-07-20. Washington, D.C.: November 9, 2006.

Contract Management: DOD Developed Draft Guidance for Operational Contract Support but Has Not Met All Legislative Requirements. GAO-09-114R. Washington, D.C.: November 20, 2008.

Contingency Contracting: DOD, State, and USAID Contracts and Contractor Personnel in Iraq and Afghanistan. GAO-09-19. Washington, D.C.: October 1, 2008.

Military Operations: DOD Needs to Address Contract Oversight and Quality Assurance Issues for Contracts Used to Support Contingency Operations. GAO-08-1087. Washington, D.C.: September 26, 2008.

Rebuilding Iraq: DOD and State Department Have Improved Oversight and Coordination of Private Security Contractors in Iraq, but Further Actions Are Needed to Sustain Improvements. GAO-08-966. Washington, D.C.: July 31, 2008.

Defense Management: DOD Needs to Reexamine Its Extensive Reliance on Contractors and Continue to Improve Management and Oversight. GAO-08-572T. Washington, D.C.: March 11, 2008.

Military Operations: Implementation of Existing Guidance and Other Actions Needed to Improve DOD’s Oversight and Management of Contractors in Future Operations. GAO-08-436T. Washington, D.C.: January 24, 2008.

Defense Acquisitions: DOD’s Increased Reliance on Service Contractors Exacerbates Longstanding Challenges. GAO-08-621T. Washington, D.C.: January 23, 2008.

Defense Logistics: The Army Needs to Implement an Effective Management and Oversight Plan for the Equipment Maintenance Contract in Kuwait. GAO-08-316R. Washington, D.C.: January 23, 2008.

Military Operations: High-Level DOD Action Needed to Address Long-standing Problems with Management and Oversight of Contractors Supporting Deployed Forces. GAO-07-145. Washington, D.C.: December 18, 2006.

Rebuilding Iraq: Continued Progress Requires Overcoming Contract Management Challenges. GAO-06-1130T. Washington, D.C.: September 28, 2006.

Military Operations: Background Screenings of Contractor Employees Supporting Deployed Forces May Lack Critical Information, but U.S. Forces Take Steps to Mitigate the Risks Contractors May Pose. GAO-06-999R. Washington, D.C.: September 22, 2006.

Rebuilding Iraq: Actions Still Needed to Improve the Use of Private Security Providers. GAO-06-865T. Washington, D.C.: June 13, 2006.

This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Today's testimony addresses the challenges DOD faces in improving the efficiency and effectiveness of its weapon systems acquisition and contract management, both of which GAO has designated as high-risk areas since the early 1990s. DOD's major weapon systems programs continue to take longer to develop, cost more, and deliver fewer quantities and capabilities than originally planned. DOD also continues to face long-standing challenges in managing service contracts and contractors. For example, the oversight of service contracts has been recognized as a material weakness in the Army. The current fiscal environment, combined with current operational demands, elevates the need to improve weapon systems acquisition and contract management. DOD has taken steps in response to recommendations GAO has made over the past decade. Taken collectively, these actions reflect the commitment of DOD's senior leadership. However, to fully address these challenges the department needs to (1) translate policy into practice, (2) ensure that the steps undertaken achieve their intended outcomes, and (3) conduct a fundamental reexamination of its reliance on contractors. In preparing this testimony, GAO drew from issued reports, containing statements of the scope and methodology used, and testimonies.

Several underlying systemic problems at the strategic and program levels continue to contribute to poor weapon systems acquisition outcomes. The total acquisition cost of DOD's 2007 portfolio of major programs has grown by 26 percent over initial estimates. At the strategic level, DOD does not prioritize weapon system investments, and its processes for matching warfighter needs with resources are fragmented and broken. DOD largely continues to define warfighting needs and make investment decisions on a service-by-service basis and assesses these requirements and their funding implications under separate decision-making processes.
Invariably, DOD and the Congress end up continually shifting funds to and from programs—undermining well-performing programs to pay for poorly performing ones. At the program level, weapon system programs are initiated without sufficient knowledge about requirements, technology, and design maturity. Instead, managers rely on assumptions that are consistently too optimistic, exposing programs to significant and unnecessary risks and, ultimately, to cost growth and schedule delays. In December 2008, DOD revised its guidance to improve its acquisition of major weapon systems, consistent with recommendations GAO has made. We have previously raised concerns, however, with DOD's implementation of guidance on weapon systems acquisition.

In fiscal year 2008, DOD obligated about $200 billion for contractor-provided services, more than double the amount it spent a decade ago when measured in real terms. GAO's previous work has highlighted several examples of the risks inherent in using contractors, including ethics concerns, diminished institutional capacity, potentially greater costs, and mission risks. Further, the lack of well-defined requirements, difficulties employing sound business practices, and workforce and training issues hinder efforts to effectively manage and oversee contracts and contractors. These factors ultimately contribute to higher costs, schedule delays, unmet goals, and negative operational impacts. These issues take on a heightened significance in Iraq and Afghanistan, where DOD estimated that more than 200,000 contractor personnel were engaged as of July 2008, exceeding the number of uniformed military personnel there. As of October 2008, the number of contractor personnel in both countries had increased to over 230,000. DOD has taken several steps in response to GAO's recommendations aimed at improving management and oversight of contractors.
These include issuing policy and guidance addressing contract management, identifying skill gaps in DOD's acquisition workforce, improving training for military commanders and contract oversight personnel, and creating a focal point within the department for issues associated with the use of contractors to support deployed forces. DOD, however, has not conducted a comprehensive assessment to determine the appropriate mix of military, civilian, and contractor personnel.
The 1952 Immigration and Nationality Act, as amended, is the primary body of law governing immigration and visa operations. The Homeland Security Act of 2002 generally grants DHS exclusive authority to issue regulations on, administer, and enforce the Immigration and Nationality Act and all other immigration and nationality laws relating to the functions of U.S. consular officers in connection with the granting or denial of visas. As we reported in July 2005, the act also authorizes DHS to, among other things, assign employees to any consular post to review individual visa applications and provide expert advice and training to consular officers regarding specific security threats related to the visa process. A subsequent September 2003 Memorandum of Understanding between State and DHS further outlines the responsibilities of each agency with respect to visa issuance. DHS is responsible for establishing visa policy, reviewing implementation of the policy, and providing additional direction. State manages the visa process, as well as the consular corps and its functions at 211 visa-issuing posts overseas. The process for determining who will be issued or refused a visa contains several steps, including documentation reviews, in-person interviews, collection of biometrics (fingerprints), and cross-referencing an applicant’s name against the Consular Lookout and Support System—State’s name-check database that posts use to access critical information for visa adjudication. In some cases, a consular officer may determine the need for a Security Advisory Opinion, which is a response from Washington on whether to issue a visa to the applicant. Depending on a post’s applicant pool and the number of visa applications that a post receives, each stage of the visa process varies in length. According to consular officials, posts that consistently have wait times for visa interview appointments of 30 days or longer may have a resource or management problem.
To monitor posts’ workload, State requires that posts report, on a weekly basis, the wait times for applicant interviews. As of March 2006, State’s data showed that between September 2005 and February 2006, 97 posts reported maximum wait times of 30 or more days in at least one month; at 20 posts, the reported wait times were in excess of 30 days for the entire 6-month period. Moreover, in February 2006, nine posts reported wait times in excess of 90 days (see table 1). According to the Assistant Secretary of State for Consular Affairs, managing consular workload is a major issue for the department, particularly at posts in India and China where volume is expected to continue to increase. In February 2004, we reported that officials at some of the posts we visited in India and China indicated they did not have enough space and staffing resources to handle interview demands and the new visa requirements. According to consular officers, during the 2003 summer months, the wait for visa interviews was as long as 12 weeks in Chennai, India. In China, applicants at one post were facing waits of about 5 to 6 weeks during our September 2003 visit due to an imbalance between demand for visas and the number of consular officers available to interview applicants and staff to answer phones. Although these posts have undertaken initiatives to shorten the wait times, such as using temporary duty help and instituting longer interviewing hours, delays for visa interviews remain an ongoing concern. For example, the U.S. embassy in New Delhi instituted a new appointment system in October 2005, which resulted in immediate, additional interviewing capacity at post, according to consular officials. However, reported wait times in New Delhi had risen above 90 days by February 2006 (see table 2). At posts in China, Consular Affairs indicated that improvements in facilities and staff increases have helped to lessen wait times for interviews. 
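The two screening criteria described above (a maximum wait of 30 or more days in at least one month, and waits in excess of 30 days in every month of the period) can be sketched in a few lines of Python. The post names and wait-time figures below are hypothetical illustrations, not State's actual reporting data.

```python
# Hypothetical monthly maximum wait times (in days) for visa
# interview appointments at three illustrative posts over six months.
waits = {
    "Post A": [12, 15, 10, 9, 14, 11],
    "Post B": [25, 33, 28, 31, 40, 29],
    "Post C": [45, 50, 61, 38, 72, 55],
}

# Posts reporting a wait of 30 or more days in at least one month.
flagged_any_month = [post for post, w in waits.items()
                     if any(days >= 30 for days in w)]

# Posts reporting waits in excess of 30 days in every month.
flagged_entire_period = [post for post, w in waits.items()
                         if all(days > 30 for days in w)]

print(flagged_any_month)       # ['Post B', 'Post C']
print(flagged_entire_period)   # ['Post C']
```

With State's actual weekly reports, the same two filters would reproduce the counts cited in the testimony (posts flagged in at least one month versus posts flagged for the entire period).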
However, consular officials have acknowledged that demand for visas at posts in China is likely to rise and continue to affect wait times in the future. Table 3 shows recent wait times for visa appointments in China. Although we have not attempted to measure the impact of the time it takes to adjudicate a visa, we reported in February 2004 that consular officials and representatives of several higher education, scientific, and governmental organizations reported that visa delays could be detrimental to the scientific interests of the United States. Although these officials and representatives provided numerous individual examples of the consequences of visa delays, they were unable to measure the total impact of such lengthy waits. For example, in September 2003, Department of Energy officials in Moscow explained that former Soviet Union scientists have found it extremely difficult to travel to the United States to participate in U.S. government-sponsored conferences and exchanges that are critical to nonproliferation efforts. Business groups have also expressed concern about the impact of visa delays. For example, officials from the American Chamber of Commerce and other industry executives have testified numerous times in recent years about the problem of delayed entry for foreign nationals traveling to the United States for legitimate business purposes. In addition, on June 2, 2004, a coalition of eight industry associations published a study estimating that U.S. companies suffered losses totaling $30 billion from July 2002 to March 2004 due to delays and denials in the processing of business visas. Beijing’s Deputy Chief of Mission and consular officials at the embassy and consulates in China also stated that visa delays could have a negative impact on student and scholar exchanges. Visa delays are a longstanding problem. However, since September 2001, several factors have exacerbated wait times for visas. 
First, changes to visa policies and procedures have resulted in additional workload for consular officers. Second, while not reaching pre-2001 levels, visa application volume has increased in recent years. Third, many posts face facility constraints, which limit the extent to which posts can increase visa processing. Finally, staffing shortfalls also affect the length of time that applicants must wait for a visa. Since the September 11 attacks, Congress, State, and DHS have initiated a series of changes to policies and procedures designed to enhance border security. These changes have added to the complexity of consular officers’ workload and, in turn, exacerbated State’s resource constraints. These changes include the following:

- Consular officers must interview virtually all visa applicants; prior to August 2003, they could routinely waive interviews.
- Since October 2004, consular officers are required to scan foreign nationals’ right and left index fingers and clear the fingerprints through the DHS Automated Biometric Identification System before an applicant can receive a visa.
- Some responsibilities previously delegated to Foreign Service nationals and consular associates have been transferred to consular officers. For example, consular associates are no longer authorized to adjudicate visas.

As previously mentioned, some applicants have faced additional delays due to various special security checks, or Security Advisory Opinions. For example, foreign science students and scholars, who may pose a threat to our national security by illegally transferring sensitive technology, may be subject to security checks known as Visas Mantis. In the spring of 2003, it took an average of 67 days for Visas Mantis checks to be processed and for State to notify consular posts of the results. Since then, State and other agencies have taken actions which have reduced delays to about 15 days for these checks.
In addition, on July 13, 2005, the Secretary of Homeland Security announced that the U.S. government had adopted a 10-print standard for biometric collection for visas. In January 2006, the director of the U.S. Visitor and Immigrant Status Indicator Technology program testified that moving to a 10-fingerscan standard from a 2-print standard would allow the United States to be able to identify visa applicants and visitors with even greater accuracy. In February 2006, State reported that it plans to complete pilot testing and procurement of the 10-print equipment to ensure that all visa-issuing posts have collection capability by the end of fiscal year 2007. Requiring applicants to submit 10-prints could add more time to the applicant’s interview and potentially delay visa processing. To help mitigate the adverse impact of these policy and procedural changes on wait times, State has taken actions to help maintain the right balance between promoting security and facilitating travel. For example, while we have not assessed the impact of these actions, all overseas posts have established procedures to expedite the processing of business visas and are working closely with local American Chambers of Commerce in more than 100 countries to expedite the visa process for bona fide business travelers. In July 2005, State also established a Business Visa Center to facilitate visa application procedures for U.S. businesses in conjunction with upcoming travel or events. Regarding foreign students, in February 2006, State announced that it has extended the length of time foreign students may be issued student visas, which will allow some students to apply up to 120 days before their academic program start date (as compared to 90 days under previous regulations). According to State, U.S. embassies and consulates also have established special, expedited visa interviews for prospective foreign students. 
While not returning to levels prior to the September 11 attacks, visa issuance rates increased in fiscal years 2004 and 2005, according to State’s data (see fig. 1). Should application volume continue to increase, State has acknowledged that additional management actions will be necessary to ensure that visa applications are processed in a timely manner. In the future, we believe that increased global trade and economic growth will likely result in increased demand for visas, particularly in certain countries. Embassy facilities at some posts limit the number of visa applications that are processed each day and make it difficult to keep up with visa demand. In our September 2005 report, we noted that many visa chiefs we interviewed reported problems with their facilities. For example, at 14 of the 25 posts covered in our survey, consular officials rated their workspace as below average, and 40 percent reported that applicants’ waiting rooms were below average. In addition, due to overcrowded waiting rooms at four of the eight posts we visited, we observed visa applicants waiting for their interviews outside or in adjacent hallways. Moreover, a limited number of security guards and screening devices, as well as limited physical space, often create bottlenecks at the facilities’ security checkpoints. In March 2006, we observed visa facilities in Paris, France, and noted that there are insufficient adjudicating windows to meet visa demand. A senior consular official acknowledged that many consular facilities are located in run-down buildings with insufficient adjudicating windows and waiting rooms. In fiscal year 2003, Congress directed the Overseas Building Operations Bureau to begin a 3-year Consular Workspace Improvement Initiative to improve the overall working environment for consular officers. In fiscal years 2003 and 2004, State obligated $10.2 million to 79 workspace improvement projects at 68 posts. 
However, according to a senior consular official, these funds are being used to provide temporary solutions at posts that may require a new embassy as part of State’s multibillion-dollar embassy construction program. It may take years before some posts’ facilities needs are fully addressed. To have sufficient resources to manage the demand for visas and minimize the time applicants must wait, State may need to consider establishing new visa-issuing posts. Indeed, in its 2005 inspection of the Embassy in New Delhi, for example, the Office of the Inspector General stated that State should establish a permanent consulate in Hyderabad, India, by no later than 2008 in light of the need for expanded visa processing facilities due to increased application volume. In March 2006, the President announced that the United States would open a new consulate; however, it is unclear when this may happen. In September 2005, we reported that State faced staffing shortfalls in consular positions—a key factor affecting the effectiveness of the visa process and the length of time applicants must wait for visas. As of April 30, 2005, we found that 26 percent of midlevel consular positions were either vacant or filled by an entry-level officer. In addition, almost three-quarters of the vacant positions were at the FS-03 level—midlevel officers who generally supervise entry-level staff. Consular officials attribute this shortfall to low hiring levels prior to the Diplomatic Readiness Initiative and the necessary expansion of entry-level positions to accommodate increasing workload requirements after September 11, 2001. We believe experienced supervision at visa-issuing posts is important to avoiding visa delays. For example, experienced officers may provide guidance to entry-level officers on ways to expedite visa processing, including advising staff on when special security checks are required.
During our February 2005 visits to Riyadh and Jeddah, Saudi Arabia, and Cairo, Egypt, we observed that the consular sections were staffed with entry-level officers on their first assignment with no permanent midlevel visa chief to provide supervision and guidance. Although these posts had other mid- or senior-level consular officers, their availability on visa issues was limited because of their additional responsibilities. For example, the head of the visa section in Jeddah was responsible for managing the entire section, as well as services for American citizens, due to a midlevel vacancy in that position. At the time of our visit, the Riyadh Embassy did not have a midlevel visa chief. Similarly, in Cairo, there was no permanent midlevel supervisor between the winter of 2004 and the summer of 2005, and Consular Affairs used five temporary staff on a rotating basis during this period to serve in this capacity. Entry-level officers we spoke with stated that due to the constant turnover, the temporary supervisors were unable to assist them adequately. At the U.S. consulate in Jeddah, entry-level officers expressed concern about the lack of a midlevel supervisor. More recently, during February 2006 visits to posts in Nigeria and China, we found similar consular vacancies. For example, first-tour, entry-level officers in Chengdu and Shenyang, China, are filling midlevel consular positions. We have reported on numerous occasions that factors such as staffing shortages have contributed to long wait times for visas at some posts. Since 2002, State has received funding to address these shortfalls. Through the Diplomatic Readiness Initiative and other sources, State increased the number of Foreign Service officer consular positions by 364, from 1,037 in fiscal year 2002 to 1,401 in fiscal year 2005.
However, while we have not studied this issue, the disparity in wait times among posts may indicate the need to reallocate positions to address the growing consular demand and long wait times at some posts. In the event of staffing shortfalls, State has mechanisms for requesting increased staff resources. For example, if the Consular Affairs Bureau identifies a need for additional staff in headquarters or overseas, it may request that the Human Resources Bureau establish new positions. In addition, posts can also describe their needs for additional positions through their consular package—a report submitted annually to the Consular Affairs Bureau that details workload statistics and staffing requirements, among other things. For example, in December 2004, during the course of our work, the consular section in Riyadh reported to Washington that there was an immediate need to create a midlevel visa chief position at post, and consular officials worked with human resource officials to create this position, which, according to State officials, would be filled by summer 2005. State’s current assignment process does not guarantee that all authorized positions will be filled, particularly at hardship posts. Historically, State has rarely directed its employees to serve in locations for which they have not bid on a position, including hardship posts or locations of strategic importance to the United States, due to concerns that such staff may be more apt to have poor morale or be less productive. Due to State’s decision not to force assignments, along with the limited number of midlevel officers available to apply for them, important positions may remain vacant. According to a deputy assistant secretary for human resources, Consular Affairs can prioritize those positions that require immediate staffing to ensure that officers are assigned to fill critical staffing gaps.
For example, Consular Affairs could choose not to advertise certain positions of lesser priority during an annual assignment cycle. However, senior Consular Affairs officials acknowledged that they rarely do this. According to these officials, Consular Affairs does not have direct control over the filling of all consular positions and can often face resistance from regional bureaus and chiefs of mission overseas who do not want vacancies at their posts. Thus, as we have previously reported, certain high-priority positions may not be filled if Foreign Service officers do not bid on them. In commenting on a draft of our September 2005 report, State disagreed with our recommendation that it prepare a comprehensive plan to address vulnerabilities in consular staffing. State argued that it already had such a plan. Moreover, State claimed that it appreciates that priority positions must be filled worldwide based on the relative strategic importance of posts and positions. While State argued that every visa consular officer is serving a strategic function, the department identified one post, Embassy Baghdad, as a clear example of a priority post. Further, State acknowledged that it has fewer midlevel consular officers than it needs. We continue to believe it is incumbent on the department to conduct a worldwide analysis to identify high-priority posts and positions, such as supervisory consular positions in posts with high-risk applicant pools or those with high workloads and long wait times for applicant interviews. Although State noted that it anticipated addressing this shortage of midlevel consular officers, it did not indicate when that gap would be filled. On January 18, 2006, the Secretary of State announced the department’s plan to restructure overseas and domestic staffing. This plan aims to shift U.S. diplomatic personnel from European posts and headquarters offices to posts in Africa, South Asia, the Middle East, and elsewhere. 
While we have not conducted a comprehensive review of this initiative, only midlevel political, economic, and public diplomacy officers, and not consular officers, would comprise the initial realignment of 100 positions, according to State officials. In February 2006, consular officials told us that, since our report, they concluded a review of consular position grades to ensure that they reflect the work requirements for each consular position. Based on this analysis, consular officials recommended that 47 positions be upgraded—from an entry- to midlevel position, for example—to reconcile the management structures of posts that have undergone rapid growth. However, State’s bidding and assignment process does not guarantee that the positions of highest priority will always be filled with qualified officers. Therefore, a further assessment is needed to ensure that State has determined its staffing requirements and placed the right people in the right posts with the necessary skill levels. The visa process presents a balance between facilitating legitimate travel and identifying those who might harm the United States. State, in coordination with other agencies, has made substantial improvements to the visa process to strengthen it as a national security tool. However, given the large responsibility placed on consular officers, particularly entry-level officers, it is critical to provide consular posts with the resources necessary for them to be effective. Indeed, extensive delays for visa interview appointments point to the need for State to perform a rigorous assessment of staffing requirements to achieve its goal of having the right people with the right skills in the right places. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or Members of the Committee may have. For questions regarding this testimony, please call Jess T. Ford, (202) 512-4128 or fordj@gao.gov. 
Individuals making key contributions to this statement include John Brummet, Assistant Director, and Kathryn Bernet, Eugene Beye, Joseph Carney, and Jane Kim.

Border Security: Strengthened Visa Process Would Benefit From Improvements in Staffing and Information Sharing. GAO-05-859. September 13, 2005.
Border Security: Actions Needed to Strengthen Management of Department of Homeland Security’s Visa Security Program. GAO-05-801. July 29, 2005.
Border Security: Streamlined Visas Mantis Program Has Lowered Burden on Foreign Science Students and Scholars, but Further Refinements Needed. GAO-05-198. February 18, 2005.
Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. September 9, 2004.
Border Security: Additional Actions Needed to Eliminate Weaknesses in the Visa Revocation Process. GAO-04-795. July 13, 2004.
Visa Operations at U.S. Posts in Canada. GAO-04-708R. May 18, 2004.
Border Security: Improvements Needed to Reduce Time Taken to Adjudicate Visas for Science Students and Scholars. GAO-04-371. February 25, 2004.
State Department: Targets for Hiring, Filling Vacancies Overseas Being Met but Gaps Remain in Hard-to-Learn Languages. GAO-04-139. November 19, 2003.
Border Security: New Policies and Procedures Are Needed to Fill Gaps in the Visa Revocation Process. GAO-03-798. June 18, 2003.
Border Security: Implications of Eliminating the Visa Waiver Program. GAO-03-38. November 22, 2002.
Technology Assessment: Using Biometrics for Border Security. GAO-03-174. November 15, 2002.
Border Security: Visa Process Should Be Strengthened as an Antiterrorism Tool. GAO-03-132NI. October 21, 2002.
State Department: Staffing Shortfalls and Ineffective Assignment System Compromise Diplomatic Readiness at Hardship Posts. GAO-02-626. June 18, 2002.
State Department: Tourist Visa Processing Backlogs Persist at U.S. Consulates. GAO/NSIAD-98-69. March 13, 1998.
State Department: Backlogs of Tourist Visas at U.S. Consulates.
GAO/NSIAD-92-185. April 30, 1992.
In deciding to approve or deny a visa application, the Department of State's (State) consular officers are on the front line of defense in protecting the United States against those who seek to harm U.S. interests. To increase border security following the September 11 attacks, Congress, State, and the Department of Homeland Security initiated a series of changes to border security policies and procedures. These changes have added to the complexity of consular workload. But consular officers must balance this security responsibility against the need to facilitate legitimate travel. In recent years, GAO has issued a series of reports on the visa process. This statement discusses (1) wait times for visas, (2) factors that affect wait times, and (3) GAO's recent work on consular staffing. As a result of changes since September 11, 2001, aimed at strengthening visa policies and procedures, applicants have faced extensive wait times for visas at some posts. According to consular officials, posts that consistently have wait times of 30 days or longer for interview appointments may have a resource problem. During a recent 6-month period, 97 of State's 211 visa-issuing posts reported maximum wait times of 30 or more days in at least one month; at 20 posts, the reported wait times were in excess of 30 days for this entire 6-month period. Further, in February 2006, 9 posts reported wait times in excess of 90 days. Several factors have contributed to these delays at some consular posts. For example, Congress, State, and the Department of Homeland Security have initiated new policies and procedures since the September 11 attacks to strengthen the security of the visa process; however, these new requirements have increased consular workload and exacerbated delays. Additionally, some applicants have faced additional delays because of special security checks for national security concerns. 
Other factors, such as a resurgence in visa demand and ongoing embassy facility limitations, could continue to affect wait times. We recently reported that State had not conducted a worldwide, comprehensive assessment of staffing requirements for visa operations. While State has increased hiring of consular officers, there is a need for such an assessment to ensure that State has sufficient staff at key consular posts, particularly in light of the visa processing delays at some posts.
According to Census data, in 2005 an estimated 21.9 million households, or 20 percent of the 111.1 million households nationwide, were “veteran households”—that is, they had at least one member who was a military veteran. As figure 1 shows, most veteran households—about 80 percent—owned their own homes, a significantly higher percentage than was the case for other (nonveteran) households. Census data also show that renter households were more likely to be low income than were owner-occupied households. In 2005, an estimated 36.8 million households nationwide rented homes, including about 4.3 million veteran households. Approximately 66 percent of renter households were low income; in contrast, 32 percent of homeowners were low income. Many of these households must rent because they lack sufficient income and savings to purchase a home. Furthermore, studies by HUD and others have noted the difficulties many renters face in finding a place with affordable rents because growth in household incomes has not kept pace with rising rents in many markets. VA, through a variety of programs, provides federal assistance to veterans who are homeless, and also provides homeownership assistance, but does not provide rental assistance. One of the agency’s largest programs for homeless veterans is the Homeless Providers Grant and Per Diem (GPD) program, which provides funding to nonprofit and public agencies to help temporarily shelter veterans. GPD funding can be used for purposes such as paying for the construction or renovation of transitional housing and reimbursing local agencies for operating the program. In fiscal year 2005, the GPD program spent about $67 million and had about 8,000 beds that were available to homeless veterans. VA also administers eight other programs for outreach and treatment of homeless veterans. In addition to its homelessness programs, VA provides a variety of programs, services, and benefits to veterans and their families.
Included among them are pension payments, disability payments, health care services, training and education allowances, and burial expenses. VA assists veterans in becoming homeowners through its Home Loan Guaranty program, which offers mortgages with favorable terms, including no down payment, limitations on closing costs, no private mortgage insurance, and easier credit standards to qualify for a loan. HUD provides rental housing assistance through three major programs—housing choice voucher, public housing, and project-based. In fiscal year 2005, these programs provided rental assistance to about 4.8 million households and paid about $28 billion in rental subsidies. These three programs generally serve low-income households—that is, households with incomes less than or equal to 80 percent of AMI. Most of these programs have targets for households with extremely low incomes—30 percent or less of AMI. HUD-assisted households generally pay 30 percent of their monthly income, after certain adjustments, toward their unit’s rent. HUD pays the difference between the household’s contribution and the unit’s rent (under the voucher and project-based programs) and the difference between the PHAs’ operating costs and rental receipts for public housing. The housing choice voucher program provides vouchers that eligible families can use to rent houses or apartments in the private housing market. Voucher holders are responsible for finding suitable housing, which must meet HUD’s housing quality standards. The subsidies in the voucher program are connected to the household (that is, tenant-based), so tenants can use the vouchers in new residences if they move. The approximately 2,500 PHAs that administer the voucher program are responsible for ensuring that tenants meet program eligibility requirements and that tenant subsidies are calculated properly.
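The tenant-contribution arithmetic described above can be illustrated with a minimal sketch. The function name and dollar figures are illustrative assumptions, and real program rules add details omitted here, such as utility allowances, payment standards, and minimum rents.

```python
def monthly_subsidy(adjusted_income: float, unit_rent: float) -> float:
    """Simplified HUD-style subsidy: the household pays 30 percent of
    its adjusted monthly income toward rent, and the program covers
    the remainder (never less than zero). Illustrative only."""
    tenant_share = 0.30 * adjusted_income
    return max(unit_rent - tenant_share, 0.0)

# A household with $1,500 in adjusted monthly income renting a $900
# unit pays $450 toward rent; the subsidy covers the remaining $450.
print(monthly_subsidy(1500, 900))
```

If the household's 30-percent share meets or exceeds the rent, the subsidy falls to zero, which is consistent with these programs serving low-income households.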
PHAs also are required to develop written policies and procedures to administer the program consistently with HUD regulations. The public housing program subsidizes the development, operation, and modernization of government-owned properties and provides units for eligible tenants in these properties. In contrast to the voucher program, the subsidies in the public housing program are connected to specific rental units (that is, project-based), so tenants receive assistance only when they live in these units. Approximately 3,300 PHAs manage the public housing program on behalf of HUD. PHAs are responsible for ensuring tenant eligibility for the program, properly calculating tenant subsidies, and ensuring that their policies and procedures conform to HUD regulations. Finally, through a variety of project-based programs, HUD provides rent subsidies in the form of multiyear housing assistance payments to private property owners and managers on behalf of eligible tenants. Tenants may apply for admission to these properties with project-based rental assistance contracts. About 22,000 property owners and managers currently participate in the programs and, similar to PHAs, must ensure tenants meet eligibility requirements, calculate subsidies correctly, and develop administrative policies and procedures that are consistent with HUD regulations. For most of these project-based properties, HUD contracts with PBCAs—typically state and local housing agencies—to oversee property management and process requests for payments from property owners. The PBCAs are also responsible for conducting annual management and occupancy reviews, which include reviewing property owners’ tenant selection plans. HUD rental assistance programs are not entitlements, and as a result, the amount of funding HUD requests and Congress provides annually limits the number of households that these programs can assist. 
Historically, funding for these programs has not been sufficient to assist all eligible households. Because the demand for rental assistance outstrips available resources, many PHAs and property owners have waiting lists of applicants seeking rental assistance. PHAs and property owners can use a system of preferences for giving certain populations—such as the elderly, veterans, or the homeless—priority in receiving assistance as units or vouchers become available. In addition to rental assistance, HUD funds a limited number of supportive services programs. The programs offer counseling, education and job training, mental health services, transportation, and child care, among other services. Generally, PHAs and property owners must apply for funding under these programs. Supportive services not funded by HUD can be made available through partnerships between individual properties, local organizations, and other federal agency programs. HUD administers other programs that help low-income households, including eligible veteran households, obtain access to affordable rental housing. Our review did not focus on these programs because they make up a relatively small percentage of HUD’s funding when compared with the three major rental assistance programs. Further, they are not solely rental assistance programs, but rather serve multiple purposes; for example, the HOME Investment Partnerships Program (HOME) provides formula grants to states and localities to build, acquire, and rehabilitate affordable housing for rent or homeownership. In addition, other federal agencies administer programs that provide forms of rental assistance to eligible populations, such as the Internal Revenue Service’s (IRS) Low-Income Housing Tax Credit program and U.S. Department of Agriculture’s (USDA) Rural Housing Service programs.
The tax credit program funds the development of rental units that are restricted to low-income households for a number of years, while USDA’s programs (which are small relative to HUD’s programs) fund the development of low-income rental units or subsidize rents in rural areas. Based on our analysis of ACS data, an estimated 2.3 million veteran renter households had low incomes in 2005. The numbers of low-income veteran renter households varied considerably by state, as did the percentages of veteran renter households that were low income. In terms of demographic characteristics, we found that a significant proportion of low-income veteran renter households had a veteran member who was elderly or had a disability. In addition, about 56 percent of low-income veteran renter households had problems affording their rents—that is, their housing costs exceeded 30 percent of household income. Finally, a small percentage of low-income veteran renters lived in overcrowded or inadequate housing. According to our analysis of ACS data, of the 4.3 million veteran households that rented their homes, an estimated 2.3 million, or about 53 percent, were low income in 2005. As shown in table 1, the largest share of these 2.3 million households was concentrated in the highest low-income category—that is, 50.1 to 80 percent of AMI—with somewhat smaller shares in the two lower categories. The table also shows that other renter households (that is, households without a veteran member) were even more likely to be low income than veteran renter households. Specifically, an estimated 22 million, or 68 percent, of the 32.5 million other renter households were low income. Further, the largest share of the 22 million households was concentrated in the lowest income category—that is, 30 percent or less of AMI. The estimated numbers of low-income veteran renter households in 2005 varied greatly by state, as shown in figure 2. 
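The low-income bands used in the report's tables are defined relative to area median income (AMI). As a rough illustration only (a sketch, not HUD's official determination, which relies on published area-specific income limits and household-size adjustments), the banding can be expressed as:

```python
def income_band(household_income, ami):
    """Place a renter household into the low-income bands used in this
    report's tables. Band labels follow HUD's conventional terms; the
    report itself names only the top and bottom bands explicitly."""
    ratio = household_income / ami
    if ratio <= 0.30:
        return "extremely low income (30% or less of AMI)"
    if ratio <= 0.50:
        return "very low income (30.1-50% of AMI)"
    if ratio <= 0.80:
        return "low income (50.1-80% of AMI)"
    return "above the low-income threshold"
```

For example, a household earning $50,000 in an area with an $80,000 AMI (62.5 percent of AMI) would fall in the highest low-income band, where the report found the largest share of veteran renter households.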
The estimated median number of low-income veteran renters in any state was about 34,000. California had significantly more low-income veteran renter households than any other state—more than 236,000, or about 10 percent of all such households nationwide—followed by Texas with about 142,000, and New York with about 135,000. The states with the smallest number of low-income veteran households were Vermont, Delaware, and Wyoming with less than 6,000 each. As shown in figure 3, the percentages of veteran renter households that were low income in 2005 also varied considerably by state. Michigan had the highest percentage—about 65 percent of its veteran renter households were low income, while Virginia had the lowest—about 41 percent. Table 8 in appendix II contains more detailed information about the number and percentages of low-income veteran renters in each state and the District of Columbia. Households with at least one veteran member who was elderly (that is, 62 years of age or older) or had a disability constituted a significant share of all low-income veteran renter households in 2005. Specifically, of the 2.3 million low-income veteran renter households, an estimated 816,000 (36 percent) had a member who was elderly. As shown in table 2, the incomes of these elderly veteran households generally were distributed fairly evenly across the three low-income categories. In comparison, other (nonveteran) low-income households had a lower percentage of elderly households. About 4 million (18 percent) of the 22 million other low-income renter households were elderly, with most of their income concentrated in the lowest income category. In 2005, an estimated 887,000, or 39 percent, of low-income veteran renter households had at least one veteran member with a disability. Similar to the elderly veteran renter households, the incomes of these households generally were distributed evenly across the different low-income categories (see table 3). 
In comparison, an estimated 6.8 million, or 31 percent, of other low-income households had a member with a disability. In marked contrast to veteran renter households with a disability, other such renters had household incomes that were considerably more concentrated in the lowest income category. In addition to the elderly and disability status of veteran households, we also analyzed information on selected other demographic characteristics—including race and ethnicity—of low-income veteran renter households nationally and at the state level. We include these results in appendix II. According to our analysis of ACS data, an estimated 1.3 million low-income veteran households, or about 56 percent of the 2.3 million such households, had rents that exceeded 30 percent of their household income in 2005 (see table 4). These veteran renter households had what HUD terms “moderate” or “severe” problems affording their rent. Specifically, about 31 percent of low-income veteran renter households had moderate affordability problems, and about 26 percent had severe affordability problems. The remainder either paid 30 percent or less of their household income in rent, reported zero income, or did not pay cash rent. In comparison, a higher proportion of other low-income renter households had moderate or severe housing affordability problems. Specifically, of the 22 million other low-income renter households, an estimated 13.9 million, or about 63 percent, had a housing affordability problem, with these households somewhat evenly distributed between those with moderate and severe affordability problems. The extent of housing affordability problems among low-income veteran renter households varied significantly by state in 2005 (see fig. 4). The median percentage of low-income veteran renters with affordability problems nationwide was 54 percent. 
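The rent-burden classification described above can be sketched in a few lines. The 30 percent threshold comes from the report; the 50 percent boundary separating "moderate" from "severe" is HUD's conventional cutoff and is an assumption here, since the report does not state the boundary explicitly:

```python
def affordability_problem(annual_income, annual_rent):
    """Classify a household's rent burden. The 30 percent threshold is
    stated in the report; the 50 percent "moderate"/"severe" boundary is
    HUD's conventional cutoff, assumed for illustration."""
    if annual_income <= 0:
        return "zero income (tabulated separately)"
    burden = annual_rent / annual_income
    if burden <= 0.30:
        return "none"
    if burden <= 0.50:
        return "moderate"
    return "severe"
```

Under this sketch, a household earning $24,000 and paying $800 a month ($9,600 a year, or 40 percent of income) would have a moderate affordability problem.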
California and Nevada had the highest proportions of affordability problems among low-income veteran renter households—about 68 and 70 percent, respectively. North Dakota and Nebraska had the smallest—about 37 and 41 percent, respectively. Table 9 in appendix II contains detailed information on the percentage of low-income veterans with affordability problems by state. A relatively small percentage of veteran households lived in overcrowded or substandard housing in 2005. Specifically, an estimated 73,000, or 3 percent, of low-income veteran renter households lived in overcrowded housing—housing with more than one person per room—and fewer than 18,000, or about 1 percent, lived in severely overcrowded housing—housing with more than one and a half persons per room. In contrast, an estimated 1.5 million, or 7 percent, of other low-income renter households lived in overcrowded housing, and about 423,000, or 2 percent, lived in severely overcrowded housing. Finally, ACS data indicate that a very small share of low-income veteran renters lived in inadequate housing. ACS provides very limited information about the quality of the housing unit; the survey classifies a unit as inadequate if it lacks complete plumbing or kitchen facilities, or both. In 2005, an estimated 53,000, or 2 percent, of low-income veteran renter households lived in inadequate housing. In comparison, an estimated 334,000, or 2 percent, of other households lived in inadequate housing. HUD’s rental assistance programs do not take veteran status into account when determining eligibility or calculating subsidy amounts, and HUD does not collect any information identifying whether assisted households have members who are veterans. Veterans can participate in these programs if they meet eligibility requirements. Further, HUD policies generally do not distinguish between income sources that are specific to veterans, such as VA-provided benefits, and other sources of income. 
Instead, HUD takes into account the type of income, such as whether it is recurring or not. We found that, when calculating applicants’ incomes, HUD excludes most types of income and benefits that veterans may receive from VA, with the exception of recurring income, such as veterans’ pensions, disability payments, and survivor benefits. Although HUD’s major programs do not take veteran status into account for determining eligibility and subsidy amount, HUD allocated almost 1,800 vouchers that were specifically targeted to formerly homeless veterans in the early 1990s, but the number of vouchers in use has been declining. HUD’s major rental assistance programs are not required to take a household’s veteran status into account when determining eligibility and calculating subsidy amounts. Consequently, HUD does not collect any information that identifies the veteran status of assisted households. As with other households, veterans can benefit from HUD rental assistance provided that they meet all of the programs’ income and other eligibility criteria. For example, assisted households must meet U.S. citizenship requirements and, for some of the rental assistance programs, HUD’s criteria for an elderly household or a household with a disability. In addition to rental assistance, HUD makes available limited supportive services to some assisted households, typically through separate programs, but like rental assistance, none of these supportive services programs take veteran status into account when determining eligibility. An example is HUD’s Multifamily Housing Service Coordinator grant program, which pays for coordinators to assist residents (at properties designated for the elderly and persons with disabilities) in obtaining supportive services from community agencies. (See table 11 in app. III for a description of other programs through which HUD makes supportive services available.) 
While the programs disregard veteran status, they may provide services to veterans who receive HUD rental assistance. HUD does not collect information identifying veteran households that its supportive services programs serve, but agency officials stated that HUD’s supportive services programs likely assist a small number of veterans because the programs serve a relatively small percentage of all assisted households. When determining income eligibility and subsidy amounts, HUD generally does not distinguish between income sources that are specific to veterans, such as VA-provided benefits, and other types of income. HUD policies define household income as the anticipated gross annual income of the household, which includes income from all sources received by the family head, spouse, and each additional family member who is 18 years of age or older. Specifically, annual income includes, but is not limited to, wages and salaries, periodic amounts from pensions or death benefits, and unemployment and disability compensation. HUD policies identify 39 separate income sources and benefits that are excluded when determining eligibility and subsidy amounts. These exclusions relate to income that is nonrecurring or sporadic in nature, health care benefits, student financial aid, and assistance from certain employment training and economic self-sufficiency programs. We found that, based on HUD’s policies on income exclusions, most types of income and benefits that veteran households receive from VA would be excluded when determining eligibility for HUD’s programs and subsidy amounts. (See table 12 in app. IV for a detailed listing of these benefits.) Many of the excluded benefits relate to payments that veteran households receive under certain economic self-sufficiency programs or nonrecurring payments such as insurance claims. 
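The income-counting rule described above, in which recurring sources count toward anticipated gross annual income while nonrecurring or service-type benefits are excluded, might be sketched as follows. The category names are hypothetical stand-ins, not HUD's official list of 39 exclusions:

```python
# Income categories that count toward anticipated gross annual income
# (recurring sources); everything else is treated as excluded.
# Illustrative only -- HUD's actual rules enumerate specific sources.
COUNTED = {"wages", "pension", "disability_compensation", "survivor_benefits"}

def anticipated_annual_income(sources):
    """Sum only the counted income categories.

    sources: mapping of income category -> annual dollar amount.
    """
    return sum(amt for category, amt in sources.items() if category in COUNTED)
```

Under this sketch, a veteran household with $12,000 in wages, a $6,000 VA pension, and $4,000 in education benefits would have only the first two sources counted, for an annual income of $18,000.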
Of the benefits included, most are associated with recurring or regular sources of income, such as disability compensation, pensions, and survivor death benefits. Of the 39 exclusions, we found that two income exclusions specifically applied to certain veteran households but, according to HUD, these exclusions are rarely used. These income exclusions are (1) payments made to Vietnam War-era veterans from the Agent Orange Settlement Fund and (2) payments to children of Vietnam War-era veterans who suffer from spina bifida. The two exclusions are identified in federal statutes that are separate from those authorizing the three major rental assistance programs. Under the Housing and Urban Development-Veterans Affairs Supportive Housing program (HUD-VASH), HUD provides rental assistance vouchers specifically to veterans, but the number of veterans served is extremely small and has been declining in recent years. Established in 1992, HUD-VASH is jointly funded by HUD and VA and offers formerly homeless veterans an opportunity to obtain permanent housing, as well as ongoing case management and supportive services. HUD allocated these special vouchers to selected PHAs that had applied for funding, and VA was responsible for identifying participants based on specific eligibility criteria, including the veteran’s need for treatment of a mental illness or substance abuse disorder. After selecting eligible veterans, VA and the PHA worked together to help the veterans use the vouchers to rent suitable housing, and VA provided ongoing case management, health, and other supportive services. Under the HUD-VASH initiative, HUD allocated 1,753 vouchers from fiscal years 1992 through 1994. HUD funded these vouchers for 5 years and, if a veteran left the program during this period, the PHA had to reissue the voucher to another eligible veteran. 
VA officials stated that, after the 5-year period ended, PHAs had the option of continuing to use their allocation of vouchers for HUD-VASH, or could discontinue participation whenever a veteran left the program (that is, the PHA would not provide the voucher to another eligible veteran upon turnover). According to VA and HUD officials, after the 5-year period ended, many PHAs decided not to continue in HUD-VASH after assisted veterans left the program; instead, PHAs exercised the option of providing these vouchers to other households under the housing choice voucher program. As a result, the number of veterans who receive HUD-VASH vouchers has declined. Based on VA data, about 1,000 veterans were in the program as of the end of fiscal year 2006, and this number is likely to decline. Specifically, VA officials estimated that the number of veterans served could drop to 400 because PHAs responsible for more than 600 vouchers have decided not to continue providing these vouchers to other veterans as existing participants leave the program. Congress permanently authorized HUD-VASH as part of the Homeless Veterans Comprehensive Assistance Act of 2001. Under the act, Congress also authorized HUD to allocate 500 vouchers each fiscal year from 2003 through 2006—a total of 2,000 additional vouchers. In December 2006, Congress extended this authorization through fiscal year 2011—authorizing a total of 2,500 additional vouchers, or 500 each year. However, HUD has not requested, and Congress has not appropriated, funds for any of the vouchers authorized from fiscal years 2003 through 2007. Less than half of the 41 largest PHAs we contacted employed a veterans’ preference for admission to their public housing or voucher programs, while the 13 largest PBCAs we contacted reported that owners of project-based properties that they oversee generally did not use a veterans’ preference. 
HUD allows, but does not require, PHAs and property owners to establish preferences to better direct resources to families with the greatest housing needs in their area. HUD does not aggregate information on the extent to which PHAs and property owners use preferences. Our review showed that 29 of the 34 largest PHAs that administered public housing programs in fiscal year 2006 offered preferences and, of these, 14 offered a veterans’ preference. Similarly, 34 of the 40 largest PHAs that administered the housing choice voucher program in fiscal year 2006 offered preferences and, of these, 13 offered a veterans’ preference. Finally, officials from the 13 largest PBCAs told us that, in their experience, owners of project-based properties that they oversee generally did not employ a veterans’ preference when selecting tenants. Currently, HUD’s policies give PHAs and owners of project-based properties the discretion to establish preferences for certain groups when selecting households for housing assistance. Preferences affect only the order of applicants on a waiting list for assistance; they do not determine eligibility for housing assistance. Before 1998, federal law required PHAs and property owners to offer a preference to eligible applicants to their subsidized housing programs who (1) had been involuntarily displaced, (2) were living in substandard housing, or (3) were paying more than half their income for rent. PHAs were required by law to allocate at least 50 percent of their public housing units and 90 percent of their housing choice vouchers to applicants who met these criteria. Similarly, project-based owners had to allocate 70 percent of their units to newly admitted households that met these criteria. The Quality Housing and Work Responsibility Act of 1998 (QHWRA) gave more flexibility to PHAs and project-based property owners to administer their programs, in part by eliminating the mandated housing preferences. 
Although it gave PHAs and owners more flexibility, QHWRA required that PHAs and owners target assistance to extremely low-income households. Under QHWRA, PHAs and owners of project-based properties may, but are not required to, establish preferences to better direct resources to those with the greatest housing needs in their areas. PHAs can select applicants on the basis of local preferences provided that their process is consistent with their administrative plan. HUD policy requires PHAs to specify their preferences in their administrative plans, and HUD reviews these preferences to ensure that they conform to nondiscrimination and equal employment opportunity requirements. Similarly, HUD policy allows owners of project-based properties to establish preferences as long as the preferences are specified in their written tenant selection plans. While HUD requires PHAs and property owners to disclose their preferences in their administrative or tenant selection plans, HUD officials said the department does not compile or systematically track this information because PHAs and property owners are not required to have preferences. However, HUD may examine the use of preferences as part of specific studies or reports. For example, HUD discussed the use of preferences by PHAs in its November 2000 report on the use of discretionary authority in the housing choice voucher program. HUD reported that about 71 percent of the 1,684 PHAs that were reviewed used admission preferences for the housing choice voucher program. Further, the study found that PHAs offered need-based preferences, as well as other local preferences, including those for households achieving self-sufficiency, but the report did not discuss whether the PHAs used a veterans’ preference. 
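The waiting-list mechanics described above, in which a preference affects an applicant's position on the list but not eligibility, might be sketched as follows. This is a hypothetical illustration; actual PHA systems may weight multiple preferences, use date-and-time ordering, or run a lottery, as discussed later in this report:

```python
from datetime import date

def order_waiting_list(applicants):
    """Order a waiting list: preference holders first, earlier
    application dates breaking ties. Eligibility is determined
    separately; a preference changes only an applicant's position.

    applicants: list of (name, has_preference, application_date) tuples.
    """
    # sorted() is stable: False (has preference) sorts before True.
    return sorted(applicants, key=lambda a: (not a[1], a[2]))
```

For example, an applicant with a preference who applied in June 2020 would be placed ahead of both a preference holder who applied in 2021 and a non-preference applicant who applied in January 2020.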
While HUD’s policies give PHAs the discretion to establish preferences for certain groups when selecting households (including those with veterans) for housing assistance, recently proposed legislation would develop and expand permanent housing opportunities for very low-income veterans. Specifically, legislation introduced in the Senate would require that, among other things, PHAs and states and localities include veterans as a special needs population in their PHA plans and comprehensive housing affordability strategies. Most of the 41 PHAs we contacted used a preference system for admission to their public housing and housing choice voucher programs, but less than half offered a veterans’ preference. As shown in table 5, of the 34 largest PHAs that administered the public housing program, 29 established preferences for admission to the program and 14 used a veterans’ preference. Similarly, of the 40 PHAs that administered the housing choice voucher program, 34 used admission preferences, and 13 employed a preference for veterans. According to PHA officials, the most common preferences used for both programs were for working families, individuals who were unable to work because of age or disability, and individuals who had been involuntarily displaced or were homeless. Of course, veterans could benefit from these admission preferences if they met the criteria. Some of the PHAs we contacted offered a veterans’ preference because their states required them to do so. Other PHA officials told us they offered a veterans’ preference because they believed it was important to serve the needs of low-income veterans since they had done so much for the well-being of others. PHAs that we contacted that did not offer a veterans’ preference gave various reasons for their decisions. Some officials told us that the PHA did not need a veterans’ preference because veteran applicants generally qualified under other preference categories, such as elderly or disabled. 
One PHA official we contacted said a veterans’ preference was not needed because of the relatively small number of veterans in the community. Because PHAs can employ multiple preferences, many of the PHAs that have a preference system weight or rank the preferences they use—that is, they give greater weight to an applicant who falls within a particular category—to determine position on the waiting list. Almost two-thirds of the PHAs we contacted that administer a preference system for their public housing programs weight or rank preferences. Nevertheless, only four of these weighted systems allowed veterans to receive priority over other populations that received preferences. Similarly, a little more than half of the PHAs that used preferences for their housing choice voucher programs weighted or ranked preferences. But only three of these PHAs gave priority to veterans over other populations that also were eligible to receive a preference. The remaining PHAs that have a preference system for their public housing or housing choice voucher programs told us that they either assigned equal weight to the preferences they offered, or used date and time or a lottery system to determine the order in which they selected applicants from waiting lists. In a 2004 examination of PHAs’ waiting lists, the National Low Income Housing Coalition found that more than three-quarters of the agencies that it reviewed used preferences for specific categories of applicants to order waiting lists for their public housing and housing choice voucher programs. In addition, the study found that less than one-quarter of the agencies used a veterans’ preference to determine the order of their waiting lists. Specifically, a little less than 25 percent of the PHAs that administered a public housing program had a veterans’ preference, while 20 percent of the PHAs that ran housing choice voucher programs used such a preference. 
Furthermore, the study found that PHAs most commonly gave preferences to applicants who were employed, involuntarily displaced from previous housing, victims of domestic violence, or residents of the PHA’s jurisdiction. According to all of the PBCAs we contacted, owners of project-based properties that they oversee generally did not employ a veterans’ preference when selecting tenants. Ten of the 13 largest PBCAs told us, based on their review of property owners’ tenant selection plans, that owners of project-based properties generally did not employ preferences for any specific population. Officials from the remaining three PBCAs said they were aware of some property owners offering preferences to individuals who had been involuntarily displaced, working families, or those unable to work because of age or disability. However, all the PBCAs we contacted either said that property owners did not use preferences or agreed that the use of preferences, including a veterans’ preference, among owners of properties with project-based assistance was limited. HUD officials to whom we spoke also stated, based on their experience with tenant selection plans, that the use of preferences at project-based properties likely was infrequent. Although most PBCAs stated that property owners did not generally employ preferences, the use of such preferences can vary significantly even within one PBCA’s portfolio of properties. For example, a PBCA official said that the demand for subsidized housing can influence whether owners use preferences. Properties in communities with a high demand for subsidized housing may need to establish preferences to manage waiting lists, and those in communities with low demand may not need to use preferences. Our analysis of ACS, HUD, and VA data shows that, in 2005, low-income veteran renter households were less likely to receive rental assistance than other low-income households. 
An estimated 11 percent of all low-income veteran renter households received HUD rental assistance, compared with 19 percent of other low-income households. Although the reasons for this difference are unclear, various factors—such as different levels of need for affordable housing among veteran and other households—could contribute to the disparity. In 2005, at least 250,000 low-income veteran households received rental assistance under HUD’s programs—representing about 6 percent of all households that received such assistance. The demographic characteristics of these veteran-assisted households differed somewhat from those of other (nonveteran) assisted households; for example, veteran-assisted households were more likely to have a disability compared with other assisted households. Low-income veteran renter households were less likely to receive HUD rental assistance than other households. As shown in table 6, of the total 2.3 million veteran renter households with low incomes, at least 250,000 (or 11 percent) received HUD assistance. In comparison, of the 22 million other renter households with low incomes, 4.1 million (about 19 percent) received HUD assistance. (As noted previously, although HUD is the largest provider of federal rental housing assistance to low-income households, it is not the sole source of such assistance. Thus, these percentages likely understate the actual share of all eligible veteran renter households that receive federal rental assistance.) The reasons why other households were nearly twice as likely as veteran households to receive HUD assistance are unclear. But, based on our analyses and discussions with agency officials, some potential explanations include (1) differences in the extent of housing needs between veteran and other households, (2) infrequent use of a veterans’ preference by PHAs and property owners, and (3) statutory requirements for targeting extremely low-income households. 
First, as discussed earlier in this report, although a significant proportion of low-income veteran households face affordability problems, an even larger proportion of other (nonveteran) households face more severe affordability problems. Thus, the level of veteran demand for rental assistance may be lower than that of nonveteran households. Second, and again as discussed earlier in this report, HUD rental assistance programs do not take veteran status into account when determining eligibility, and most PHAs and property owners do not offer a veterans’ preference. As a result, these policy decisions likely focus resources on other types of low-income households with housing needs. Third, although low-income households generally are eligible to receive rental assistance from HUD’s three programs, statute requires that a certain percentage of new program participants must be extremely low income. These targeting requirements may lead to a higher share of HUD rental assistance going to nonveteran households because veteran households generally are less likely to fall within the extremely low-income category. According to HUD, other federal rental assistance programs (such as IRS’s Low-Income Housing Tax Credit, HUD’s HOME, and USDA’s rental assistance programs) also can provide assistance to veterans. Thus, the share of veterans receiving HUD rental assistance does not reflect the share of veterans that receive some other form of federal rental assistance. Furthermore, according to HUD, veterans may be more likely to receive rental assistance from some of these other programs, in part because these other programs do not target extremely low-income households as do HUD’s voucher, public housing, and project-based programs. However, data are not available to determine the extent to which veterans may be benefiting from other forms of federal rental assistance. 
In fiscal year 2005, HUD’s rental assistance programs reached an estimated 250,000 low-income veteran households, which constituted approximately 6 percent of all HUD-assisted households. The housing choice voucher program served the largest number of veteran households, followed by the project-based program and the public housing program (see fig. 5). However, a slightly higher proportion of veteran households participated in the public housing program (6.9 percent) than participated in the voucher (5.7 percent) and project-based (5.2 percent) programs. We found some similarities in the demographic characteristics of veterans and other assisted households we analyzed. For example, compared with other assisted households, HUD-assisted veteran households were about as likely to be elderly. Specifically, in fiscal year 2005, about 75,000, or 30 percent, of assisted veteran households were elderly, and about 1.3 million, or 31 percent, of other assisted households were elderly. About 40,000, or 54 percent, of these elderly veteran households received assistance through project-based programs. Public housing provided rental assistance to about 20,000 elderly veteran households and vouchers to about 15,000. HUD-assisted veteran households were more likely than other assisted households to have a disability. In fiscal year 2005, HUD provided assistance to about 88,000 veteran households with a disability, or about 34 percent of assisted veteran households. In comparison, 1.2 million, or 28 percent, of other assisted households had a disability. Among veteran households with a disability, about 41,000 (or 46 percent) received assistance from vouchers. Public housing and project-based programs each provided rental assistance to less than one-third of these households with a disability (about 24,000 and 23,000, respectively). Appendix V contains more detailed information about the number and percentages of HUD-assisted veteran households in each state and the District of Columbia. 
We provided VA and HUD with a draft of this report for review and comment. In an e-mail from its Office of Congressional and Legislative Affairs, VA agreed with the findings that related to VA and offered no other comments. HUD provided comments in a letter from the Deputy Assistant Secretary for Public Housing and Voucher Programs, Office of Public and Indian Housing; this letter is reprinted in appendix VI. The Deputy Assistant Secretary’s letter states that “HUD objects to the characterization that policies for its three major rental assistance programs generally do not take veteran status into account when determining eligibility or assistance levels” and notes that “HUD cannot mandate that a PHA establish any particular type of preference” for their voucher program. Our report does not state that HUD can mandate preferences for any of the three major rental assistance programs but rather acknowledges that the Quality Housing and Work Responsibility Act of 1998 repealed federally mandated preferences and provided individual PHAs and property owners with the authority to establish preferences, including a veterans’ preference. Furthermore, how veteran/nonveteran status affects eligibility for HUD programs is distinct from whether or not a preference is extended once eligibility has been established. As our report states, our reporting objectives addressed both of these issues: (1) how HUD’s rental assistance programs treat veteran status (that is, whether a person is a veteran or not) and veteran-specific benefits in determining eligibility and subsidy amounts and (2) the extent to which PHAs and property owners participating in HUD’s rental assistance programs establish a veterans’ preference in their administrative and tenant selection plans. 
In our review of program eligibility policies and regulations and interviews with agency officials, we found no evidence that veteran status is a factor in determining eligibility for HUD’s programs, and HUD’s comment letter did not provide any such evidence. Accordingly, we did not change our report in this regard. Our report states that, in determining eligibility for its programs, HUD generally does not distinguish between income that is specific to veterans and other sources of income. In its comments, HUD stated that the department’s policies exclude specific types of benefits that some veterans may receive, such as health care benefits and income from job training programs. Our report acknowledges that certain types of veteran-specific income sources are considered as income for determining eligibility and subsidy amounts, but notes that it is the type of income that matters—such as whether or not it is recurring—not the source. Our report specifically states that “when calculating applicants’ incomes, HUD excludes most VA-provided benefits, such as payments for training and education or health care services, but includes veterans’ pensions, disability payments, and survivor benefits, which are recurring payments.” Accordingly, we did not change our report in response to HUD’s comment. HUD also commented on our methodology for estimating the extent to which veterans are served in HUD’s programs. Specifically, HUD noted that since information for all veterans in VA’s database may not be complete, our estimate of 250,000 veterans assisted by HUD’s programs in 2005 would be affected. As our report states, we matched data from HUD on program participants with data from VA on living veterans using unique identifying information and used these matched data to estimate the percentage of low-income veteran renter households that receive HUD rental assistance. 
Our report notes that this could be an underestimate of the actual number of veteran households in the programs because of incomplete or erroneous data in either VA’s or HUD’s databases. In cases where we had incomplete information, such as missing Social Security numbers, we attempted alternate ways of identifying HUD-assisted veteran households, including matching records using names and date of birth only. We continue to believe that our estimate is a reasonable measure of the extent to which HUD-assisted households are veteran households. However, in response to HUD’s comment, we changed our report to say “at least 250,000” in order to acknowledge the possible undercount. We are sending copies of this report to interested Members of Congress, the Secretary of Housing and Urban Development, and the Secretary of Veterans Affairs. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or woodd@gao.gov if you or your staff have any questions about this report. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. The Department of Housing and Urban Development’s (HUD) housing assistance programs within the scope of our review include the three major rental assistance programs—housing choice voucher (voucher), public housing, and project-based programs (including the project-based Section 8, Section 202 Supportive Housing for the Elderly, and Section 811 Supportive Housing for Persons with Disabilities programs). To determine the income status and demographic and housing characteristics of veteran households, we analyzed data from the U.S. 
Bureau of the Census’s (Census) 2005 American Community Survey (ACS), which identified households’ veteran status, income, and other demographic characteristics, in conjunction with HUD’s defined income categories: low (80 percent or less of area median income, or AMI), very low (50 percent or less of AMI), and extremely low (30 percent or less of AMI). ACS is an annual survey conducted by Census to obtain current information about the demographic, socioeconomic, and housing characteristics of U.S. communities nationwide. ACS is scheduled to replace the traditional long-form survey in the decennial census, beginning in 2010. As of January 2005, ACS collected information for 3,141 counties, American Indian reservations, Alaska Native tribal areas, and Hawaiian homelands in the United States. Using HUD’s income limits for fiscal year 2005, we estimated, by geographic area, the number of veteran households that were in each income category. We also used information on veteran households in ACS to describe their demographics, as well as the cost and quality of their housing. Specifically, we obtained information on each household’s tenure (renter- or owner-occupied), disability status, elderly status, race and ethnicity, housing affordability category (for example, households that paid 30 percent or less, 30.1 to 50 percent, and more than 50 percent of household income in rent), extent of overcrowding, and indicators of housing quality. Census prepared tabulations of these results based on our specifications. ACS is the largest household survey in the United States, with an annual sample size of about 3 million addresses. ACS uses probability sampling, which helps ensure that survey estimates are representative of the population. Because a survey produces estimates of the whole population using only a portion of the population, all survey estimates contain sampling errors. 
This means that the estimates derived from the sample would be different if the survey had selected another sample. Since each sample could have provided different estimates, we express our confidence in the precision of this sample’s results as 90 percent confidence intervals. This is the interval that would contain the actual population value for 90 percent of the samples that could have been drawn. As a result, we are 90 percent confident that each of the confidence intervals will include the true values in the study population. In this report, instead of providing the upper and lower confidence bounds, we provide margin of error, which is the difference between an estimate and its upper or lower confidence bound. We express margin of error as a percentage (for example, plus or minus 7 percent). The sample for the 2005 ACS does not contain information on all veterans in the United States. Specifically, the sample design does not include individuals who live in group quarters—which include college dormitories, correctional facilities, and certain types of nursing facilities and hospitals—or homeless individuals. As a result, ACS likely underestimates the number of veterans to the extent that veterans live in group quarters or are homeless. We assessed the reliability of the data we received from Census by reviewing relevant documentation, interviewing knowledgeable officials, performing electronic testing of the data, and replicating published tables. In addition, we reviewed Census’ quality review process to ensure the completeness and accuracy of the tabulation that Census prepared at our request. We determined that the data are reliable for the purposes of this report. To determine whether HUD’s rental assistance programs take veteran status into account when determining eligibility and subsidy amount, we reviewed HUD’s policies and regulations for the voucher, public housing, and project-based programs. 
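The margin-of-error convention described above can be illustrated with a short sketch. The estimate, sample size, and function names below are hypothetical, and the formula assumes a simple random sample, which is only an approximation of ACS's actual complex sample design and variance estimation:

```python
import math

Z_90 = 1.645  # z-value corresponding to a 90 percent confidence level

def margin_of_error(p, n):
    """Margin of error, as a proportion, for an estimated proportion p
    drawn from a hypothetical simple random sample of size n."""
    standard_error = math.sqrt(p * (1 - p) / n)
    return Z_90 * standard_error

def confidence_interval_90(p, n):
    """90 percent confidence interval: the estimate plus or minus
    its margin of error."""
    moe = margin_of_error(p, n)
    return (p - moe, p + moe)

# Illustrative only: an estimate of 56 percent from 500 sampled households
# yields a margin of error of a few percentage points.
moe = margin_of_error(0.56, 500)
lower, upper = confidence_interval_90(0.56, 500)
```

Under this convention, reporting the estimate with its margin of error (for example, 56 percent, plus or minus about 4 percent) conveys the same information as reporting the full confidence interval.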
To assess how these programs treat veteran-specific income and benefits, we reviewed HUD’s policies and regulations that define annual income, which is used to determine eligibility and calculate subsidy amounts. We also interviewed officials from HUD and the Department of Veterans Affairs (VA). To determine whether public housing agencies (PHA) and property owners participating in HUD’s programs have established a veterans’ preference, we interviewed officials from the 41 largest PHAs (all 41 administer the voucher program, and 34 also administer the public housing program) and the 13 largest performance-based contract administrators (PBCA) that oversee property management under the project-based rental assistance programs. Specifically, the PHAs and PBCAs that we interviewed were responsible for administering or overseeing more than half of the dollar assistance provided through each of the three programs in fiscal year 2005. However, the information on preferences cannot be statistically generalized to the other PHAs and property owners. We reviewed HUD’s policies and regulations for establishing preferences and obtained information from officials on the extent to which preferences, particularly a veterans’ preference, were used for tenant selection purposes. Additionally, we obtained and analyzed studies by HUD and others on the use of preferences in general. To determine the extent to which HUD’s rental assistance programs served veteran households in fiscal year 2005, we matched data from HUD on program participants with data from VA on living veterans and used these matched data to estimate the percentage of low-income veteran renter households that received HUD assistance. 
To determine the extent to which veteran households were served by HUD’s rental assistance programs, we obtained information on households receiving rental assistance from HUD’s administrative databases—Public and Indian Housing Information Center (PIC) and Tenant Rental Assistance Certification System (TRACS), as of September 30, 2005, and information on all living veterans from VA’s Beneficiary Identification and Records Location Subsystem (BIRLS), as of October 1, 2004. We matched data from HUD on program participants with data from VA on living veterans. Specifically, we matched the Social Security numbers, first and last names, and date of birth of the assisted households in PIC and TRACS with the corresponding information for veterans in BIRLS. For the records in PIC and TRACS that were matched to BIRLS, about 65 percent matched on Social Security number, first and last names, and date of birth; about 30 percent matched on Social Security number and some combination of names and date of birth; and about 5 percent matched on names and date of birth only. We used the resulting matched information to determine the number of veteran households that received rental assistance from HUD and the annual subsidy amount that HUD paid to veteran households in 2005. Our totals of HUD-assisted veteran households could underestimate the actual number of veteran households in the programs because of a lack of complete information on all living veterans in the data we obtained from VA. For example, Social Security numbers, which we used to match VA and HUD data, may not have been available for all veterans who served in the 1970s or earlier. However, we attempted to adjust for this by also conducting a match on veterans’ names and dates of birth only. Data entry errors in both VA and HUD systems also could contribute to fewer successful matches. 
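The tiered matching described above can be sketched as follows. The field names and record layout are hypothetical illustrations, not drawn from PIC, TRACS, or BIRLS, and real-world matching would also handle name variants and data entry errors:

```python
def match_tier(hud_rec, va_rec):
    """Return the tier on which two records match, or None.

    Tier 1: SSN, first and last names, and date of birth all agree.
    Tier 2: SSN agrees plus some combination of names and date of birth.
    Tier 3: names and date of birth only (e.g., records missing an SSN).
    """
    ssn = bool(hud_rec["ssn"]) and hud_rec["ssn"] == va_rec["ssn"]
    names = (hud_rec["first"], hud_rec["last"]) == (va_rec["first"], va_rec["last"])
    dob = hud_rec["dob"] == va_rec["dob"]
    if ssn and names and dob:
        return 1
    if ssn and (names or dob):
        return 2
    if names and dob:
        return 3
    return None

# Illustrative records: the VA record lacks an SSN, so the match falls
# back to names and date of birth only (tier 3).
hud = {"ssn": "123456789", "first": "JOHN", "last": "DOE", "dob": "1950-01-01"}
va_no_ssn = {"ssn": None, "first": "JOHN", "last": "DOE", "dob": "1950-01-01"}
```

The fallback tiers matter precisely because, as noted above, Social Security numbers may be missing for veterans who served in the 1970s or earlier.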
To assess the reliability of the HUD data from the PIC and TRACS databases and the VA data from the BIRLS database, we reviewed relevant documentation, interviewed knowledgeable officials, and conducted electronic testing of the data. We determined the data were sufficiently reliable for us to identify veterans who received assistance through HUD rental programs. For all of our research objectives, we consulted with officials from various housing and veterans groups, including Harvard University’s Joint Center for Housing Studies, the National Low Income Housing Coalition, the National Coalition for Homeless Veterans, the Corporation for Supportive Housing, Vietnam Veterans of America, the American Legion, and Volunteers of America. We also surveyed the literature on these topics. We conducted our work primarily in Atlanta, Boston, Chicago, Los Angeles, and Washington, D.C., from March 2006 through July 2007 in accordance with generally accepted government auditing standards. [Appendix table data omitted: state-by-state estimates of households with an affordability problem, with margins of error; see appendix V.]
Historically, Congress has recognized the importance of providing supportive services to veterans who are homeless or at risk of becoming homeless. Most of HUD’s rental assistance programs are not required to provide supportive services, with the exception of the Section 202 Supportive Housing for the Elderly and Section 811 Supportive Housing for Persons with Disabilities programs. However, households participating in HUD’s rental assistance programs can receive supportive services, typically through separate programs funded by HUD. Table 11 contains descriptions of these programs. When determining eligibility and subsidy amounts under HUD’s rental assistance programs, program administrators generally must calculate a household’s adjusted annual income, that is, gross income less any exclusions and deductions. HUD’s policies and statute provide for 39 different types of income exclusions and 5 deductions. When determining income eligibility and subsidy amounts, HUD generally does not distinguish between income sources that are specific to veterans, such as benefits that VA provides, and other types of income. As table 12 shows, most types of income sources and benefits that veteran households receive from VA would be excluded by HUD when determining eligibility and subsidy amounts. Excluded income sources and benefits generally relate to payments that veteran households receive under certain economic self-sufficiency programs or nonrecurring payments such as insurance claims. Of the benefits included, most are associated with recurring or regular sources of income, such as disability compensation, pensions, and survivor death benefits. In addition to the individual named above, Daniel Garcia-Diaz, Assistant Director; Carl Barden; Michelle Bowsky; Mark H. Egger; Cynthia Grant; John T. McGrail; Marc Molino; Josephine Perez; Carl Ramirez; Barbara Roesmann; and Rose M. Schuville made key contributions to this report.
Veterans returning from service in Iraq and Afghanistan could increase demand for affordable rental housing. Households with low incomes (80 percent or less of the area median income) generally are eligible to receive rental assistance from the Department of Housing and Urban Development's (HUD) housing choice voucher, public housing, and project-based programs. However, because rental assistance is not an entitlement, not all who are eligible receive assistance. In response to a congressional mandate, GAO assessed (1) the income status and demographic and housing characteristics of veteran renter households, (2) how HUD's rental assistance programs treat veteran status (whether a person is a veteran or not) and whether they use a veterans' preference, and (3) the extent to which HUD's rental assistance programs served veterans in fiscal year 2005. Among other things, GAO analyzed data from HUD, the Department of Veterans Affairs (VA), and the Bureau of the Census, surveyed selected public housing agencies, and interviewed agency officials and veterans groups. GAO makes no recommendations in this report. VA agreed with the report's findings. HUD objected to the report's characterization of HUD's policies on veteran status and program eligibility and subsidy amounts. In 2005, an estimated 2.3 million veteran renter households had low incomes. The proportion of veteran renter households that were low income varied by state but did not fall below 41 percent. Further, an estimated 1.3 million of these low-income veteran households, or about 56 percent, had housing affordability problems, that is, rental costs exceeding 30 percent of household income. Compared with other (nonveteran) renter households, however, veterans were somewhat less likely to be low income or to have housing affordability problems. 
HUD's policies for its three major rental assistance programs generally do not take veteran status into account when determining eligibility or assistance levels, but eligible veterans can receive assistance. Also, HUD generally does not distinguish between income that is specific to veterans, such as VA-provided benefits, and other sources of income. The majority of the 41 largest public housing agencies that administer the housing choice voucher or public housing programs have no veterans' preference for admission. The 13 largest performance-based contract administrators that oversee most properties under project-based programs reported that owners generally did not adopt a veterans' preference. In fiscal year 2005, an estimated 11 percent of all eligible low-income veteran households (at least 250,000) received assistance, compared with 19 percent of nonveteran households. Although the reasons for the difference are unclear, factors such as differing levels of need for affordable housing among veteran and other households could influence the percentages.
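The income categories and the affordability threshold described above can be sketched as a simple classifier. This is a hypothetical illustration only: HUD's actual determinations use area-specific income limits and apply the many exclusions and deductions discussed elsewhere in this report, none of which are modeled here:

```python
def income_category(annual_income, area_median_income):
    """Classify a household against HUD's income categories:
    extremely low (30 percent or less of AMI), very low (50 percent
    or less), and low (80 percent or less)."""
    ratio = annual_income / area_median_income
    if ratio <= 0.30:
        return "extremely low"
    if ratio <= 0.50:
        return "very low"
    if ratio <= 0.80:
        return "low"
    return "above low income"

def has_affordability_problem(annual_rent, annual_income):
    """A household has an affordability problem when rental costs
    exceed 30 percent of household income."""
    return annual_rent > 0.30 * annual_income

# Illustrative figures (not from the report): a household earning
# $24,000 in an area with an $80,000 median income, paying $12,000 in rent.
category = income_category(24_000, 80_000)
burdened = has_affordability_problem(12_000, 24_000)
```

A household paying between 30.1 and 50 percent of income in rent, or more than 50 percent, would fall into the progressively more burdened affordability categories used in the analysis above.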
First identified in 1981, HIV impairs the immune system and leaves affected individuals susceptible to certain cancers and infections. HIV, the virus that causes AIDS, affects specific cells of the immune system. Over time, HIV can destroy so many of these cells that the body cannot fight off infections and disease, leading to AIDS. A person who has HIV can move in and out of AIDS status, which is the third stage of the disease. Despite the number of deaths from AIDS and the steady increase in HIV prevalence, there have been successes in the fight against the disease. Developments in treatment have enhanced care options and can extend the lives of those with HIV. The introduction of highly active antiretroviral therapy in 1996 was followed by a decline in the number of deaths and new AIDS cases in the United States for the first time since the beginning of the disease. Since 1981, over 1.2 million persons diagnosed with AIDS have been reported to CDC, and over 600,000 of them have died. CDC estimates that of the more than 1.2 million persons living with HIV in December 2011, some 14 percent had not been diagnosed and might be unaware of their status. In 2010, the White House’s Office of National AIDS Policy issued a national strategy for addressing HIV and AIDS in the United States. The strategy has three primary goals: (1) reduce the number of persons who become infected with HIV, (2) increase access to care and improve health outcomes for persons living with HIV, and (3) reduce HIV-related health disparities. To accomplish these goals, the strategy calls for a coordinated national response to the disease. Congress created the HOPWA program in 1990 under the National Affordable Housing Act, authorizing grants for housing activities and supportive services designed to prevent homelessness among persons with HIV. 
Specifically, HOPWA grants are used to provide a wide range of housing-related services, including rental assistance; operating costs for housing facilities; short-term rent, mortgage, and utility payments; permanent housing placement and housing information services; resource identification (to establish, coordinate, and develop housing assistance); acquisition, rehabilitation, conversion, lease, and repair of facilities; new construction (for single-room occupancy dwellings and community residences only); and supportive services (case management and mental health, alcohol and drug abuse, and nutritional services). To be eligible for HOPWA, individuals must be HIV positive and low income (below 80 percent of area median income). HOPWA assists persons who are without stable housing arrangements, including those at severe risk of homelessness (e.g., persons in emergency shelters; persons living in a place not meant for human habitation, such as a vehicle or abandoned building; or persons living on the streets). HUD awards 90 percent of the annual HOPWA appropriation by formula to eligible metropolitan statistical areas (MSA) and states. On the basis of the statute, MSAs with populations greater than 500,000 and more than 1,500 cumulative cases of AIDS are eligible for HOPWA formula grants. The most populous city in an eligible MSA serves as that area’s HOPWA grantee. In addition, states with more than 1,500 cumulative cases of AIDS in areas outside of eligible MSAs qualify for formula funds. The remaining 10 percent of the appropriation is set aside for grants awarded on a competitive basis. Congress enacted the Ryan White Comprehensive AIDS Resources Emergency Act of 1990 (CARE Act) to improve the availability and quality of community-based health care and support services for individuals with HIV and their families. The CARE Act was most recently reauthorized through the Ryan White HIV/AIDS Treatment Extension Act of 2009. 
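The statutory HOPWA formula criteria described above can be sketched as two predicates. This is a hypothetical simplification: the sketch checks only the population and cumulative-case thresholds and does not model how HUD allocates dollar amounts among eligible jurisdictions:

```python
def msa_formula_eligible(population, cumulative_aids_cases):
    """An MSA qualifies for HOPWA formula grants with a population
    greater than 500,000 and more than 1,500 cumulative AIDS cases."""
    return population > 500_000 and cumulative_aids_cases > 1_500

def state_formula_eligible(cases_outside_eligible_msas):
    """A state qualifies with more than 1,500 cumulative AIDS cases
    in areas outside of eligible MSAs."""
    return cases_outside_eligible_msas > 1_500
```

Note that both thresholds are strict inequalities: a jurisdiction at exactly 1,500 cumulative cases, or an MSA of exactly 500,000 people, would not qualify under this reading of the criteria.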
HRSA administers the Ryan White HIV/AIDS program. The program must be the payer of last resort, meaning that other sources of funds for services, including housing services, must be exhausted before using Ryan White HIV/AIDS program funds. Ryan White Part A provides formula funds to Eligible Metropolitan Areas and Transitional Grant Areas. To qualify for Eligible Metropolitan Area status, an area must have reported a cumulative total of at least 2,000 AIDS cases in the most recent 5 years and have a population of at least 50,000. To be eligible for Transitional Grant Area status, an area must have a cumulative total of at least 1,000, but fewer than 2,000, cases of AIDS in the most recent 5 years and have a population of at least 50,000. In the absence of a waiver, Ryan White Part A grantees are required to spend at least 75 percent of their grant on core medical services and no more than 25 percent on supportive services, which include housing assistance. Ryan White HIV/AIDS program-funded housing assistance provides short-term aid to support emergency, temporary, or transitional housing so that an individual or family can gain or maintain health care. HRSA guidance encourages but does not require grantees to limit housing assistance to 24 months. Additionally, housing assistance must be accompanied by a strategy to transition the individual or family to stable, permanent housing. Ryan White Part A grantees are required by the Ryan White HIV/AIDS Treatment Extension Act of 2009 to establish a Ryan White Part A Planning Council, which is appointed by the chief elected official of the city or county. The council is responsible for setting HIV-related service priorities and allocating grant funds based on the needs of persons with HIV. Planning councils are required to develop a comprehensive plan with the Ryan White Part A grantee for the provision of services. 
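The Part A area thresholds described above amount to a simple classification. The sketch below is a hypothetical illustration of those thresholds only; it does not capture waivers, grandfathering provisions, or any other statutory nuance:

```python
def part_a_status(aids_cases_recent_5_years, population):
    """Classify an area for Ryan White Part A purposes.

    Eligible Metropolitan Area: at least 2,000 cumulative AIDS cases
    in the most recent 5 years and a population of at least 50,000.
    Transitional Grant Area: at least 1,000 but fewer than 2,000 such
    cases and a population of at least 50,000.
    """
    if population < 50_000:
        return None  # too small for either status
    if aids_cases_recent_5_years >= 2_000:
        return "Eligible Metropolitan Area"
    if aids_cases_recent_5_years >= 1_000:
        return "Transitional Grant Area"
    return None
```

An area that crosses the 2,000-case threshold would move from Transitional Grant Area to Eligible Metropolitan Area status under this classification.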
The Ryan White HIV/AIDS Treatment Extension Act of 2009 identifies 13 different parties that must be involved in the council, including representatives from community-based organizations serving affected populations, persons with HIV, and grantees providing services in the area under other federal HIV programs. Both HOPWA and Ryan White Part A funds are awarded to government agencies, which are referred to as “grantees” (see fig. 1). For the HOPWA program, the formula grantee is generally either the city office dedicated to housing and community development or the city health department. HOPWA grantees may carry out eligible program activities themselves, through any of their administrative agencies, or through a project sponsor. A project sponsor can be any nonprofit organization or governmental housing agency that receives funds from a grantee to carry out eligible HOPWA activities. The grantees and project sponsors may also contract with for-profit entities to provide services associated with their HOPWA activities. For the Ryan White Part A program, grants are awarded to the chief elected official of the city or county that provides health-care services. The chief elected official is legally the grantee but usually chooses a department or other entity to manage the grant, and that entity is then referred to as the grantee. Ryan White Part A grantees are generally county or city health departments or public departments with responsibility for health. Part A grants consist of formula and supplemental components. Formula grants are based on reported living cases of HIV and AIDS in eligible areas. Supplemental grants are awarded competitively and are based on the ability of Eligible Metropolitan or Transitional Grant Areas to document both a demonstrated need for additional funds and the capacity to use them to meet community needs. Ryan White Part A grantees can deliver services to persons with HIV (clients) directly or through a subgrantee. 
Subgrantees are generally community-based, nonprofit organizations. In some cases, a city’s formula HOPWA grantee and Ryan White Part A grantee are the same entity. Also, in some cases local community-based organizations receive both HOPWA and Ryan White Part A funding. As the number of persons with HIV in the United States continues to increase, research finds that stable housing is critical for effective medical care and is associated with improved health outcomes for persons with HIV. The extent to which persons with HIV need housing assistance is not known, in part because HUD’s estimates of the housing needs of persons living with HIV are not reliable. In addition, the statutory HOPWA funding formula may not be effectively distributing grant funds to communities with the greatest need because the formula counts persons who are deceased. As a result, HOPWA funds may not be targeted as effectively as they could be. According to CDC estimates, there were about 50,000 HIV diagnoses each year from 2008 to 2012. In 2012, the estimated rate of diagnosed HIV infections in the United States was 15.3 per 100,000 population. Rates of diagnosis of HIV infection have varied by region from 2008 to 2012. For example, the rate of diagnosis of HIV infection increased from 2008 through 2012 in the Midwest, and decreased during this period in the Northeast, South, and West. In 2012, the rates of diagnosed HIV infection were highest in the South, followed by the Northeast, West, and Midwest, as shown in figure 2. According to CDC data, from 2008 through 2011, the estimated number of persons in the United States living with a diagnosed HIV infection, or the prevalence of diagnosed HIV infection, increased. The prevalence rate, or the number of persons living with diagnosed HIV infection per 100,000 population, was estimated to be nearly 283 at the end of 2011. Prevalence rates vary by region, and regional differences have remained relatively stable from 2008 through 2011. 
As shown in figure 3, prevalence rates of diagnosed HIV infection are highest in the Northeast, followed by the South, West, and Midwest. The estimated rates of HIV diagnoses have varied over time across different demographic groups. For example, from 2008 through 2012 the rates of diagnosed HIV infection increased among persons aged 13 to 14 and 20 to 29 and either remained stable or decreased among other age groups. Rates of diagnoses during this period also increased for American Indian/Alaska Natives and Asians, while decreasing for African-Americans, Hispanics/Latinos, and persons of multiple races. In 2012, the estimated rate of HIV diagnoses for African-Americans was 58 per 100,000 population, the highest rate among racial and ethnic groups. From 2008 through 2012, rates of HIV diagnoses decreased among females and remained stable for males. In 2012, males accounted for 80 percent of all diagnoses newly reported among adults and adolescents. Stable housing is critical for persons with HIV. Staff from several HIV/AIDS advocacy groups told us that stable housing was important because many persons with HIV were required to adhere to strict regimens for taking medicine. Some medicines require refrigeration, and some cause debilitating side effects. Health care officials from CDC told us that without stable housing, persons may not reach viral suppression or remain connected to medical care. In addition, the National HIV/AIDS Strategy states that access to housing is an important precursor to getting many people into a stable treatment regimen. Individuals living with HIV who lack stable housing are more likely to delay HIV care, have poorer access to regular care, are less likely to receive optimal antiretroviral therapy, and are less likely to adhere to therapy. A 2007 study emphasized the relationship between housing assistance provided to persons living with HIV and increased access to medical care and appropriate treatment. 
The need for housing is prevalent among persons living with HIV, and there is strong evidence that receipt of housing assistance has a direct impact on improved medical care outcomes. Research has also indicated that persons with HIV who live in stable housing have better health outcomes than those who are homeless or unstably housed. However, while stable housing is critical for effective medical care, persons with HIV often have difficulty maintaining stable housing because of the financial vulnerability that can be associated with the disease. As individuals become ill, they may find themselves unable to work, while at the same time facing health care expenses that leave few resources to pay for housing. According to a recent study, housing challenges for a person living with HIV may include the growing disparity between income and the cost of rental housing, loss of income due to inability to maintain employment, and loss of spouse or partner due to HIV-related death, among other things (Aidala and others, “Housing Need”). These challenges can contribute to homelessness among persons with HIV. In addition, those who are homeless may be more likely to engage in activities through which they could transmit HIV. Staff members from HOPWA and Ryan White Part A grantees we interviewed told us that there was an increasing need for housing assistance for persons with HIV. Some staff told us that infected persons were living longer as a result of advances in medical care. Moreover, staff from several grantees told us that these persons generally needed both medical and nonmedical supportive services. Additionally, HUD officials noted that as local housing costs increased, the need for programs that provided affordable housing increased for all low-income people, including those with HIV. HUD’s estimate of the number of persons with HIV who have a housing need is not reliable. 
HUD requires each formula and competitive HOPWA grantee to report annually the number of HOPWA-eligible persons who have an unmet housing need within the grantee’s jurisdiction. HUD then develops an estimate of the number of persons nationwide with HIV who have an unmet housing need by totaling the numbers reported by each grantee. For 2013, HUD reported that approximately 131,000 HIV-positive persons had unmet housing needs. HUD uses this information to justify its HOPWA budget requests and to report on the program’s performance. HIV advocacy groups use HUD’s estimates in their publications and outreach efforts to Congress. We found that HOPWA grantees used different methodologies to report unmet housing needs, limiting the reliability of the reported information. Grantees we met with used varying methods to produce the local unmet need estimates that they reported to HUD annually. For example, officials from one HOPWA grantee told us that they summed the unmet housing need data provided by their project sponsors. In contrast, officials from another HOPWA grantee used various data sources to produce both a low and a high estimate of unmet housing need and had historically reported both numbers to HUD. In its 2010 and 2011 Consolidated Annual Performance and Evaluation Reports (CAPER), this grantee reported to HUD that the unmet need in its community could range from a low of approximately 7,500 persons to a high of 15,000. HUD officials told us that, at the time of our review, they did not require HOPWA grantees to use a consistent methodology to calculate unmet housing need for each jurisdiction. They told us that this policy was intended to allow for local flexibility, so that the data were collected using the most appropriate method for each jurisdiction.
According to HUD’s CAPER guidance, grantees can use one or more of seven data sources to calculate unmet need, including data from prisons or jails on persons being discharged with HIV and housing providers’ waiting lists. Grantees are required to indicate on their CAPERs all of the data sources they use to estimate unmet need. However, HUD does not provide additional guidance on how grantees should use the data sources in a comparable manner. In June 2014 HUD granted a HOPWA technical assistance contractor a 1-year contract extension to help the agency address its unmet needs methodology, including soliciting community feedback at the U.S. Conference on HIV/AIDS. HUD convened stakeholders and HOPWA grantees at this conference to discuss how unmet needs were estimated, and participants discussed establishing a working group to develop a consistent methodology. According to HUD, as of February 2015, the agency was working with its technical assistance contractor to develop a methodology and provide communities with CDC data related to persons with HIV. However, according to HUD, the agency does not have specific goals or time frames for finalizing a standard methodology. GAO’s work on assessing data reliability indicates that data should be consistent—that is, data should be clear and well defined enough to yield similar results in similar analyses. Further, when data are entered at multiple sites or reported using multiple sources (as in the case of the HOPWA program), there is a risk that data entry rules may be interpreted inconsistently, resulting in data that, taken as a whole, are unreliable. In addition, federal internal control standards state that program managers need operational data to determine whether they are meeting their goals for effective and efficient use of resources. In our 1997 report on HOPWA and the Ryan White HIV/AIDS program, we concluded that equitable distribution of resources should be consistent with the current need for such resources.
Because HUD does not require grantees to use selected data sources in a consistent manner, the resulting information is not comparable. Further, the usefulness and reliability of these data as an indicator of the unmet housing needs of persons with HIV are unclear. Although data on unmet housing needs are not used to determine HOPWA formula funding amounts, such information would be helpful in determining the extent of the need for HOPWA funds in specific areas, as well as the extent to which HOPWA is meeting its goals of addressing the housing needs of persons with HIV. As previously discussed, 90 percent of HOPWA funds are awarded through formula grants to eligible states and MSAs. Seventy-five percent of these formula-based funds are awarded to cities and states that meet certain threshold criteria. These criteria are based on each jurisdiction’s share of the number of cumulative AIDS cases in all eligible jurisdictions. Cumulative AIDS case counts include both living and deceased AIDS cases reported in the grantees’ jurisdiction since the beginning of the AIDS epidemic in 1981. Use of cumulative AIDS cases rather than living HIV cases has led to MSAs with similar numbers of persons living with HIV receiving markedly different amounts of HOPWA funding. For example, in fiscal year 2012 a grantee in the South and a grantee in the Northeast both had about 2,300 persons living with HIV, according to CDC data. However, the grantee in the Northeast received about $154,000 more in HOPWA formula funding than the grantee in the South because it had approximately 776 more reported cumulative AIDS cases. Similarly, in the same fiscal year, both a HOPWA formula grantee in the West and one in the South had about 3,500 persons living with HIV. However, the grantee in the West received nearly $319,000 more in formula funding than the grantee in the South because it had about 1,600 more reported cumulative AIDS cases. 
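The threshold-based allocation described above is, in essence, proportional to each jurisdiction’s share of the total case count. A minimal sketch of that arithmetic (using invented case counts and an invented grant total, not HUD data, and simplifying away the formula’s threshold criteria) shows how two jurisdictions with identical living-HIV counts can receive different awards when the allocation basis is cumulative AIDS cases:

```python
# Hypothetical illustration only: simplified share-based allocation,
# not HUD's actual HOPWA formula. All numbers are invented.

def allocate(total_funds, case_counts):
    """Distribute funds to each jurisdiction in proportion to its
    share of the total case count across all jurisdictions."""
    total_cases = sum(case_counts.values())
    return {j: total_funds * n / total_cases for j, n in case_counts.items()}

# Two MSAs with identical living-HIV counts but different cumulative
# AIDS counts (cumulative counts include deceased persons).
cumulative_aids = {"MSA_A": 3000, "MSA_B": 2200}  # basis of current formula
living_hiv = {"MSA_A": 2300, "MSA_B": 2300}       # basis of proposed formula

print(allocate(1_000_000, cumulative_aids))  # MSA_A receives the larger award
print(allocate(1_000_000, living_hiv))       # awards are equal
```

Under the cumulative basis, the jurisdiction with more historical (including deceased) cases receives a larger award even though both serve the same number of persons currently living with HIV, which is the disparity the examples above describe.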
The difference between cumulative AIDS cases and living HIV cases is more pronounced in some MSAs than others. As shown in figure 4, the relative difference ranged from less than 15 percent to more than 43 percent in the MSAs that received HOPWA formula funds in 2012. In most of these MSAs (62 of 78), the number of cumulative AIDS cases was greater than the number of persons living with HIV. For example, the New York City MSA had 35 percent more cumulative AIDS cases than cases of persons living with HIV. In contrast, about one-fifth of the MSAs that received HOPWA funds in fiscal year 2012 had more persons living with HIV than cumulative AIDS cases. For example, the Charlotte, North Carolina MSA had 43 percent more cases of persons living with HIV than cumulative AIDS cases. According to CDC officials, there can be more living HIV cases than cumulative AIDS cases because not all persons with HIV progress to the third stage of the disease (AIDS). Appendix II provides additional information on the numbers of cumulative AIDS cases and living HIV cases for all MSAs that received HOPWA grants in fiscal year 2012. We have assessed HOPWA’s funding formula in previous work. In 1997, we recommended that HUD consider the legislative changes that would be needed to make the HOPWA formula more reflective of current AIDS cases. We also noted that the general principle of allocating grants on the basis of the estimated number of persons living with HIV, excluding those who are deceased, would ensure a more equitable allocation of the available funds. In response, HUD reviewed potential changes to the formula. It compiled an analysis to show the effects of various alternatives on grantees’ funding levels, including use of 10-year weighted numbers to reflect living cases of persons with AIDS. However, at that time, HUD was reluctant to recommend any change that might disrupt funding for those who depended on HOPWA support.
In 2006, we recommended that if Congress wanted HOPWA funding to more closely reflect the distribution of persons living with AIDS, it should consider changing the program so that HOPWA formula grant eligibility would be based on a measure of living AIDS cases. Congress changed the funding formula for the Ryan White HIV/AIDS programs in 2006 but did not make the same change for HOPWA. Since our 2006 report, medical treatment for HIV/AIDS and the make-up of the national population with HIV or AIDS have continued to evolve. Additionally, CDC officials now consider HIV case counts to be more accurate and reliable than counts of AIDS cases alone because persons with HIV may live many years before progressing to AIDS and may move between stages as their health changes. HUD officials and the four HOPWA grantees we met with stated that the HOPWA funding formula was out of date. In its last three congressional budget justifications, HUD has proposed updating the formula. According to HUD’s 2015 budget justification, the HOPWA formula should be updated to better reflect the nature of the HIV epidemic that has evolved over the years through advances in HIV care and the increasingly disproportionate impact on impoverished persons with HIV. HUD has proposed basing the funding formula on living HIV cases rather than cumulative AIDS cases and on consideration of local housing costs and poverty rates. HUD recognized that some communities could lose funds as a result of a redistribution of grant funds. To mitigate any potential negative impacts of large funding reductions on some communities, HUD has also proposed incrementally reducing funding over time. HUD’s projections based on its proposed formula change—using living HIV cases instead of cumulative AIDS cases and data on housing costs and poverty—show a redistribution of funds that results in funding increases for some communities and decreases for others.
For example, based on HUD’s 2015 projections of HOPWA award amounts, the New York City MSA’s award would decrease by about $5 million from HUD’s 2014 estimated award amount. In contrast, smaller MSAs, such as Charlotte, North Carolina, and Cleveland, Ohio, would receive increases of more than $200,000 from HUD’s 2014 estimated award amounts. Although our analysis of CDC data suggests that the proportions of living HIV cases among the cities that received HOPWA funds in 2012 are similar to the proportions of cumulative AIDS cases, these changes could result in meaningful differences in the amounts of funding that some grantees receive. The Office of Management and Budget has also noted that the current formula for distributing HOPWA funds does not reflect the current nature of the disease. As discussed in GAO’s prior work, a cumulative count of AIDS cases that includes deceased persons does not necessarily reflect the number of living HIV cases in a particular year. In contrast, data on the number of persons living with HIV exclude the deceased and include persons in all stages of HIV infection. In addition, regional changes in the number of HIV cases may not be fully accounted for in the current HOPWA formula due to the continued inclusion of deceased persons. Reauthorizations of the Ryan White HIV/AIDS program in 2000, 2006, and 2009 required the use of living cases of both HIV and AIDS in the distribution of formula grants for Ryan White Parts A and B. Because HOPWA funds continue to be awarded based on cumulative AIDS cases, HOPWA funds are not being targeted as effectively or equitably as they could be. HOPWA grantees have used the majority of their grant funds to provide housing assistance to extremely low-income persons with HIV, primarily in the form of rental assistance. In general, the majority of individuals who receive housing assistance through HOPWA are male, African-American, and extremely low income.
Overall, a small share (about 2 percent) of total Ryan White Part A expenditures is used for housing. Individuals who receive temporary housing assistance through Ryan White Part A generally have the same demographic characteristics as HOPWA housing assistance recipients. Both HOPWA and Ryan White Part A information indicate that the majority of individuals provided with housing assistance became stably housed. However, the reliability of Ryan White Part A housing data is not clear because grantees do not update information on housing status consistently. Stakeholders such as HOPWA and Ryan White Part A grantees, as well as advocacy groups, note both strengths and challenges related to these programs. HOPWA grantees have primarily used their funds to provide housing assistance. As previously noted, grantees can use HOPWA funds for housing and supportive services and for administrative expenses. In 2012, the most recent program year for which data were available, HOPWA grantees spent nearly $314 million to assist persons with HIV. Of these expenditures, about $211 million (67 percent) was spent on housing assistance and $64 million (20 percent) was spent on supportive services, as shown in figure 5. The number of households receiving housing assistance decreased from around 60,000 in 2010 to about 56,000 in 2012. According to HUD officials, this decrease is likely due to improved grantee reporting as well as increases in the cost of housing—that is, as housing costs have increased, the program has been able to provide housing assistance to fewer persons. Housing assistance represented about 2 percent ($14 million) of the total expenditures of $592 million in fiscal year 2011 for all Ryan White Part A funding categories—including medical and supportive services. The largest category of program expenditures, $426 million, was for core medical services, followed by about $93 million for supportive services and about $73 million for clinical quality management and grantee administration.
Under the Ryan White HIV/AIDS Treatment Extension Act of 2009, Ryan White Part A grantees are generally required to expend the majority of their funds on core medical services but can also fund supportive services (including housing assistance). Expenditures for the Ryan White Part A program also reflect the priorities established by Ryan White Part A Planning Councils. Of the $93 million grantees spent on supportive services, housing assistance made up about 15 percent (see fig. 7). Ryan White Part A grantees also spent supportive services funds on nonmedical case management, emergency financial assistance, food bank/home-delivered meals, and health education. Ryan White Part A data for calendar year 2012 indicate that the majority of the 13,556 clients who received housing assistance were African-American. The data also indicate that the majority of clients who received housing assistance had incomes at or below the federal poverty level. Table 2 summarizes selected demographic characteristics of persons who received housing assistance through Ryan White Part A in calendar year 2012. HUD’s 2012 HOPWA performance data show a variety of positive outcomes related to housing stability, access to care, and homelessness. For the HOPWA program, permanent, stable housing includes private housing without a subsidy, subsidized housing, and HOPWA-funded rental assistance or facility-based housing. According to HUD’s 2012 data, 96 percent of the households that received tenant-based rental assistance or lived in a HOPWA-funded permanent housing facility had stable housing; 92 percent of households had contact with primary care; 90 percent of clients accessed medical insurance; and 5,736 formerly homeless individuals were placed in housing. Additionally, HUD’s 2013 Performance Report indicates that the HOPWA program has contributed to the agency’s goal of preserving affordable rental housing.
The report states that HOPWA had funded 25,706 rental units as of the end of fiscal year 2012, helping HUD exceed its fiscal year 2012-2013 agency priority goal of continuing to serve 5.4 million families and serving an additional 61,000 families. According to the performance report, HUD exceeded this goal by nearly 82,000 families. HOPWA officials also told us that the program’s contributions to providing permanent supportive housing supported HUD’s strategic objective for ending homelessness. HOPWA officials noted that the HOPWA program helped to keep persons with HIV from becoming homeless. HUD uses the data that grantees report on outcomes to summarize the achievements of individual grantees and the program as a whole. More specifically, HUD contractors review the information grantees submit and produce grantee-level and national summaries of performance for the formula HOPWA program, the competitive HOPWA program, and both programs combined. HUD posts these summaries, or performance profiles, on a HUD website. HRSA officials told us that the majority of clients provided with housing assistance through the Ryan White HIV/AIDS program obtained permanent, stable housing. According to a December 2013 White House report addressing the outcomes associated with the National HIV/AIDS Strategy, increasing the percentage of Ryan White HIV/AIDS program clients with permanent housing to 86 percent is one of nine indicators in the National HIV/AIDS Strategy. For the Ryan White HIV/AIDS program, stable, permanent housing includes unsubsidized rooms, houses, or apartments; subsidized housing; and permanent housing for formerly homeless persons. According to HRSA officials, the National HIV/AIDS Strategy indicator of Ryan White HIV/AIDS program clients with permanent housing is measured using the data on housing status that HRSA collects annually.
HRSA gathers this information from Ryan White HIV/AIDS program grantees through the Ryan White HIV/AIDS Program Services (RSR) report. However, it is not clear that HRSA’s housing status data are current because HRSA does not require or encourage grantees to maintain current data on clients’ housing status. RSR instructions state that the housing status data element is the client’s housing status at the end of the reporting period. HRSA officials told us that the instructions were not intended to be used as guidance for local jurisdictions in determining how often each client’s housing status should be collected. The officials added that the frequency with which a client’s housing status should be updated was decided at the local level and that currently HRSA does not require grantees to assess a client’s housing status beyond the initial intake period. Staff from one Ryan White Part A grantee told us that information on housing status in the RSR report was not very reliable because each client’s housing status was recorded at the point of intake but might or might not be updated subsequently. Another Ryan White Part A grantee told us that some of its subgrantees only reported on clients’ housing status at the point of intake, even though they recertified clients’ eligibility for the program every 6 months. Internal control standards for the federal government state that events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. Because HRSA does not require grantees to ensure that their subgrantees regularly update data on each client’s housing status, the usefulness of these data to support housing-related outcomes is unclear. These outcomes include, for example, the extent to which the Ryan White HIV/AIDS program is contributing to the National HIV/AIDS Strategy goal of improving access to permanent housing.
Further, because the Ryan White HIV/AIDS program provides temporary housing assistance and clients’ housing status is likely to change frequently, housing data may not be as accurate and current as possible if they are not updated regularly. HOPWA grantees, project sponsors, and HIV advocacy groups noted several strengths of the design of the HOPWA program. For example, three of the eight HOPWA project sponsors that we interviewed and an HIV advocacy group stated that one strength of the program was that clients must be provided with supportive services. These stakeholders noted that HOPWA clients or other persons with HIV often had substance abuse issues or mental illness and that supportive services that helped address these issues were critical to helping some clients become stable. Three HOPWA grantees noted that another strength of the program was the flexibility it offered to grantees, allowing them to fund the type of housing assistance that was most needed in their communities. Grantees that we visited funded a wide range of housing types, including a facility for persons with HIV who had mental, physical, or drug abuse issues; a facility for single adults who had progressed to AIDS and had a history of homelessness; and a hospice for HIV-positive persons. Finally, officials from four organizations that received both HOPWA and Ryan White Part A funding explained that HOPWA worked well with the Ryan White HIV/AIDS program. These officials explained that they took steps to transition Ryan White Part A clients who received temporary housing assistance into the HOPWA program, which offered permanent housing assistance. Also, in one of the cities we visited, local program administrators emphasized that the programs were complementary and said that they used Ryan White Part A funds only for core medical services and nonmedical case management and HOPWA funds only for housing assistance.
HOPWA grantees and project sponsors also identified weaknesses in the HOPWA program, including certain requirements, administrative fees, and the funding formula. Specifically, two of the four HOPWA grantees we met with noted that rental assistance generally could not exceed Fair Market Rent amounts, which HUD determined annually. These grantees said that limiting rental assistance to Fair Market Rents was challenging, particularly in high-cost cities like New York City and San Francisco, where officials noted that the average price of an apartment was double the Fair Market Rent. Also, two of the four HOPWA grantees that we interviewed and HUD administrators of the HOPWA program stated that the administrative fee of 3 percent that grantees could retain from their HOPWA grant was low. HUD officials stated that other HUD programs had higher fees, including Community Development Block Grants (20 percent) and the Home Investment Partnerships Program (10 percent). Finally, staff from three HOPWA grantees, five organizations that receive HOPWA or Ryan White Part A funding, and HUD officials with responsibility for administering the HOPWA program told us that the funding formula needed to be updated so that it was based on the number of persons living with HIV. Officials from one HOPWA grantee stated that they understood the need to update the HOPWA funding formula but had concerns about potentially losing funding if cumulative AIDS cases were excluded from the formula. As previously discussed, in congressional budget justifications for fiscal years 2013 through 2015, HUD proposed updating the funding formula to incorporate local housing costs and poverty rates. HUD has also proposed increasing the percentage of HOPWA grant amounts that may be used for administrative expenses from 3 percent to 6 percent of the grantee’s awarded amount.
Ryan White Part A grantees, subgrantees, and HIV advocacy groups that we met with noted several strengths and weaknesses of the Ryan White HIV/AIDS program. For example, three of the four Ryan White Part A grantees we met with, as well as two HIV advocacy organizations, stated that the Ryan White HIV/AIDS program complemented the HOPWA program. Grantee staff told us that persons with HIV could receive temporary housing assistance through Ryan White Part A and then transition to permanent assistance through HOPWA. Also, members of two HIV advocacy groups with whom we met stated that local Ryan White Part A Planning Councils were beneficial because they identified the unique, local needs of persons with HIV. Some Ryan White Part A subgrantees and staff from an HIV advocacy group stated that the inability to use Ryan White Part A funds for permanent housing assistance created challenges. For example, the subgrantees told us that it was generally difficult to address all of the issues that their clients faced, including substance abuse and mental illness, within the 2-year time frame. As previously noted, HRSA guidance encourages but does not require grantees to limit housing assistance to 24 months. Additionally, staff from an advocacy group told us that because the Ryan White Part A program could fund only temporary housing, recipients of this assistance were still faced with a lack of stable, permanent housing. Responding to the administration’s 2010 National HIV/AIDS Strategy, HUD and HRSA have made formal and informal efforts to collaborate by sharing information related to housing for persons with HIV. Coordination in the delivery of housing assistance to persons with HIV also occurs extensively at the local level, helping to ensure that the assistance provided by both programs is complementary and mitigates the potential for programs to provide duplicative services.
Persons with HIV may be eligible to receive housing assistance from other federal programs, such as public housing. However, assistance through these other programs may not always be available, and the programs may not provide supportive services. The White House’s 2010 National HIV/AIDS Strategy and its Implementation Plan encourage coordination among federal agencies and between federal agencies and state, territorial, tribal, and local governments, to achieve a more coordinated response to HIV. To address the National HIV/AIDS Strategy, HUD, HRSA, and other federal agencies have taken several steps. First, they have participated in a federal interagency working group led by the White House Office of National AIDS Policy. According to the July 2010 National HIV/AIDS Strategy Federal Implementation Plan, the working group convened to review public recommendations, assess scientific evidence, and make recommendations related to the National HIV/AIDS Strategy. Additionally, in July 2013, an Executive Order established an HIV Care Continuum Working Group to coordinate federal efforts to improve outcomes nationally across the HIV care continuum. This group is co-chaired by the White House Office of National AIDS Policy and HHS. According to HRSA officials, in September 2014 an HIV Care Continuum Initiative meeting was held to examine best practices in implementing care continuum recommendations and to provide agencies with the opportunity to learn from each other. Staff from HRSA, HUD, and other agencies attended the meeting. We have found that collaboration is enhanced when common outcomes are defined, mutually reinforcing strategies are established, and roles and responsibilities are agreed upon, among other things. The efforts of HUD and HRSA to work together to help address the National HIV/AIDS Strategy suggest that they have taken steps to enhance collaboration. Second, HUD and HRSA have taken steps to share information.
HRSA officials told us that, as required by statute, HHS issued a report to Congress in 2012 describing the coordinated efforts at the federal, state, and local levels to address HIV, including a description of barriers to HIV program integration. According to this report, between 2005 and 2008: HRSA worked with several federal agencies, including HUD, to examine case management models and examples of coordinated and collaborative case management guidelines; HRSA and HUD participated in the Interagency HIV/AIDS Case Management Workgroup to develop a set of guidelines around collaborative or coordinated case management services; and HUD and CDC collaborated in a study to examine housing assistance for homeless people with HIV to determine the impact of such assistance on the progression of their disease and the risk of transmitting HIV. HUD and HRSA officials with responsibility for the HOPWA and Ryan White HIV/AIDS programs told us that they had also met informally to share information and data on their grantees. For example, in June 2014 staff from both agencies met to discuss data collection that could be helpful to HUD in assessing the impact of the HOPWA program. During this meeting, HRSA also discussed the results of efforts that began in 2014 to identify HOPWA and Ryan White HIV/AIDS program grantees that collected both health and housing indicators. Additionally, HUD and HRSA are collaborating to provide both remote and onsite technical assistance to HOPWA grantees and project sponsors on improving program participants’ access to health care. We have found that collaboration is enhanced when two or more organizations engage in a joint activity that is intended to produce more public value than could be produced when the organizations act alone. HUD and HRSA also worked together to refine HRSA’s policy related to the length of time individuals can receive housing assistance through the Ryan White HIV/AIDS program.
In 2008, HRSA issued a policy that imposed a 24-month cumulative cap on short-term and emergency housing assistance for recipients of Ryan White HIV/AIDS program housing assistance, to be effective beginning in March 2010. In consultation with HUD, HRSA rescinded this policy in February 2010 in response to feedback from Ryan White HIV/AIDS program grantees and others that the time limits could negatively impact recipients of the assistance. Ryan White Part A grantees with whom we met told us that their clients generally had both substance abuse and mental health issues that took time to address. They noted that 2 years was not always sufficient for someone to be able to move out of temporary housing. In May 2011 HRSA released a final notice that encourages, but does not require, grantees to limit assistance to 24 months. HUD’s efforts to work with HRSA on this housing policy are consistent with practices that we have found can enhance collaboration among federal agencies. Although some overlap exists between the HOPWA and Ryan White HIV/AIDS programs, different emphases and local coordination help to ensure that the programs complement rather than duplicate each other. HOPWA and the Ryan White HIV/AIDS program overlap in the areas of temporary housing and supportive services for persons with HIV, which both programs can fund. However, housing assistance for persons with HIV involves both housing- and health-related issues, and HUD and HRSA bring different types of expertise to these areas. HUD programs focus on the provision of housing assistance and HUD awards the bulk of federal housing-related resources. In contrast, HRSA’s primary focus is to provide health care for medically vulnerable people, among others. HRSA’s policy indicates that Ryan White HIV/AIDS program funds can be used for short-term or emergency housing only to the extent that such support is necessary for clients to gain or maintain access to medical care. 
Additionally, the Ryan White HIV/AIDS Treatment Extension Act of 2009 requires Ryan White HIV/AIDS program grantees to be the payer of last resort. In order to receive housing assistance through the Ryan White HIV/AIDS program, individuals must not have HOPWA or other forms of subsidized housing assistance available to them, even if they are eligible for the programs. However, they may receive Ryan White HIV/AIDS program assistance for other needs, such as medical care. The different program emphases and requirements help prevent duplication between these programs. Coordination among local entities helps ensure that the assistance provided by HOPWA and the Ryan White HIV/AIDS program is complementary and mitigates the potential for the programs to provide duplicative services. Coordination in the delivery of housing assistance to persons with HIV occurs at the local level through formal planning processes. As a condition of receiving a HOPWA grant, grantees must consult with other public and private entities, as well as local citizens, in implementing the HOPWA program and any other HUD Community Planning and Development grant funds that the community receives. Community Planning and Development grantees, including HOPWA grantees, contribute to the development of a consolidated plan and annual action plans. Through these plans, the grantees must describe the agencies, groups, and others who participated in the planning process; their consultations with social service agencies and other entities; and their activities to enhance coordination between public and assisted housing providers and private and governmental health, mental health, and service agencies. The Ryan White Part A program requires local planning councils to help facilitate coordination between Ryan White Part A and HOPWA grantees. As we have seen, the Ryan White HIV/AIDS Treatment Extension Act of 2009 requires planning councils to have members from various groups and organizations.
For instance, at least one-third of the planning council members must be persons with HIV who receive Ryan White Part A services and are consumers who do not have a conflict of interest, meaning that they are not staff, consultants, or board members of Ryan White Part A-funded agencies. The planning council and the grantee work together to identify the needs of people with HIV and to prepare a comprehensive plan on how to meet those needs. They also work together to make sure that other sources of funding complement Ryan White HIV/AIDS program funds and that the Ryan White HIV/AIDS program is the payer of last resort. While the Ryan White HIV/AIDS Treatment Extension Act of 2009 does not require that the HOPWA program be represented on planning councils, it does require that other federal HIV programs be represented on the council (which could include HOPWA). In addition, the 2015 Part A funding announcement and 2013 program manual both indicate that the planning council could include a HOPWA or housing service representative. Informal efforts to coordinate the delivery of housing assistance also help to reduce the potential for duplication. Staff from four Ryan White Part A subgrantees, which can provide clients with housing assistance for only a limited period of time, told us that they consistently reached out to local providers of subsidized housing. These providers may include other city agencies, nonprofit organizations, and owners of single-room occupancy hotels. Such coordination efforts could help to minimize the potential for program duplication. Coordination between the HOPWA and Ryan White Part A programs does not appear to require formal agreements and processes when the same local agency is the grant recipient of both programs. In two of the four cities we visited, the same city agency was both the formula HOPWA project sponsor and the Ryan White Part A grantee.
As a result, coordination between the activities funded and efforts to move clients from temporary to permanent housing occurred through the agencies' regular business practices. Officials from one of these city agencies stated that different staff members were dedicated to each program but that they worked together and shared information related to clients' needs and the services provided. Officials from another city agency said that the same city staff focused on both HOPWA and Ryan White Part A funds. In this case, the same staff member reviewed performance information and invoices from the local HOPWA sponsors and Ryan White Part A subgrantees. Persons with HIV may be eligible to receive housing assistance from other federal programs that are focused on assisting persons with low or no income, including the following:

- Public Housing provides housing aid for eligible low-income families, the elderly, and persons with disabilities. HUD administers this federal subsidy to participants of local public housing authorities that manage the housing for low-income residents at rents they can afford.
- The Housing Choice Voucher program assists very low-income families, the elderly, and persons with disabilities. Participants may choose any housing that meets the requirements of the program and are not limited to units located in subsidized public housing projects. HUD administers the Housing Choice Voucher program, and public housing agencies manage it.
- As noted earlier, Continuum of Care is a HUD program that provides funding to nonprofit providers and state and local governments to quickly rehouse homeless individuals and families.
- Emergency Solutions Grant is a HUD program that provides funding to state and local governments for emergency shelters and services for homeless individuals and families. It also provides services to prevent families from becoming homeless.
- The HUD Veterans Affairs Supportive Housing (HUD-VASH) program combines HUD's Housing Choice Voucher rental assistance for homeless veterans with case management and clinical services provided by the Department of Veterans Affairs.
- The Home Investment Partnerships Program is a HUD program that provides formula grants to states and localities to fund a wide range of activities, including building, buying, or rehabilitating affordable housing for rent or ownership or providing direct rental assistance to low-income people.

While these programs have similar goals related to providing housing assistance, they have varying eligibility requirements (see table 3). For example, only homeless veterans are eligible for HUD-VASH, and an individual must be homeless or at risk of homelessness to be eligible for the Continuum of Care and Emergency Solutions Grant programs. Housing assistance programs that are not targeted to persons with HIV, such as the Public Housing and Housing Choice Voucher programs, may not be able to provide timely assistance because they may not be readily available. HOPWA and Ryan White Part A grantees from three of the cities we visited, as well as staff from six organizations that received funding from these grantees, told us that the local public housing agencies had very long waiting lists and sometimes closed their Public Housing and Housing Choice Voucher programs to new applicants. Staff from one nonprofit agency that receives both HOPWA and Ryan White Part A funding told us that they require recipients of HOPWA or Ryan White Part A housing assistance to apply for the Public Housing and Housing Choice Voucher programs. However, staff said the local public housing agency has a long waiting list for both types of housing, and thus clients would not likely be able to benefit from these programs.
Also, two of the HOPWA grantees with whom we met told us that even though the local public housing agencies had set up a preference for homeless persons with HIV, these agencies made few units available through this preference system. According to officials from organizations that receive HOPWA and Ryan White Part A grant funds, housing assistance programs that are not targeted to persons with HIV, such as the Public Housing and Housing Choice Voucher programs, may not be appropriate because they are not required to provide supportive services. Table 4 shows the kinds of services these and other housing assistance programs provide, such as substance abuse or mental health counseling. While not required to do so, administrators of these programs may help individuals receive supportive services through other funding sources. HIV advocates and a researcher told us that providing housing assistance without necessary medical care or other types of supportive services may not effectively facilitate housing stability or improved health for persons with HIV. Several of the organizations that received funding from HOPWA or Ryan White Part A grantees told us that their clients generally had mental health and substance abuse issues and would not thrive without intensive counseling. While some public housing agencies may offer their public housing residents access to a case manager or a staff member who can help residents obtain the services that they need, public housing agencies are not required to offer this service. Additionally, HIV-positive persons with criminal records or who engage in criminal activity may not be eligible for public housing and Housing Choice Vouchers. The HOPWA and Ryan White HIV/AIDS programs can provide housing assistance to persons with HIV who have criminal records. HUD field office staff use a risk-based process to guide their monitoring of grantees and have provided evidence that they implemented these procedures.
HRSA headquarters staff with primary responsibility for monitoring Ryan White HIV/AIDS program grants have taken steps to improve their efforts in recent years. Both HUD and HRSA collect data from HOPWA and Ryan White HIV/AIDS program grantees, respectively, including data on the activities funded and clients' housing status (i.e., whether they have stable and permanent housing). HUD summarizes the data it collects but does not evaluate year-to-year changes in unmet housing need for individual grantees. HRSA staff with primary responsibility for monitoring Ryan White Part A grantees assess whether grantee data are submitted to HRSA on time but are not required to review the housing-related data submitted. As a result, both programs may be missing opportunities to use existing data to manage the programs. HUD's field office staff have primary responsibility for monitoring HOPWA grantees, and we found that they were generally following monitoring policies for the four grantees that we visited. Field staff are responsible for conducting annual risk assessments of all Community Planning and Development grantees, which include recipients of HOPWA grants. To conduct these assessments, field staff must adhere to Risk Analysis Policy Notices and rate each grantee based on specific factors, including financial factors, the physical condition of projects, and staff capacity, among others. HUD field office staff use these factors to assess the risk level for each grantee and assign a numeric score. Grantees with risk assessments above a certain threshold are to receive onsite monitoring, unless the local HUD field office determines that the grantee can be excepted on the basis of additional HUD criteria and consideration of the field office's travel and staffing resources. In conducting site visits, HUD staff are required to follow specific monitoring guidance related to the HOPWA program. During site visits, HUD staff meet with HOPWA sponsor staff and review documentation related to the sponsor's implementation of the program. HUD staff may identify findings that the sponsor is required to address.
HUD has documented that it conducted risk assessments and onsite monitoring visits for formula and competitive HOPWA grantees from fiscal years 2008 through 2013. For the four formula HOPWA grantees we visited, HUD's field office staff conducted 24 risk assessments—one assessment per year for each of the four HOPWA grants from 2008 through 2013. Nine of the 24 assessments indicated that the HOPWA grant met HUD's criteria for triggering onsite monitoring. HUD field office staff subsequently conducted onsite monitoring for six of these nine grantees. For the three HOPWA grantees that HUD did not visit for onsite monitoring, the local HUD field office either did not have the resources to conduct the review or the site visit was excepted because the grantee had received a site visit within the previous 2 years, according to HUD. HUD headquarters monitors HOPWA grantees' compliance with the requirement to submit annual performance reports—the CAPER for formula grantees and the Annual Performance Report for competitive grantees. These reports include information on the activities funded, client characteristics, and outcomes related to housing stability, homelessness, and access to care and support. According to HUD officials and contractor staff, a contractor sends HOPWA grantees reminders prior to report deadlines, tracks receipt of the reports, and reviews the reports for completeness and internal consistency. HUD's contractor also tracks the timeliness of the initial submissions of performance reports. According to the contractor's data, 93 percent of the CAPERs and Annual Performance Reports for program year 2013 were submitted within 30 days of their due date.
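The risk-based selection process described above can be sketched in code. The factor names, weights, ratings, and threshold below are illustrative assumptions for the sketch, not HUD's actual Risk Analysis Policy Notice criteria.

```python
# Hypothetical sketch of risk-based monitoring selection: score each
# grantee on several risk factors, then flag grantees whose score meets
# or exceeds a threshold for onsite monitoring. All factor names,
# weights, and the threshold are illustrative, not HUD's actual criteria.

def risk_score(grantee):
    """Sum weighted risk-factor ratings (each rated 0-10) into one score."""
    weights = {"financial": 3, "physical_condition": 2, "staff_capacity": 2}
    return sum(weights[factor] * grantee[factor] for factor in weights)

def select_for_onsite(grantees, threshold=40):
    """Return the names of grantees at or above the monitoring threshold."""
    return [g["name"] for g in grantees if risk_score(g) >= threshold]

grantees = [
    {"name": "Grantee A", "financial": 8, "physical_condition": 5, "staff_capacity": 4},
    {"name": "Grantee B", "financial": 2, "physical_condition": 3, "staff_capacity": 1},
]

print(select_for_onsite(grantees))  # Grantee A scores 42 and is flagged
```

In practice, as the report notes, a field office may except a flagged grantee from onsite monitoring based on additional criteria and available travel and staffing resources, which a real implementation would layer on top of this threshold check.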
HUD’s contractor staff told us that they assisted grantees with any technical difficulties or internal inconsistencies until the report was submitted and met the contractor’s standards for reliability. HRSA headquarters staff have primary responsibility for routine and onsite monitoring of Ryan White HIV/AIDS program grantees. Routine monitoring includes regularly scheduled phone calls and reviews of grantee reports. The purpose of routine monitoring is to assess grantees’ performance and compliance with statutory requirements, regulations, and guidance. HRSA staff are also responsible for conducting site visits with the grantees. Site visits are intended to provide an opportunity to review the grantee’s program and may serve as a technical-assistance session for the grantee. HRSA guidance states that site visits should be viewed as an opportunity to expand on information grantees have provided in their grant application, reports, and conference calls. During site visits, HRSA staff meet with grantee staff and may meet with staff from one or more of the subgrantees to obtain feedback on how the program is functioning. HRSA staff may also visit various locations at which subgrantees deliver services and review grantee and subgrantee program documentation. HRSA staff with responsibility for the four Ryan White Part A grantees we visited reviewed risk-related information, conducted monthly monitoring calls, and provided technical assistance. HRSA staff reviewed single audit documentation, including risk-related information. Two of the four risk assessments indicated that the grantees had no major issues, and the other two showed deficiencies with internal controls. For the latter two, HRSA determined that these issues did not warrant a restriction in HRSA funding. HRSA staff also conducted monthly calls to grantees and summarized the discussions in electronic files. Additionally, HRSA staff provided technical assistance to Ryan White Part A grantees. 
For example, in 2013 HRSA arranged for a consultant to provide on-site technical assistance to one of the Part A grantees that we visited. HRSA has increased onsite monitoring visits for Ryan White HIV/AIDS program grantees in response to our past recommendations. Specifically, our June 2012 report found that HRSA did not have written guidance describing its policy for selecting grantees to visit and did not prioritize site visits in the manner described to us. Moreover, 44 percent of all grantees did not receive a site visit from 2008 through 2011. We recommended, among other things, that HRSA develop a strategic, risk-based approach for selecting grantees for site visits to ensure that the visits were made at regular and timely intervals. HRSA addressed this recommendation by developing a risk-based approach for selecting grantees for site visits. Additionally, beginning in 2012, HRSA implemented a policy that all Part A and Part B grantees would receive site visits at least once every 5 years and more often if needed. According to our analysis of HRSA's Part A site visits through 2013, HRSA staff conducted site visits to 11 of the 13 Part A grantees that had not been visited from 2008 through 2012. Additionally, 32 of 53 Eligible Metropolitan Areas and Transitional Grant Areas received a comprehensive site visit between July 2012 and July 2013. HRSA has taken additional steps to address four other recommendations we made in 2012 to improve oversight of Ryan White HIV/AIDS program grantees. As of October 2014, all four of these recommendations had been implemented.
The steps taken include the following:

- improved the functionality of an information system, the Electronic Handbook, to enable staff to better document their oversight and monitoring activities, including monthly calls, emails, and technical assistance;
- assessed, revised, and updated records management policies for HRSA staff with primary responsibility for monitoring grantees;
- created updated program manuals and posted the manuals on HRSA's technical assistance website; and
- updated its monitoring standards and worked with grantees that faced challenges with implementing the standards.

Additionally, HRSA grantees are responsible for monitoring subgrantees, which are the organizations that grantees contract with to provide services to persons with HIV. In 2011 HRSA developed National Monitoring Standards for Parts A and B of the Ryan White HIV/AIDS program. These standards are designed to help Ryan White Part A and Part B grantees meet federal requirements for program and fiscal management, monitoring, and reporting. The standards were developed because of the need to establish specific standards governing the frequency and nature of grantee monitoring of subgrantees and create a clear role for HRSA staff in monitoring grantee oversight of subgrantees. HRSA staff with whom we met told us that they used these standards and expected grantees to use them to monitor subgrantees. HUD headquarters staff collect annual performance data from HOPWA grantees on activities funded; client characteristics; and outcomes related to housing stability, access to health care, and unmet housing need. As noted earlier, HUD uses this information to create "performance profiles"—two-page summaries of this information—for each HOPWA grantee for each program year. Additionally, HUD creates annual performance profiles for the formula HOPWA program, the competitive HOPWA program, and both programs combined.
Profiles are not cumulative—that is, they do not show the total number of clients served up to a point in time. Rather, the profiles provide data on the clients served during the previous program year. A HUD contractor posts all of the performance profiles on a HUD website. HUD contractors are responsible for collecting Annual Performance Reports and CAPERs and using the data grantees report to create performance profiles. The contractors review the data for completeness and follow up with grantees regarding inconsistencies. According to HUD, its contractors also identify and document inconsistencies in data using current and previously submitted data for four areas: access to care, cost per unit, stability, and administrative costs. The contractors also document efforts to clarify and correct data related to these issues. However, HUD's contractors told us that they do not compare current-year data to prior-year data for unmet housing need. In addition, HUD field office staff with whom we met stated that they did not compare grantee data from year to year to identify any potential data reporting errors. Our analysis of the unmet housing need data collected through CAPERs from 2010 through 2013 found that some formula grantees reported significant changes in the number of HOPWA-eligible persons with an unmet housing need. For example, HUD data for 2012 indicated that 47 percent of the grantees reported changes of 30 percent or more in the number of persons with an unmet housing need compared with 2011 numbers. According to HUD's data, one grantee reported 145 persons with HIV with unmet housing needs in 2011 and 525,957 in 2012. Although changes in these estimates could be the result of increases or decreases in the need for housing assistance for persons living with HIV, large annual changes could also signal reporting errors. This and other examples are shown in table 5.
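The year-to-year comparison described above could be implemented as a simple flagging check: compare each grantee's reported unmet housing need against the prior year and flag changes of 30 percent or more for follow-up. This is a minimal sketch under illustrative assumptions; the grantee names and figures below are examples, not HUD's actual reporting data or methodology.

```python
# Hypothetical sketch of a year-to-year data check: flag grantees whose
# reported unmet housing need changed by 30 percent or more from the
# prior year. Grantee names and counts are illustrative examples only.

def flag_large_changes(prior, current, threshold=0.30):
    """Return {grantee: fractional change} for changes >= threshold."""
    flagged = {}
    for grantee, prior_count in prior.items():
        if grantee in current and prior_count > 0:
            change = abs(current[grantee] - prior_count) / prior_count
            if change >= threshold:
                flagged[grantee] = change
    return flagged

# Illustrative data mirroring the kind of anomaly described in the report.
unmet_2011 = {"Grantee A": 145, "Grantee B": 1000}
unmet_2012 = {"Grantee A": 525957, "Grantee B": 1100}

print(flag_large_changes(unmet_2011, unmet_2012))
# Grantee A's change far exceeds 30 percent and is flagged for follow-up;
# Grantee B changed by only 10 percent and is not flagged.
```

A flagged change does not by itself indicate an error; as the report notes, large swings could reflect real shifts in local need, methodology changes, or staff turnover, so the flag is a trigger for follow-up rather than a verdict.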
HUD headquarters officials told us that the dramatic differences could be the result of a change in the methodology used to report the figure, staff turnover among grantees, or changes in grantee capacity. Prior to our review, HUD officials had not followed up with grantees that had reported significant changes in unmet need between 2010 and 2013. In response to our review, HUD officials determined that one of the significant changes in unmet housing need from year to year was the result of a data entry error made by HUD's contractor. Although HUD staff have requirements for reviewing the accuracy of CAPER and Annual Performance Reports, the requirements do not contain specific instructions for assessing performance data over time. Federal internal control standards (GAO/AIMD-00-21.3.1) state that monitoring should assess the quality of performance over time and that activities need to be established to monitor performance measures and indicators. These controls could call for comparisons and assessments so that relationships can be analyzed and appropriate actions taken. As our previous work has shown, leading organizations use performance information to identify gaps in performance, improve organizational processes, and improve their performance. By not analyzing trends in the unmet housing need data grantees are required to report, HUD may be missing opportunities to identify and address problems in grantee reporting. Moreover, by not following up on significant changes in the unmet housing need data submitted, HUD may be missing indications that these data for the program as a whole may not be reliable. Although HRSA headquarters staff conduct routine monitoring of Ryan White HIV/AIDS program grantees, they do not focus on housing information. HRSA staff are responsible for overseeing Ryan White HIV/AIDS program grantees by routinely monitoring grantees' performance and compliance with statutory requirements, regulations, and guidance.
Routine monitoring includes regularly scheduled monitoring calls, reviews of grantee reports, and the provision of technical assistance to grantees. If during the course of routine monitoring HRSA staff find that a grantee has not met its program or financial requirements, the staff are responsible for determining whether the grantee requires more intensive monitoring. According to HRSA officials, agency staff with responsibility for monitoring can use resources like the National HIV/AIDS Strategy indicators to help grantees assess clients’ ability to access HIV care and treatment. HRSA staff are also responsible for monitoring any special conditions that are put in place. HRSA staff with responsibilities related to monitoring are the agency’s primary contact with grantees, and they are to communicate with their assigned grantees at least monthly. HRSA’s routine monitoring efforts for the Ryan White HIV/AIDS program do not focus on housing assistance. For example, monthly monitoring calls between HRSA staff and grantees generally follow a standard agenda, and housing is not an agenda item. According to HRSA officials, housing is included when matters pertaining to housing assistance need to be discussed. Also, according to HRSA’s 2011 Housing Policy, Ryan White HIV/AIDS program grantees must provide an individualized written housing plan to HRSA staff if they request one. The plan must cover each client who is receiving short-term, transitional, or emergency housing services. However, the four HRSA staff members we visited who had responsibility for monitoring the grantees told us that they had never requested or reviewed individualized housing plans. According to HRSA officials, documents related to housing are reviewed only if housing needs are identified as a priority by the grantee and HRSA staff. 
In addition, while HRSA staff are responsible for monitoring grantee reports, including whether RSR reports are submitted to HRSA on time, they are not required to review or monitor the housing-related data submitted in these reports. As noted earlier, federal internal control standards state that activities need to be established to monitor performance measures and indicators. These controls could call for comparisons and assessments so that analysis of the relationships can be made and appropriate actions taken. Controls should also be aimed at validating the integrity of performance indicators. In addition, our previous body of work has demonstrated the importance of using performance management indicators for various management activities and decision making. We have previously found that there are five leading practices that can enhance or facilitate the use of performance information: (1) aligning agency-wide goals, objectives, and measures; (2) improving the usefulness of performance information; (3) developing agency capacity to use performance information; (4) demonstrating management commitment; and (5) communicating performance information frequently and effectively. HRSA staff with responsibility for monitoring grantees stated that they did not focus their monitoring efforts on housing because the primary focus of the program was medical care and because grantees spend a small portion of their grant funds on housing assistance. However, as previously noted, the National HIV/AIDS Strategy emphasizes the importance of stable housing as a means of improving access to medical care for persons with HIV. The strategy states that access to housing is an important precursor to getting many people into a stable treatment regimen and emphasizes the importance of policies that promote access to housing.
By not focusing attention on the housing data that grantees are required to report, such as housing status, HRSA staff with responsibility for program monitoring may be missing an opportunity to improve their management of grantees’ performance. Among other things, they may not be monitoring an important indicator in the National HIV/AIDS Strategy—the extent to which grantees are contributing to housing stability for persons with HIV. HIV continues to pose a serious health threat even with advances in medicine. In order to manage programs that provide housing assistance for persons with HIV, agencies need to have reliable data and effective practices for using that data to manage program performance. First, HUD’s estimate of HOPWA-eligible individuals with an unmet housing need is based on data that HOPWA grantees develop using varying methodologies. While HUD advises grantees to use one or more of seven specific data sources, HUD does not require grantees to use these sources in a consistent and therefore comparable manner, as suggested by federal internal control standards and our work on data reliability. HUD has taken steps toward developing a standard methodology but has not established time frames for finalizing these efforts. As a result, the usefulness of HUD’s overall estimate is not clear. Furthermore, Congress may not have a complete understanding of the continuing need for programs that provide housing assistance to persons with HIV. Second, the funding provided under HOPWA has filled important gaps in the availability of affordable housing in communities throughout the country. However, the current statutory formula for HOPWA has not kept pace with the changing nature of the disease. Congress recognized this shift in the 2000, 2006, and 2009 reauthorizations of the Ryan White HIV/AIDS program that required HIV case counts to be used in the distribution of Ryan White HIV/AIDS program funds. 
While we recognize that it may not be appropriate to use precisely the same formula for both HOPWA and the Ryan White HIV/AIDS program, the rationale for allocating funds on the basis of those currently living with HIV applies to both grant programs. Because HOPWA funds are awarded based on cumulative AIDS cases, these funds are not being targeted as effectively or equitably as they could be. Third, HRSA relies on housing data reported by Ryan White HIV/AIDS program grantees to report on its progress in addressing one of the goals of the National HIV/AIDS Strategy but does not require grantees to ensure that these data are current. Internal control standards for the federal government state that events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. Without taking steps to ensure that grantee-reported housing status data are current, HRSA may not have reliable information to use in reporting on the extent to which Ryan White HIV/AIDS program clients are reaching the National HIV/AIDS Strategy goals for attainment of permanent housing. We also found that HUD had not optimized its use of the performance information it required HOPWA grantees to collect. While HUD has processes in place to review the completeness and internal consistency of each grantee’s annual data submission, HUD has not established specific procedures to compare the unmet housing need data individual grantees submit from year to year. The extent to which persons with HIV have an unmet housing need speaks to the continuing need for the HOPWA program. Reported data on unmet housing need may vary significantly, and HUD does not have steps in place to determine if the local unmet housing need has changed or whether the grantee may need technical assistance. 
Without a specific process to make comparisons among the unmet housing need data individual grantees submit from year to year, in accordance with federal internal control standards, HUD may not be able to ensure that significant changes are identified and addressed if necessary. Finally, HRSA has missed opportunities to help ensure that HRSA staff are using all available tools to effectively monitor grantee performance related to housing. While housing is not the primary objective of the Ryan White HIV/AIDS program, stable housing is critical to the health of persons with HIV, as HHS has acknowledged. Internal controls for the federal government note that activities need to be established to monitor performance measures and indicators. Moreover, we have reported on the importance of using performance management indicators for management activities and decision making. Without requiring HRSA staff with monitoring responsibility to review the housing data that individual Ryan White Part A grantees submit, HRSA may not be able to proactively identify performance issues, including the extent to which individual grantees are contributing towards housing stability. If Congress wishes HOPWA funding to more closely account for the current impact of the HIV epidemic, it should consider revising the funding formula used to determine grantee eligibility and grant amounts to reflect a measure of persons living with HIV, including those with AIDS. We make the following four recommendations: To improve information on the unmet housing needs of persons with HIV and follow through on its efforts to develop a standard methodology, we recommend that the Secretary of HUD direct the Assistant Secretary for Community Planning and Development to require grantees to use comparable methodologies to analyze HUD's recommended data sources on unmet housing need.
In order to improve the reliability of the housing data HRSA collects from Ryan White HIV/AIDS program grantees, we recommend that the Administrator of HRSA require program grantees that provide housing assistance to reflect each client’s current (within the previous 12 months) housing status in the client-level housing status data that they report to HRSA. To help ensure that HUD is using grantee performance data to identify and address any irregularities or issues in grantee reporting, we recommend that the Secretary of HUD direct the Assistant Secretary for Community Planning and Development to develop and implement a specific process to make comparisons between the unmet housing need data submitted by individual grantees from year to year, including a process to follow up with grantees when significant changes are identified. In order to promote the use of housing assistance data to monitor program performance, we recommend that the Administrator of HRSA require the HRSA staff who have primary responsibility for monitoring Ryan White HIV/AIDS program grants to monitor indicators of grantees’ performance in contributing towards housing stability, an HHS-identified indicator of HIV care. We provided a copy of this report to HUD and HHS for their review. In its written comments, which are reprinted in appendix III, HUD agreed with one of the two recommendations directed toward it and expressed concerns about the report’s description of the agency’s use of grantee data. In its written comments, which are reprinted in appendix IV, HHS agreed with both of our recommendations. HUD agreed with our recommendation that it require HOPWA grantees to use comparable methodologies to analyze HUD’s recommended data sources on unmet housing need. However, the agency said that our report did not acknowledge the agency’s efforts to provide further guidance to communities beginning in the first quarter of fiscal year 2014. 
We requested documentation of such efforts, but HUD was unable to provide it. Our report notes that the Consolidated Annual Performance and Evaluation Reports (CAPER reports) describe the data sources that grantees can use to estimate unmet need. Our report also acknowledges an October 2014 meeting between HUD, stakeholders, and HOPWA grantees to discuss identifying and reporting on unmet housing need as well as HUD’s efforts to work with a contractor to develop a standard methodology. While these efforts are helpful steps toward developing a standard methodology, HUD does not have specific goals or time frames for finalizing this methodology. HUD disagreed with our recommendation that it develop and implement a specific process to make comparisons between the data submitted by individual grantees from year to year, including a process to follow up with grantees when significant changes are identified. In its written response, HUD stated that the agency already conducts this type of analysis with contractor support. More specifically, HUD stated that data analysis is conducted using current and previously submitted data. However, HUD’s documentation of the contractor’s grantee-level analysis indicates that its trend analysis is focused on four areas: access to care, cost per unit, stability, and administrative costs. HUD’s documentation of its contractor’s analysis of data trends among formula grantees does not include other data elements collected through CAPER reports, including unmet housing need. Moreover, during the course of our review, HUD’s contractors told us that they do not assess grantee-level, year-to-year changes in unmet housing need. Based on our analysis of unmet housing need data collected from CAPER reports from 2010 through 2013, we found that some formula grantees reported significant changes in unmet housing need from year to year. 
As noted in the report, in response to our review HUD determined that its contractor had made data entry errors in some cases. In other cases, HUD had not followed up with the grantee and stated that dramatic differences could be attributed to a variety of causes, including grantee staff turnover or changes in grantee capacity. In addition, staff from the four HUD field offices we visited told us that they review CAPER reports but do not compare the information grantees report from year to year. We revised our recommendation to clarify that we are recommending that HUD analyze year-to-year trends in the unmet housing need data that individual grantees submit. HUD also agreed with our matter for congressional consideration. Specifically, HUD agreed that HOPWA funds are not being targeted as effectively or equitably as they could be, based on the outdated HOPWA statute. HUD noted that it has continued to seek congressional action on a legislative proposal, which includes statutory changes that reflect advances in both HIV health care and surveillance. Our report acknowledges HUD’s efforts by discussing its proposal for updating the formula in its last three budget justifications. In its general comments, HUD stated that the introductory part of the draft report (highlights page) would benefit from a more balanced approach to the discussion of the HOPWA program’s strengths and weaknesses. The report discusses the strengths of the HOPWA program as part of one of our research objectives. Additionally, the section of the report that focuses on coordination describes HUD’s and HRSA’s efforts to collaborate with one another and provides examples of formal and informal coordination at the local level to avoid providing duplicative services. We also revised our highlights page to note that HUD has taken steps toward developing a standard methodology for grantees to use to assess unmet housing needs. 
In its letter, HUD also provided technical comments, which we addressed as appropriate. HUD disagreed that it uses unmet housing need data to justify its HOPWA budget request and to assess the performance of the program. Regarding the first part of this statement—that HUD uses unmet housing need data to justify its HOPWA budget request—we did not make a change to the characterization of HUD’s use of the data in its budget requests, and our analysis of HUD’s budget requests supports our characterization. While HUD’s technical comments characterized the agency’s use of unmet need data in its budget requests as an anecdotal data point, HUD uses this information to justify the continuing need for the program. As an example, HUD’s 2015 budget request notes that 131,164 HIV-positive households had unmet housing needs in the portion of the budget request that describes why the program is necessary. Regarding the second part of the statement with which HUD disagreed—that HUD uses unmet housing need to assess the performance of the program—we revised the report to state that HUD uses unmet housing need data for reporting on the performance of the program, rather than assessing the performance of the program. Specifically, the agency reports this information to the public not only through budget justification documents, but also through individual grantee and program-wide performance reports. HUD also disagreed with the statement that the agency does not require HOPWA grantees to use a consistent methodology to calculate unmet need, and noted that formula grantees are required to report this need through CAPER reports. Our analysis of CAPER report guidance and grantees’ implementation of this guidance supports our characterization. As described in the report, according to CAPER guidance formula HOPWA grantees can use one or more of seven data sources to calculate unmet need, including housing providers’ waiting lists. 
However, HUD does not provide additional guidance on how these sources should be analyzed. As a result, grantees could use different methods for analyzing the same data sources. The report provides examples of how HOPWA grantees we interviewed use different methodologies to calculate unmet housing needs. HUD also disagreed with the statement that agency officials had not followed up with grantees that had reported significant changes in unmet housing needs between 2010 and 2013, and stated that contracted support plays a role in the review and analysis of HOPWA data. Our report acknowledges contractors’ efforts to review HOPWA data for completeness and follow up with grantees regarding inconsistencies. However, our work supports our description of HUD’s efforts to follow up with grantees that reported significant changes in unmet needs between 2010 and 2013, and therefore we did not make changes. As an example, our analysis of the unmet need data grantees reported to HUD found that one grantee reported an unmet need of 145 persons in 2011 and 525,957 persons in 2012. HUD did not research this anomaly until presented with our analysis. Furthermore, the documentation HUD provided of its follow-up efforts with grantees did not include information about unmet housing need data. HHS agreed with our recommendation that HRSA require program grantees that provide housing assistance to reflect each client’s current (within the previous 12 months) housing status in the client-level housing data that they report to HRSA. In its written comments, HHS also stated that HRSA does require Ryan White HIV/AIDS program grantees to maintain current clients’ housing status. As we discuss in the report, HRSA requires grantees to report data on clients’ housing status to HRSA every year. 
However, during the course of our review, HRSA officials told us that the frequency with which this information is updated is determined at the local level, and we found that this information may not be current. In its written comments, HRSA stated that it will update data instructions and provide a webinar for HRSA monitoring staff and Ryan White HIV/AIDS program grantees to help ensure that grantees are collecting data consistently and correctly. These actions, if implemented effectively, would address the intent of our recommendation. HHS also agreed with our recommendation that HRSA staff who have primary responsibility for monitoring Ryan White HIV/AIDS program grants monitor indicators of grantees’ performance in contributing towards housing stability. HHS noted that HRSA had taken steps to provide monitoring staff with reports that show grantee-level data and HHS indicators. According to HHS, these reports support the monitoring of performance indicators, including housing status. Additionally, HHS stated that monitoring staff have begun to be trained on how to interpret these data. These are positive steps that should help HHS to more effectively monitor individual grantees’ contributions towards housing stability. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Housing and Urban Development, the Secretary of Health and Human Services, and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
Our objectives were to discuss (1) the need for housing assistance for persons with the human immunodeficiency virus (HIV) and the extent to which federal assistance reaches communities in need; (2) the results that have been achieved through federal programs that provide housing assistance for persons with HIV and what is known about the strengths and weaknesses of these programs; (3) the extent to which federal programs that provide housing assistance and supportive services for persons with HIV coordinate with one another; and (4) the extent of federal oversight of programs that provide housing assistance to persons with HIV. To identify information on the housing needs of persons living with HIV, we obtained and reviewed available data from the Department of Housing and Urban Development (HUD) on the unmet housing needs of HOPWA-eligible persons for program years 2010 (the earliest year for which HUD considered the data to be reliable) through 2013 (program year refers to grantees’ fiscal years, which may vary from the federal fiscal year). To assess the reliability of this information, we interviewed HUD officials, conducted electronic testing of the data to identify outliers as well as missing or duplicated data, and interviewed grantees of HUD’s Housing Opportunities for Persons with AIDS (HOPWA) program. We compared HUD’s methodology for calculating unmet housing need to internal control standards for the federal government, as well as GAO guidance on preparing reliable data. We determined that HUD’s unmet housing need data were not sufficiently reliable for the purposes of estimating the number of HOPWA-eligible individuals with an unmet need because they were based on data developed by HOPWA grantees using inconsistent methodologies. 
We also analyzed the Centers for Disease Control and Prevention’s (CDC) fiscal year 2012 HIV surveillance data—the most recent data available at the time of our review—to identify and describe geographic trends in persons living with diagnosed HIV infections as well as the demographic characteristics of persons diagnosed with HIV. To assess the reliability of this information, we interviewed CDC officials and reviewed documentation of CDC’s methodology for collecting the data. We determined that the data were sufficiently reliable for the purpose of describing trends in HIV infection. To determine whether the Health Resources and Services Administration (HRSA) assessed the number of HIV-infected persons that might need emergency housing assistance, we reviewed HRSA guidance and interviewed HRSA officials. In addition, we reviewed requirements for Ryan White Planning Councils to assess local needs for HIV-related services. To identify the federal programs that provide housing assistance specifically for persons with HIV, we reviewed Congressional Research Service, GAO, HUD, and HRSA reports issued from 1997 through 2014 on housing for persons with HIV and interviewed HUD and HRSA officials. For HRSA’s Ryan White HIV/AIDS program, we focused on Part A because it can fund housing assistance; because Part A grantees expended significantly more of their funding on housing assistance than Part B grantees in 2011; and because, like HOPWA grants, Part A grants are generally awarded to local governments. The MSA delineations are based on the 2000 Office of Management and Budget Standards for Delineating Metropolitan and Micropolitan Statistical Areas (implemented in 2003). We analyzed cumulative AIDS case data as of March 2011. This approach helped ensure that the two data sets were comparable to one another and corresponded to the data that would have been available in fiscal year 2012. These data were not adjusted for reporting delays. 
According to the CDC, estimates of persons living with HIV (i.e., HIV prevalence data) in a given year are generally more accurate when at least 12 months have elapsed since the end of the measurement period, as both diagnoses and deaths are often subject to reporting delays. The specific direction of any bias is unclear and may vary by jurisdiction. For each MSA, we calculated the absolute relative difference between cumulative AIDS cases and the number of cases of persons living with HIV (including AIDS). Additionally, we identified examples of MSAs that had similar numbers of persons living with HIV but received notably different amounts of HOPWA formula funds for fiscal year 2012. We also compared the current HOPWA funding formula to our previous work that addressed funding grants based on cumulative AIDS cases, including persons who have died. To describe HUD’s proposed changes to the HOPWA funding formula, we reviewed HUD’s congressional budget justifications for fiscal years 2013, 2014, and 2015. GAO, Housing: HUD’s Program for Persons with AIDS, GAO/RCED-97-62 (Mar. 24, 1997) and HIV/AIDS: Changes Needed to Improve the Distribution of Ryan White CARE Act and Housing Funds, GAO-06-332 (Feb. 28, 2006). To select locations for site visits, we identified cities that received both HOPWA and Ryan White Part A grants. We used HRSA’s 2011 Ryan White HIV/AIDS program expenditure data to identify grantees that had spent Ryan White Part A funds on housing assistance. We compared the locations of the Ryan White Part A grantees that had funded housing assistance to locations of the formula HOPWA grantees and selected four cities that had both. We based our selection on grant size (i.e., grant amounts at either the higher end or middle of the range in fiscal year 2011), the presence of Ryan White Part A grantees that had expended Ryan White Part A funds on housing assistance, and geographic diversity. Based on this analysis, we selected New York City, New York; New Orleans, Louisiana; San Francisco, California; and St. Louis, Missouri. 
In each city, we interviewed officials from the formula HOPWA grantees and Ryan White Part A grantees; one or more HOPWA project sponsors; one or more Ryan White Part A subgrantees; the local HUD field office; the local Continuum of Care grantee; and at least one HIV advocacy organization. We selected HOPWA project sponsors and Ryan White Part A grantees based on discussions with grantee staff and selected advocacy organizations based on information from a national HIV advocacy organization about active local HIV advocacy organizations. We also toured housing that was funded through formula HOPWA funds or Ryan White Part A in each city, including emergency housing, a permanent housing facility, and a hospice, to see how the funds had been used. To obtain views on the impact of the HOPWA and Ryan White HIV/AIDS Programs in rural areas, we also interviewed the State AIDS Directors for California, Louisiana, Missouri, and New York. To determine the results that have been achieved through federal programs that provide housing assistance to persons with HIV, we obtained and analyzed HOPWA data on how funds were used and client characteristics for program years 2009 through 2012. To assess the reliability of the HOPWA data, we interviewed HUD officials and the contractors responsible for processing the data about their data reliability procedures. We also conducted electronic testing for missing data, outliers, or obvious errors. We found that most data were reliable for the purposes of describing how funds were used and identifying the characteristics of the persons who benefitted from housing assistance. As previously noted, we found that HUD’s data on unmet housing need were not sufficiently reliable for our purposes. For the Ryan White HIV/AIDS program, we obtained and reviewed Ryan White HIV/AIDS Program Services Report (RSR) data for Part A for fiscal years 2010 through 2012. 
Agency officials told us that 2009 data were only available in aggregate form and not by Part A grantee. To assess the reliability of HRSA’s Ryan White Part A data related to housing assistance, we reviewed HRSA guidance and policies, interviewed HRSA officials with responsibility for processing the data, interviewed four HRSA Program Officers, and conducted electronic testing. We also compared HRSA’s methodology for calculating the percentage of Ryan White HIV/AIDS program clients who had stable housing to internal control standards for the federal government. Because HRSA does not require grantees to regularly update each client’s housing status, we determined that housing status data were not sufficiently reliable for our purposes. Also, we obtained and analyzed expenditure data for both programs. For HOPWA and Ryan White Part A, the most recent years of expenditure data were 2012 and 2011, respectively. For HOPWA, we analyzed program data on activities funded (e.g., housing assistance, housing development, supportive services); types of housing assistance funded (e.g., tenant-based rental assistance, permanent facilities); and demographic characteristics (e.g., sex, race, ethnicity, age, income). For the Ryan White HIV/AIDS program, we analyzed RSR data on the number and proportion of clients who received housing assistance through Part A. For those clients who did receive housing assistance, we analyzed demographic characteristics (sex, race, ethnicity, age, earnings relative to the federal poverty level). To describe the strengths of the HOPWA and Ryan White Part A programs, as well as any weaknesses associated with these programs, we reviewed program requirements; identified studies through a search of various databases using keywords such as “HOPWA” and “Ryan White”; and interviewed a purposive sample of program grantees, HOPWA project sponsors, and Ryan White Part A subgrantees. 
We also interviewed HIV advocates, HUD and HRSA officials with responsibilities related to the HOPWA and Ryan White HIV/AIDS programs, and an academic researcher on HIV and housing who had co-authored various articles on housing for persons with HIV in New York City. Upon completion of our initial search, we identified eight studies that discussed the effects of housing assistance programs on persons with HIV. We reviewed the studies’ methodology, limitations, and conclusions for the purposes of excluding studies that did not ensure a minimal level of methodological rigor and excluded two studies. Of the six remaining studies, two were randomized controlled trials, one was a cross-sectional study, and one used a quasi-experimental design. Two had weaker research designs but were retained since they were sufficiently rigorous and, given the limited number of empirical studies on this subject, provided useful information on the importance of access to housing for medical outcomes for people living with HIV. To assess the extent to which the HOPWA and Ryan White Part A programs coordinated with each other at the federal level, we identified program requirements in the governing legislation for the HOPWA and Ryan White HIV/AIDS programs. We also obtained and reviewed documentation of HUD’s and HRSA’s efforts to coordinate with each other, interviewed HUD and HRSA officials about these efforts, and compared the efforts to GAO’s criteria related to coordination and program overlap. GAO, Housing Assistance: Opportunities Exist to Increase Collaboration and Consider Consolidation, GAO-12-554 (Washington, D.C.: Aug. 16, 2012) and Housing Assistance: An Inventory of Fiscal Year 2010 Programs, Tax Expenditures, and Other Activities, GAO-12-555SP (Washington, D.C.: Aug. 16, 2012), an E-supplement to GAO-12-554. From the list of housing programs in these reports, we selected five programs for comparison. 
For the five programs, we compared their primary goals, client eligibility requirements, requirements related to supportive services, and the specific types of housing assistance that could be provided. We also discussed whether and how HOPWA and Ryan White Part A grantees coordinated with these programs during our site visits to the purposive sample of cities. Additionally, we reviewed the Catalog of Federal Domestic Assistance program descriptions, program information from each program’s website, and prior GAO reports to determine each program’s size, administering agency, and assistance type. Finally, we interviewed HIV advocacy groups, HOPWA and Ryan White Part A grantees, HUD and HRSA officials, and an academic researcher about housing assistance and services for persons with HIV. To assess HUD and HRSA’s monitoring and oversight efforts, we identified and reviewed their monitoring policies, procedures, and guidance. We also interviewed HUD headquarters and field office staff with responsibilities related to HOPWA grantee monitoring, as well as HRSA staff who had primary responsibility for monitoring Ryan White Part A grantees. We compared HUD’s risk assessment policies for program years 2008 through 2013 to documentation on the implementation of these procedures for the four HOPWA grantees we visited, including documentation of risk assessments and site visits conducted. For the Ryan White HIV/AIDS program, we reviewed the status of five previously issued GAO recommendations related to program monitoring and oversight and summarized HRSA’s efforts to address these recommendations. We also analyzed updated HRSA data on Part A site visits conducted in 2012 and 2013. Additionally, we interviewed both HUD and HRSA officials on how they use performance data to monitor HOPWA and Ryan White Part A grantees. For HOPWA, we reviewed documentation of HUD’s use of performance data for program years 2009 through 2013. 
For the Ryan White HIV/AIDS program, we reviewed published reports on the agency’s use of housing-related performance data. We compared HUD and HRSA’s monitoring efforts to federal internal control standards as well as practices that leading organizations used related to managing for results. We conducted this performance audit from March 2014 to April 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on the audit objectives. In 2012, the Department of Housing and Urban Development (HUD) awarded formula Housing Opportunities for Persons with AIDS (HOPWA) grants to 78 metropolitan statistical areas (MSA), with the most populous city in each area serving as that area’s formula HOPWA grantee. Formula grant funding criteria are based on each MSA’s share of cumulative Acquired Immune Deficiency Syndrome (AIDS) cases. Table 6 shows the number of cumulative AIDS cases, the number of persons living with human immunodeficiency virus (HIV), and the relative difference between these two numbers for each MSA. In addition to the contact named above, Paul Schmidt, Assistant Director; Lisa M. Moore, Analyst-in-Charge; Imoni Hampton, John McGrail, John Mingus, Roberto Pinero, Jennifer Schwartz, and Jena Sinkfield made key contributions to this report.
Over 1.2 million people in the United States are estimated to have HIV, and about 50,000 new infections occur each year. Research has shown that persons with HIV who lack stable housing are less likely to adhere to HIV care. HUD's HOPWA program and HRSA's Ryan White program provide grants to localities that can be used to fund housing and supportive services specifically for persons with HIV. GAO was mandated to review housing assistance for persons with HIV. This report addresses (1) the need for housing assistance for persons with HIV and the extent to which assistance reaches communities in need, (2) results achieved through HOPWA and Ryan White, and (3) federal oversight of these programs. For both programs, GAO analyzed program data on persons served and outcomes achieved as of 2012, reviewed policies, interviewed agency officials, and visited a non-generalizable sample of four geographically diverse cities that received varying amounts of both HOPWA and Ryan White funding. The extent to which persons with human immunodeficiency virus (HIV) need housing assistance is not known, in part because the Department of Housing and Urban Development's (HUD) estimate of the housing needs of persons with HIV, including those with Acquired Immune Deficiency Syndrome (AIDS), is not reliable. HUD does not require Housing Opportunities for Persons with AIDS (HOPWA) grantees to use a consistent methodology to calculate unmet need. The agency has taken steps towards developing a standard methodology, but it has not established time frames for finalizing these efforts. GAO's work on assessing data reliability indicates that data should be consistent. Because HUD does not require grantees to use selected data sources in a consistent manner, the reported information on the unmet housing needs of persons with HIV is not comparable across jurisdictions and is not useful or reliable. 
In addition, the statutory HOPWA funding formula is based on cumulative AIDS cases since 1981, including persons who have died, rather than on current numbers of persons living with HIV (including those with AIDS). This approach has led to areas with similar numbers of living HIV cases receiving different amounts of funding. Because HOPWA funds are awarded based on cumulative AIDS cases, these funds are not being targeted as effectively or equitably as they could be. Agency data for HOPWA and the Health Resources and Services Administration's (HRSA) Ryan White program indicate most recipients of assistance obtained stable, permanent housing, but Ryan White housing data may have limitations. HRSA, within the Department of Health and Human Services, does not require Ryan White grantees to maintain current data on clients' housing status. However, it uses the data that grantees report to calculate the proportion of clients that have stable housing. HRSA is charged with tracking Ryan White clients' housing status as a part of the White House's National HIV/AIDS Strategy. Federal internal control standards state that events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. Because HRSA does not require grantees to maintain current data on clients' housing status, HRSA's data may be of limited usefulness in tracking the National HIV/AIDS Strategy goal of improving clients' housing status. HUD and HRSA perform oversight activities but may be missing opportunities to use data to improve performance. HUD staff conduct risk-based monitoring of HOPWA grantees, and HRSA staff have improved monitoring of Ryan White grantees. HUD and HRSA both collect performance data from their grantees and take steps to ensure that the data are complete and submitted in a timely manner. 
HUD uses performance data to create summaries of program performance but does not have a specific process for comparing individual grantees' year-to-year data for unmet housing need. Federal internal control standards note the importance of such comparisons. By not analyzing these trends, HUD may not be identifying and addressing reporting problems. In addition, HRSA staff responsible for monitoring Ryan White grantees do not review grantee data on housing assistance provided. Federal internal control standards state that activities need to be established to monitor performance measures. By not focusing attention on housing data, HRSA staff with monitoring responsibility are not proactively using available resources to monitor individual grantees' contributions to the National HIV/AIDS Strategy goal of improving clients' housing status. If Congress wishes HOPWA funding to be more effectively targeted, it should consider revising the funding formula to reflect the number of living persons with HIV. GAO also recommends that (1) HUD require a consistent methodology for estimating unmet housing needs and (2) both HUD and HRSA improve the reliability and use of performance data to manage their programs. HRSA agreed with GAO's recommendations. HUD agreed with the first recommendation but disagreed with the second, stating that it already assesses trends in some program data. GAO clarified that HUD should identify reporting issues by analyzing trends in its unmet housing need data.
As you know, Mr. Chairman, the decennial census is a constitutionally mandated enterprise critical to our nation. Census data are used to apportion seats and redraw Congressional districts, and to help allocate over $400 billion in federal aid to state and local governments each year. We added the 2010 Census to our list of high-risk areas in March 2008 because improvements were needed in the Bureau’s management of IT systems, the reliability of the handheld computers (HHCs), and the quality of the Bureau’s cost estimates. Compounding the risk was that the Bureau canceled a full dress rehearsal of the census that was scheduled in 2008, in part, because of the HHCs’ performance problems, which included freeze-ups and unreliable data transmissions. Although the Bureau had planned to use the HHCs to collect data both for address canvassing and for going door to door to follow up with nonrespondents, the Bureau ultimately decided to use the HHCs for address canvassing and revert to collecting nonresponse follow-up data using paper. As a result of this decision, the Bureau had to redesign components of its field data collection system to accommodate the new approach, thus introducing new risks. Among other actions, in response to our findings and recommendations, the Bureau strengthened its risk management efforts, including the development of a high-risk improvement plan that described the Bureau’s strategy for managing risk and key actions to address our concerns. Still, in March 2009, in testimony before this Subcommittee, we continued to question the Bureau’s readiness. Specifically, we noted that with little more than a year remaining until Census Day, uncertainties surrounded critical operations and support systems, and the Bureau lacked sufficient policies, procedures, and trained staff to develop high-quality cost estimates. 
Moving forward, we said that it will be essential for the Bureau to develop plans for testing systems and procedures not included in the dress rehearsal, and for Congress to monitor the Bureau’s progress. Since 2005, we have reported on weaknesses in the Bureau’s management of its IT acquisitions, and issues remain with the Bureau’s IT management and testing of key 2010 Census systems. In March 2009, we reported and testified that while the Bureau took initial steps to enhance its program-wide oversight of testing activities, those steps had not been sufficient. Furthermore, while the Bureau had made progress in testing key decennial systems, critical testing activities remained to be performed before they would be ready to support the 2010 Census. At that time we recommended that the Bureau improve its oversight of the completion of testing activities for key systems. In response to our findings and recommendations, the Bureau has taken several steps to improve its management of IT for the 2010 Census. For example, the Bureau named a Decennial Census Testing Officer whose primary responsibilities include monitoring testing for decennial census activities. In order to help improve the rigor and quality of test planning and documentation, this official leads a bimonthly process to consolidate and evaluate test planning and status across all key decennial census operations, resulting in a decennial census testing overview document. With respect to system testing, progress is being made, but much testing remains to be completed as shown in the following table. The Bureau has also made progress in end-to-end testing, but substantial work remains to be completed. 
For example, the Bureau has completed limited end-to-end tests for nonresponse follow-up and group-quarters enumeration on the Paper-Based Operations Control System (PBOCS), a workflow management system the Bureau developed late in the census cycle when it moved from the HHCs to a paper-based approach to nonresponse follow-up and other field operations. However, Bureau officials stated that, although they were satisfied with the results of the tests, significant additional testing will be needed. For example, several critical issues were identified during these tests that will need to be resolved and retested. In addition, the test was not designed to evaluate the level of system performance needed while processing the estimated 48 million housing units that will be in the nonresponse follow-up workload. According to the Bureau, a performance test is being designed for the first major release; however, detailed plans for this test have not yet been completed. Finally, the test was performed with experienced census employees, while the system will be used by newer, temporary employees. Given the importance of IT systems to the decennial census, it is critical that the Bureau ensure these systems are thoroughly tested. Bureau officials have repeatedly stated that the limited amount of time remaining will make completing all testing activities challenging. The Bureau faces significant challenges finalizing PBOCS. Most notably, the Bureau needs to determine the remaining detailed requirements for the system to be developed. As of early September 2009, the Bureau had established high-level requirements for PBOCS but had not yet finalized the detailed requirements. High-level requirements describe in general terms what functions the system will accomplish, such as producing specific management reports on the progress of specific paper-based operations or checking out and checking in groups of census forms for shipping or processing. 
Detailed requirements describe more specifically what needs to be done in order to accomplish such functions. For PBOCS, such detailed requirements might include, for example, which data from which data source should be printed where on a specific management report. According to Bureau officials, in the absence of such specificity in the requirements for the 2008 dress rehearsal, contract programmers with little decennial census experience made erroneous assumptions about which data to use when preparing some quality control reports. As a result, quality assurance managers were unable to rely on the reports for tracking progress. In recognition of the serious implications that shortcomings in PBOCS would have for the conduct of the 2010 Census and to see whether there were additional steps that could be taken to mitigate the outstanding risks to successful PBOCS development and testing, in June 2009, the Bureau chartered an assessment of PBOCS, chaired by the Bureau’s chief information officer (CIO). The assessment team reported initially in late July 2009 and provided an update the following month. The review stated that the PBOCS developers had made a strong effort to involve the system stakeholders in the development process. However, the review also identified several concerns with PBOCS development. For example, the review found and we confirmed that the Bureau could improve its requirements management for PBOCS. According to the CIO, the Bureau has taken steps to address some of these findings, such as providing additional resources for testing and development; however, resolving problems found during testing before the systems need to be deployed will be a challenge. At the end of our review, the Bureau presented evidence of the steps it had taken to document and prioritize requirements. We did not assess the effectiveness of these steps. 
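To make the distinction between high-level and detailed requirements concrete, the sketch below encodes one hypothetical detailed requirement as data. The report name, field names, and data sources are illustrative inventions, not the Bureau’s actual PBOCS requirements:

```python
# Hypothetical illustration of a "detailed requirement": it pins down
# which data, from which source, appear where on a report. All names
# below are invented for illustration, not taken from PBOCS.
detailed_requirement = {
    # High-level requirement: "produce progress reports for each operation."
    "report": "daily progress report",
    # Detailed requirement: each field names its data source and position.
    "fields": [
        {"name": "cases_completed", "source": "check-in table", "position": "column 2"},
        {"name": "cases_remaining", "source": "assignment roster", "position": "column 3"},
    ],
}

# Without the "source" entries, a programmer must guess which data feed
# each field -- the kind of ambiguity behind the dress rehearsal's
# unreliable quality control reports.
for field in detailed_requirement["fields"]:
    print(field["name"], "from", field["source"], "at", field["position"])
```

The point of the sketch is only that a detailed requirement leaves no field-to-source mapping to a programmer’s judgment.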
Until the Bureau completes the detailed requirements for PBOCS, it will not have reasonable assurance that PBOCS will meet the program’s needs. The Bureau is continuing to examine how improvements will be made. A successful census relies on an accurate list of all addresses where people live in the country, because it identifies all households that are to receive a census questionnaire and serves as a control mechanism for following up with households that fail to respond. If the address list is inaccurate, people can be missed, counted more than once, or included in the wrong location. Address canvassing is one of several procedures the Bureau uses to help ensure an accurate address list and, because it is based on on-site verification, it is particularly important for identifying the locations of nontraditional or “hidden” housing units such as converted attics and basements. Although these types of dwellings have always existed, the large number of foreclosures the nation has recently experienced, as well as the natural disasters that have hit the Gulf Coast and other regions, has likely increased the number of people doubling up or living in motels, cars, tent cities, and other less conventional arrangements. Such individuals are at greater risk of being missed in the census. The Bureau conducted address canvassing from March to July 2009. During that time, about 135,000 address listers went door to door across the country, comparing the housing units they saw on the ground to what was listed in the database of their HHCs. Depending on what they observed, listers could add, delete, or update the location of housing units. Although the projected length of the field operation ranged from 9 to 14 weeks, most early local census offices completed the effort in less than 10 weeks. Moreover, the few areas that did not finish early were delayed by unusual circumstances such as access issues created by flooding. 
The early completion is a remarkable accomplishment given the HHCs’ troubled history. The testing and improvements the Bureau made to the reliability of the HHCs prior to the start of address canvassing, including a final field test that was added to the Bureau’s preparations in December 2008, played a key role in the pace of the operation, but other factors, once address canvassing was launched, were important as well, including (1) the prompt resolution of problems with the HHCs as they occurred and (2) lower-than-expected employee turnover. With respect to the prompt resolution of problems, although the December 2008 field test indicated that the more significant problems affecting the HHCs had been resolved, various glitches continued to affect the HHCs in the first month of the operation. For example, listers or crew leaders in 14 early local census offices informed us that they had encountered transmission problems, freeze-ups, and other glitches. Moreover, in 10 early local census offices we visited, listers said they had problems using the Global Positioning System function on their HHCs to precisely locate housing units. When such problems occurred, listers called their crew leaders, and the Bureau’s help desk troubleshot the problems. When the issues were more systemic in nature, such as a software issue, the Bureau was able to fix them quickly using software patches. Moreover, to obtain an early warning of trouble, the Bureau monitored key indicators of HHC performance, such as the number of successful and failed HHC transmissions. This approach proved useful: Bureau quality control staff were alerted to the existence of a software problem when they noticed that the devices were taking a long time to close out completed assignment areas. The Bureau also took steps to address procedural issues. 
For example, in the course of our field observations, we noticed that in several locations listers were not always adhering to training for identifying hidden housing units. Specifically, listers were instructed to knock on every door and ask, “Are there any additional places in this building where people live or could live?” However, we found that listers did not always ask this question. On April 28, 2009, we discussed this issue with senior Bureau officials. The Bureau, in turn, transmitted a message to listers’ HHCs emphasizing the importance of following training and querying residents if possible. Lower than expected attrition rates and listers’ availability to work more hours than expected also contributed to the Bureau’s ability to complete the address canvassing operation ahead of schedule. For example, the Bureau had planned for 25 percent of new hires to quit before, during, or soon after training; however, the national average was 16 percent. Bureau officials said that not having to replace listers with inexperienced staff accelerated the pace of the operation. Additionally, the Bureau assumed that employees would be available 18.5 hours a week. Instead, they averaged 22.3 hours a week. The Bureau’s address list at the start of address canvassing consisted of 141.8 million housing units. Listers added around 17 million addresses and marked about 21 million for deletion because, for example, the address did not have a structure. All told, listers identified about 4.5 million duplicate addresses, 1.2 million nonresidential addresses, and about 690,000 addresses that were uninhabitable structures. Importantly, these preliminary results represent actions taken during the production phase of address canvassing and do not reflect actual changes made to the Bureau’s master address list as the actions are first subject to a quality control check and then processed by the Bureau’s Geography Division. 
The preliminary analysis of addresses flagged for add and delete shows that the results of the operation (prior to quality control) were generally consistent with the results of address canvassing for the 2008 dress rehearsal. Table 2 compares the add and delete actions for the two operations. According to the Bureau’s preliminary analysis, the estimated cost for address canvassing field operations was $444 million, or $88 million (25 percent) more than its initial budget of $356 million. As shown in table 3, according to the Bureau, the cost overruns resulted from several factors. One such factor was that the address canvassing cost estimate was not comprehensive, which resulted in a cost increase of $41 million. The Bureau inadvertently excluded 11 million addresses identified in address file updates from the initial address canvassing workload and fiscal year 2009 budget. Further, the additional 11 million addresses increased the Bureau’s quality control workload, in which the Bureau verifies certain actions taken to correct the address list. Specifically, the Bureau failed to anticipate the impact these addresses would have on the quality control workload and therefore did not revise its cost estimate accordingly. Moreover, under the Bureau’s procedures, addresses that failed quality control would need to be recanvassed, but the Bureau’s cost model did not account for the extra cost of recanvassing any addresses. As a result, the Bureau underestimated its quality control workload by 26 million addresses, which, according to the Bureau, resulted in $34 million in additional costs. Bringing aboard more staff than was needed also contributed to the cost overruns. For example, according to the Bureau’s preliminary analysis, training additional staff accounted for about $7 million in additional costs. 
Bureau officials attributed the additional training cost to inviting additional candidates to initial training because of concerns that recruiting and hiring staff would be problematic, even though (1) the Bureau’s staffing goals already accounted for the possibility of high turnover and (2) the additional employees were not included in the cost estimate or budget. The largest field operation will be nonresponse follow-up, when the Bureau is to go door to door in an effort to collect data from households that did not mail back their census questionnaire. Over 570,000 enumerators will need to be hired for that operation. To better manage the risk of staffing difficulties while simultaneously controlling costs, several potential lessons can be drawn from the Bureau’s experience during address canvassing. For example, we found that the staffing authorization and guidance provided to some local census managers were unclear and did not specify that there was already a cushion in the hiring goals for local census offices to account for potential turnover. Also, basing the number of people invited to initial training on factors likely to affect worker hiring and retention, such as the local employment rate, could help the Bureau better manage costs. According to Bureau officials, they are reviewing the results from address canvassing to determine whether they need to revisit the staffing strategy for nonresponse follow-up and have already made some changes. For example, in recruiting candidates, when a local census office reaches 90 percent of its qualified applicant goal, it is to stop blanket recruiting and instead focus its efforts on areas that need more help, such as tribal lands. However, the officials pointed out that in hiring candidates they are careful not to underestimate resource needs for nonresponse follow-up based on address canvassing results, because the two operations pose different challenges. 
For example, for nonresponse follow-up, the Bureau needs to hire enumerators who can work in the evenings when people are more likely to be at home and who can effectively deal with reluctant respondents, whereas with address canvassing, there was less interaction with households and the operation could be completed during the day. Problems with accurately estimating the cost of address canvassing are indicative of long-standing weaknesses in the Bureau’s ability to develop credible and accurate cost estimates for the 2010 Census. Accurate cost estimates are essential to a successful census because they help ensure that the Bureau has adequate funds and that Congress, the administration, and the Bureau itself have reliable information on which to base decisions. However, in our past work, we noted that the Bureau’s estimate lacked detailed documentation on data sources and significant assumptions, and was not comprehensive because it did not include all costs. Following best practices from our Cost Estimating and Assessment Guide, such as defining necessary resources and tasks, could have helped the Bureau recognize the need to update address canvassing workload and other operational assumptions, resulting in a more reliable cost estimate. Given the Bureau’s past difficulties in developing credible and accurate cost estimates, we are concerned about the reliability of the figures that were used to support the 2010 budget, especially the cost of nonresponse follow-up, which is estimated at $2.7 billion. We have discussed the cost estimate for nonresponse follow-up with Bureau officials, and they said they are examining how foreclosures and vacant housing units might affect the nonresponse follow-up workload. In addition, Bureau officials said they will analyze address canvassing data and determine whether there are any implications for future operations. 
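The address canvassing overrun behind these concerns can be tallied directly from the Bureau’s preliminary figures; a minimal sketch (amounts in millions of dollars; the itemized drivers account for $82 million of the $88 million overrun, and the statement does not break out the remainder):

```python
# Address canvassing field cost overrun, per the Bureau's preliminary figures.
budget = 356   # initial field budget, $ millions
actual = 444   # estimated actual cost, $ millions

overrun = actual - budget
print(overrun)                        # 88
print(round(100 * overrun / budget))  # 25 (percent)

# Drivers cited by the Bureau ($ millions); the remainder is not itemized.
drivers = {
    "workload excluded from estimate": 41,
    "underestimated quality control workload": 34,
    "training additional staff": 7,
}
print(sum(drivers.values()))          # 82 of the 88
```

Applied to the much larger nonresponse follow-up budget, an overrun of this relative size would be far more costly, which is why the reliability of that estimate matters.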
Nevertheless, there still remains a great deal of uncertainty around the final cost of the 2010 Census. In part, this is because of changes made to the census design after April 2008, when the Bureau reverted to a paper-based data collection method for nonresponse follow-up in response to the performance problems with the HHCs. The uncertainty also stems from the fact that the assumptions used to develop the revised cost estimate were not tested during the 2008 dress rehearsal. According to budget documents, after the decision to return to a paper-based nonresponse follow-up, the life cycle cost estimate increased by over $3 billion. Moving forward, it will be important for the Bureau to ensure the reliability of the 2020 cost estimate, and the Bureau has already taken several actions in that regard. For example, based on recommendations from our June 2008 report, the Bureau plans to train its staff in cost estimation skills, including conducting uncertainty analysis. In addition, the Bureau is developing the Decennial Budget Integration Tool (DBiT), which, according to the Bureau, should consolidate budget information and enable the Bureau to better document its cost estimates. Officials said that DBiT is capturing actual fiscal year 2009 costs, which will be used to estimate the life cycle cost for the 2020 census. However, officials also said that DBiT needs further testing and may not be fully used until the 2012 budget. To better screen its workforce of hundreds of thousands of temporary census workers, the Bureau plans to fingerprint its temporary workforce for the first time in the 2010 Census. In past censuses, temporary workers were only subject to a name background check that was completed at the time of recruitment. The Federal Bureau of Investigation (FBI) is to provide the results of a name background check when temporary workers are first recruited. 
At the end of the workers’ first day of training, Bureau employees who have received around 2 hours of fingerprinting instruction are to capture two sets of ink fingerprint cards. The cards are then sent to the Bureau’s National Processing Center in Jeffersonville, Indiana, to be scanned and electronically submitted to the FBI. If the results show a criminal record that makes an employee unsuitable for employment, the Bureau is to either terminate the person immediately or place the individual in nonworking status until the matter is resolved. If the first set of prints is unclassifiable, the National Processing Center is to send the FBI the second set of prints. However, fingerprinting during address canvassing was problematic. Of the over 162,000 employees hired for the operation, 22 percent (approximately 35,700 workers) had unclassifiable prints that the FBI could not process. The FBI determined that the unclassifiable prints were generally the result of errors that occurred when the prints were first made. Factors affecting the quality of the prints included the difficulty of learning to capture prints effectively and the adequacy of the Bureau’s training. Further, the workspace and environment for taking fingerprints were unpredictable, and factors such as the height of the surface on which the prints were taken could affect the legibility of the prints. Consistent with FBI guidance, the Bureau relied solely on the results of the name background check for the nearly 36,000 employees with unclassifiable prints. However, it is possible that more than 200 people with unclassifiable prints had disqualifying criminal records but still worked and had contact with the public during address canvassing. Indeed, of the prints that could be processed, fingerprint results identified approximately 1,800 temporary workers (1.1 percent of total hires) with criminal records that the name check alone failed to identify. 
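The “more than 200” figure can be reproduced with a back-of-the-envelope calculation; a hedged sketch using the figures in this statement (roughly 750 of the 1,800 identified records proved disqualifying, as discussed here), under the assumption that the unclassifiable pool has the same criminal-record and disqualification rates as the pool of classifiable prints. That carry-over is an assumption, since those workers were, by definition, never fully screened:

```python
# Back-of-the-envelope check on screening risk from unclassifiable prints.
# All inputs are from the statement; carrying the rates observed among
# classifiable prints over to the unclassifiable pool is an assumption.
total_hired = 162_000      # approximate address canvassing hires
unclassifiable = 35_700    # prints the FBI could not process (~22 percent)
classifiable = total_hired - unclassifiable

records_found = 1_800      # criminal records found among classifiable prints
disqualifying = 750        # of those, roughly 42 percent were disqualifying

record_rate = records_found / classifiable
disqualify_rate = disqualifying / records_found

# Projected disqualifying records among workers whose prints
# could not be processed during address canvassing.
missed = unclassifiable * record_rate * disqualify_rate
print(round(missed))       # 212, i.e., "more than 200"

# Same rates applied to the ~600,000 nonresponse follow-up hires,
# assuming the unclassifiable-print rate recurs.
nrfu_unclassifiable = 600_000 * (unclassifiable / total_hired)
nrfu_missed = nrfu_unclassifiable * record_rate * disqualify_rate
print(round(nrfu_missed))  # 785
```

The second projection matches the approximately 785 at-risk employees estimated for nonresponse follow-up, illustrating why resolving the fingerprinting problems before that much larger operation matters.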
Of the 1,800 workers with criminal records, approximately 750 (42 percent) were terminated or were further reviewed because the Bureau determined their criminal records, which included crimes such as rape, manslaughter, and child abuse, disqualified them from census employment. Projecting these percentages to the 35,700 temporary employees with unclassifiable prints, it is possible that more than 200 temporary census employees might have had criminal records that would have made them ineligible for census employment. Applying these same percentages to the approximately 600,000 people the Bureau plans to fingerprint for nonresponse follow-up, unless the problems with fingerprinting are addressed, we estimate that approximately 785 employees with unclassifiable prints could have disqualifying criminal records but still end up working for the Bureau. Aside from public safety concerns, there are cost issues as well. The FBI charged the Bureau $17.25 per person for each background check, whether or not the fingerprints were classifiable. The Bureau stated that it has taken steps to improve image quality for fingerprints captured in future operations by refining instruction manuals and providing remediation training on proper procedures. In addition, the Bureau is considering activating a feature on the National Processing Center’s scanners that can check the legibility of the image and thus prevent poor-quality prints from reaching the FBI. These are steps in the right direction. As a further contingency, it might also be important for the Bureau to develop a policy for re-fingerprinting employees when both cards cannot be read. The scale of the destruction in areas affected by Hurricanes Katrina, Rita, and Ike made address canvassing in parts of Mississippi, Louisiana, and Texas especially challenging (see fig. 1). Hurricane Katrina alone destroyed or made uninhabitable an estimated 300,000 homes. 
Recognizing the difficulties associated with address canvassing in these areas because of shifting and hidden populations and changes to the housing stock, the Bureau, partly in response to recommendations made in our June 2007 report, developed supplemental training materials for natural disaster areas to help listers identify addresses where people are, or may be, living when census questionnaires are distributed. For example, the materials noted the various situations listers might encounter, such as people living in trailers, homes marked for demolition, converted buses and recreational vehicles, and nonresidential space such as storage areas above restaurants. The training material also described the clues that could alert listers to the presence of nontraditional places where people are living and provided a script they should follow when interviewing residents on the possible presence of hidden housing units. Additional steps taken by the city of New Orleans also helped the Bureau overcome the challenge of canvassing neighborhoods devastated by Hurricane Katrina. As depicted in fig. 2 below, city officials replaced the street signs even in abandoned neighborhoods. This assisted listers in locating the blocks they were assigned to canvass and expedited the canvassing process in these deserted blocks. To further ensure a quality count in the hurricane-affected areas, the Bureau plans to hand-deliver an estimated 1.2 million questionnaires (and simultaneously update the address list) to housing units in much of southeast Louisiana and south Mississippi that appear habitable, even if they do not appear on the address list updated by listers during address canvassing. Finally, the Bureau stated that it must count people where they are living on Census Day and emphasized that if a housing unit gets rebuilt and people move back, then that is where those people will be counted. 
However, if they are living someplace else, then they will be counted where they are living on Census Day. The Bureau has made remarkable progress in improving its overall readiness for 2010, with substantial strides being made in the management of its IT systems and other areas. That said, as I noted throughout this statement, considerable challenges and uncertainties lie ahead. While the decennial is clearly back on track, many things can happen over the next few months, and keeping the entire enterprise on plan continues to be a daunting challenge fraught with risks. Mr. Chairman and members of this Subcommittee, this concludes my statement. I would be happy to respond to any questions that you might have at this time. If you have any questions on matters discussed in this statement, please contact Robert N. Goldenkoff at (202) 512-2757 or by e-mail at goldenkoffr@gao.gov. Other key contributors to this testimony include Steven Berke, Virginia Chanley, Benjamin Crawford, Jeffrey DeMarco, Dewi Djunaidy, Vijay D’Souza, Elizabeth Fan, Ronald Fecso, Amy Higgins, Richard Hung, Kirsten Lauber, Jason Lee, Andrea Levine, Signora May, Ty Mitchell, Naomi Mosser, Catherine Myrick, Lisa Pearson, David Powner, David Reed, Jessica Thomsen, Jonathan Ticehurst, Shaunyce Wallace, Timothy Wexler, and Katherine Wulff. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The decennial census is a constitutionally mandated activity that produces data used to apportion congressional seats, redraw congressional districts, and help allocate billions of dollars in federal assistance. In March 2008, GAO designated the 2010 Census a high-risk area in part because of information technology (IT) shortcomings. The U.S. Census Bureau (Bureau) has since strengthened its risk management efforts and made other improvements; however, in March 2009, GAO noted that a number of challenges and uncertainties remained. This testimony discusses the Bureau's readiness for 2010 and covers: (1) the delivery of key IT systems, (2) preliminary findings on the results of address canvassing and the lessons learned from that operation that can be applied to subsequent field operations, and (3) the Bureau's progress in improving its cost estimation abilities. The testimony is based on previously issued and ongoing GAO work. The Bureau continues to make noteworthy gains in mitigating risks and in keeping the headcount on track, but a number of challenges remain. Specifically, over the last few months, the Bureau has made important strides in improving oversight of testing key IT systems. For example, the Bureau named a testing officer to monitor the testing of census-taking activities. The Bureau has also made progress in system testing, but faces tight time frames in finalizing the paper-based operations control system (PBOCS), which will be used to manage field operations. If any significant problems are identified during the testing phases of PBOCS, there will be little time, in most cases, to resolve the problems before the system needs to be deployed. Address canvassing, an operation in which temporary workers known as listers go door to door to verify and update address data, finished ahead of schedule but was over budget. 
Based on initial Bureau data, the preliminary figure for the actual cost of address canvassing is $88 million higher than the original estimate of $356 million, an overrun of 25 percent. A key reason for the overrun is that the Bureau did not update its cost estimates to reflect changes to the address canvassing workload. Further, the Bureau did not follow its staffing strategy and hired too many listers. The Bureau's fingerprinting of employees, required as part of a criminal background check, did not proceed smoothly, in part because of training issues. As a result, over 35,000 temporary census workers (more than a fifth of the address canvassing workforce) were hired despite the fact that their fingerprints could not be processed and they were not fully screened for employment eligibility. The Bureau is refining instruction manuals and taking other steps to improve the fingerprinting process for future operations. GAO is unable to verify the accuracy of the $14.7 billion estimated cost of the 2010 Census because key details and assumptions are unavailable. However, the Bureau is taking steps to improve its cost estimation process for 2020, including training its staff in cost estimation skills. While the Bureau has taken a number of actions to mitigate risk and its overall readiness for 2010 has improved, much work remains to be done. Many things can happen over the next few months, and keeping the entire enterprise on plan will continue to be a daunting challenge fraught with risks. High levels of public participation, and continued Bureau and congressional attention to stewardship, performance, and accountability, will be key to a successful census.
The original Title I legislation was passed in 1965, but the 1994 reauthorization of ESEA mandated fundamental changes to the Title I program. One of the key changes involved the development of state systems of standards and assessments to ensure that students served by Title I were held to the same standards of achievement as all other children. Prior to 1994, some states had already implemented assessment systems, but these tended to be norm-referenced—students’ performance was judged in relation to the performance of other students. The 1994 legislation required assessments that were criterion-based—students’ performance was to be judged against an objective standard. Every state applying for Title I funds since 1994 agreed to implement the changes described in the 1994 law and to bring its assessment systems into compliance. States are also required to develop a definition of adequate yearly progress based on the assessments to hold schools accountable for educational progress. To help states that could not meet the proposed 2001 timeline, Education had authority to grant timeline waivers and compliance agreements to states under certain conditions. In its 2001 ESEA reauthorization, Congress increased testing requirements for states as well as the consequences for not improving test scores in schools and did not eliminate any of the requirements of the 1994 legislation. As shown in table 1, the 1994 and 2001 legislative requirements for assessment and accountability concern developing standards for content and performance; measuring improvement; implementing and administering assessments, including assessing students with limited English proficiency; reporting assessment data; and applying consequences for not meeting performance goals. Almost all states employ contractors to perform services to help them meet these requirements. 
Among states that we interviewed, contractors included private companies, universities, nonprofit organizations, and individual consultants. These entities were hired to provide services that may include assessment development, administration, scoring, analysis, and reporting of results. Some of these entities can provide combinations of services to states, such as test development and test scoring. States are responsible for monitoring contractor performance. Congress allowed states to phase in the 1994 ESEA requirements over time, giving states until the beginning of the 2000-01 school year to fully implement them, with the possibility of limited extensions. Education is responsible for determining whether or not a state is in compliance with these requirements and is authorized under ESEA to give states more time to implement the requirements as long as states are making adequate progress toward this goal. States submit evidence to Education showing that their system for assessing students and holding schools accountable meets Title I requirements. Education has contracted with individuals with expertise in assessments and Title I to review this evidence. The experts provide Education with a report on the status of each state regarding the degree to which a state’s system for assessing students meets the requirements and deserves approval. Using this and other information, the Secretary sends each state a decision letter that summarizes the experts’ review and communicates whether a state is in full compliance, in need of a timeline waiver, or, more seriously, in need of a compliance agreement. Education may withhold funds if a state does not meet the terms of its compliance agreement. 
The 1994 legislation was not specific about the amount of administrative funds that could be withheld from states failing to meet negotiated timelines, but the 2001 legislation states that Education must withhold 25 percent of state administrative funds until the state meets the 1994 requirements, including the terms of any timeline waivers or compliance agreements. In June 2000, we issued a report on states’ efforts to ensure compliance with key Title I requirements. At that time, we expressed concern about the number of states that were not positioned to meet the deadlines in the 1994 law. To increase compliance, we made two recommendations: that the Department of Education (1) facilitate among states the exchange of information and best practices regarding the implementation of Title I requirements and (2) implement additional measures to improve research on the effectiveness of different services provided through Title I to improve student outcomes. Education continues to work on the implementation of these recommendations. In addition, we said that Congress should consider requiring that states’ definitions of adequate yearly progress apply to disadvantaged children, as well as to the overall student population. The 2001 legislation does require that states apply adequate yearly progress requirements and report on the results by subgroups, including students in poverty, with disabilities, and with limited English proficiency. As of March 2002, 17 states were in compliance with the 1994 Title I assessment requirements; however, 35 were not. (See table 2.) Education has granted timeline waivers to 30 states, giving them more time to reach compliance. Education has asked five states to enter into compliance agreements that will establish the final date by which they must be in compliance before losing Title I funding. 
Among other requirements, states that are not in compliance have most frequently not met the specific requirements to assess all students and break out assessment data by subcategories of students. The 2001 legislation requires states to implement additional assessments through 2008, thus substantially augmenting current assessment requirements. Education has published a notice of proposed negotiated rulemaking in the Federal Register and has solicited comments from outside parties in preparation for establishing state compliance standards for the 2001 legislation. When Education determines that a state is not in compliance with the 1994 Title I assessment requirements, it may grant the state a timeline waiver for meeting those requirements. A waiver may not exceed 3 years. Education officials indicate that the agency grants waivers to states that have a history of success in implementing significant portions of their assessment systems, have a clear plan with a definite timeline for complying with the Title I requirements, and have the capacity to carry out the plan and thus meet those requirements. When a state requests a waiver, it must provide Education with a plan that includes a timeline for addressing deficiencies in the state’s assessment system. Education reviews this information to decide whether the waiver should be granted and its duration. So far, Education has granted timeline waivers to 30 states. (See table 3.) A compliance agreement is deemed necessary when Education determines that a state will not complete the implementation of its assessment system in a timely manner. According to Education officials, a state requiring a compliance agreement generally does not have a history of successful implementation, has not met a significant number of Title I requirements, and does not have a plan in place for meeting those requirements. Education recommends a compliance agreement so that a state may continue to receive Title I funds. 
Before Education may enter into a compliance agreement, a public hearing must be held in which the state has the burden of persuading Education that full compliance with the Title I requirements is not feasible until a future date. The state must be able to attain compliance within 3 years of the signing of the compliance agreement by the state and Education. The state then negotiates the terms of the agreement with Education. Education’s written findings and the terms of the compliance agreement are published in the Federal Register. A state that enters into a compliance agreement to address requirements of the 1994 Title I law and subsequently fails to meet the requirements of the agreement can be subject to loss of some state Title I administrative funds. Education is presently working on five compliance agreements (Alabama, Idaho, Montana, West Virginia, and the District of Columbia) and has held public hearings for each. The 2001 reauthorization of ESEA was signed into law on January 8, 2002. The act provides states not in compliance with the 1994 Title I requirements at the time of the signing of the 2001 legislation with a 90-day period that started on January 8, 2002, to negotiate changes in the dates by which they must be in compliance with the 1994 requirements. After the conclusion of this 90-day period, the legislation prohibits further extensions for compliance with the 1994 requirements. States failing to meet these negotiated timelines will be subject to loss of some of their Title I administrative funds. According to senior Education officials, this loss could be significant to states, as many use federal program administrative funds to pay the salaries of state department of education staff. A review of documents from Education shows that noncompliant states have most commonly not met two Title I requirements—assessing all students and breaking out assessment data by subcategories of students. 
Title I does not permit states to exempt any student subgroup from their assessments, and Education’s guidance states that individual exemptions may be permitted by the states in extraordinary circumstances. Nonetheless, many states allow substantial exemptions for students with disabilities and limited English proficiency. Several states reported that they have only recently amended laws that prohibited testing of some students with limited English proficiency. Title I also requires states, local districts, and schools to report the performance of students overall and in a variety of subcategories. These categories are gender, race, ethnicity, English proficiency status, migrant status, disability status, and economic disadvantage. Many states disaggregated data for some but not all of these categories. Documents from Education show that data for the disabled, migrant, and economically disadvantaged subcategories are the most common subgroups excluded from state, district, and school reports. In addition, many states lag in other areas, such as aligning assessments to state content standards. To achieve compliance with the 2001 legislation, states will need to add new standards and increase assessment efforts, as detailed in table 1. In responding to our survey, 48 states reported that they have developed content standards in science, but only 16 reported having annual assessments for math and 18 reported annual assessments for reading in all grades 3 through 8. In addition, states will not have the 2- to 3-year timeline waivers available to them as they had when they worked to meet the 1994 requirements. 
The new 2001 requirements listed in table 1 have deadlines that vary according to the requirement, and the Secretary of Education can give states 1 additional year from those deadlines to meet the new requirements, but only in case of a “natural disaster or a precipitous and unforeseen decline in the financial resources of the state.” Since the majority of states have not met the requirements of the 1994 law, it appears that many states may not be well-situated as they work to meet the schedule for implementing new requirements that build upon the 1994 requirements. States successful in meeting key Title I requirements attributed their success primarily to four factors. These factors were (1) the efforts of state leaders to make Title I compliance a priority; (2) coordination between staff of different agencies and levels of government; (3) obtaining buy-in from local administrators, educators, and parents; and (4) the availability of state-level expertise. Survey respondents identified inadequate funding as an obstacle to compliance. The state Title I officials we interviewed said that their states’ commitment of resources to norm-referenced assessments that conflicted with the 1994 Title I requirements contributed to this obstacle. Almost 80 percent of the respondents identified state leaders’ efforts as a factor that facilitated their meeting the 1994 Title I requirements. In every state that had attained compliance with the Title I requirements, the officials that we interviewed said that the governor, legislators, or business leaders made compliance with the Title I requirements a high priority. States described the development of high-level committees, new state legislation, and other measures to raise the visibility and priority of this issue. For example, one governor spearheaded a plan that used commissions to develop content standards and assessments aligned with those standards. 
Some state officials we interviewed reported that efforts by state department of education leaders resulted in major organizational changes in the state education department. For example, according to one Title I Director, the state changed the organizational structure and reporting relationships of state offices to organize them by function rather than by funding streams and to enhance coordination; according to another Title I Director, state leaders who did not support changes necessary to achieve compliance with Title I were replaced with staff who did support the changes. In responding to our survey, over 80 percent of the Title I officials identified the ability of staff or agencies to coordinate their efforts with one another as a factor that helped them meet requirements. In our interviews, state officials cited the necessity of coordination between state and local staff working in the areas of assessment, instruction, and procurement. Two of the states we interviewed specifically noted that when the assessment office shared a physical location with the Title I office, coordination was easier and the ability to achieve compliance with Title I was enhanced. Title I and other officials we interviewed in those states that had met the 1994 Title I assessment requirements noted that they had made great efforts to obtain buy-in from other state officials, local administrators, educators, and the public. They said that efforts to ensure buy-in paved the way for changes meant to ensure compliance with the assessment and accountability requirements of the 1994 legislation. Several officials we interviewed reported holding public meetings and focus groups to obtain input from parents, teachers, and local administrators regarding how the state should implement Title I requirements. They also reported conducting public relations campaigns to educate the public about the importance of complying with Title I requirements for standards-based assessment. 
One state, for example, conducted 6 years of focus groups and hearings and now holds an annual conference at which local education officials can obtain advice from experts regarding any concerns or problems they are having in implementing Title I requirements. In responding to our survey, over 80 percent of state Title I directors identified the availability of state-level expertise as a factor that facilitated their efforts to meet Title I requirements. State officials we interviewed reported that training for teachers and district personnel was often needed to apply new content standards in the classroom and to administer assessments correctly. Two states, for example, used regional centers to educate local staff on assessments and standards. Fifty percent of survey respondents identified inadequate funding as an obstacle in moving toward compliance, and noncompliant states cited this problem more often than compliant ones. In our interviews, Title I and assessment officials from noncompliant states reported that progress toward compliance with Title I requirements was stalled because of investments they had made in assessment systems that predated and conflicted with the requirements of the 1994 Title I reauthorization. Respondents said that they had made substantial investments of time and money in systems of assessment that often relied upon norm-referenced assessments and did not meet the 1994 requirement for criterion-based tests. They noted that it took their states several years to change from the old system of assessment to one meeting the requirements specified by the 1994 reauthorizing legislation. According to the officials we interviewed, the need to build support for starting over on another system, and to obtain funding for it, made it more difficult to complete the necessary changes in a timely manner. 
In addition, one survey respondent from a very small state noted that, because of its size, the state has few staff and lacks the technical expertise needed to develop a new system, hampering its ability to meet the requirements. Most states are taking some action to ensure that Title I assessments are scored accurately, that any exemptions for students with limited English proficiency are justified, and that students are receiving appropriate accommodations when these are needed to gather an accurate assessment of their abilities. Most states hire a contractor to score Title I assessments, and about two-thirds of these states monitor the scoring performed by the contractor. Some states that hire contractors have found errors in the contractors’ scoring, and in some cases these errors have had serious negative consequences for schools and students. Most states reported taking some actions to ensure that students with limited English proficiency and disabilities received appropriate accommodations during testing. Education is redesigning its current compliance and monitoring program to better monitor states’ implementation of Title I. According to our survey results, most states (44) hire a contractor for test scoring, but 16 of these states identified no monitoring mechanism to ensure the accuracy of their contractor’s scoring and reporting. Among those states that did report one or more monitoring mechanisms, 15 reported that they monitored the contractor’s scoring by comparing a sample of original student test results to the contractor’s results. A few states also reported, in interviews with us, that they compared their most recent test scores with those from previous years and looked for significant variations that suggested potential errors in scoring. However, in our interviews, some assessment officials indicated that they use this type of monitoring rather informally. 
The problems identified in assessment scoring suggest that these approaches do not always provide adequate assurance of complete and accurate results. Indeed, several of the states that use contractors to score tests reported that they have had problems with errors in scoring whether or not they had monitoring measures in place. In some cases, contractors marked correct answers as incorrect, and in other cases the contractors calculated the scores incorrectly. The errors were discovered by a number of individuals, including local district officials, parents, and state agency staff. These scoring errors had impacts on students, families, and school and district resources. Based on erroneous scores calculated by a contractor, one state sent thousands of children to summer school in the mistaken belief that their performance was poor enough to meet the criterion for summer intervention. In addition to disrupting families’ summer plans and potentially preventing student promotions, this may have drawn resources away from other necessary activities. In another case, based on a contractor’s erroneous scoring, a state incorrectly identified several schools as “in need of improvement,” a designation that carries with it both bad publicity and extra expense; for example, districts may have to fund the needed improvements. A few state officials that we interviewed told us that they have begun instituting processes to check the accuracy of scoring. For example, three states said that they had hired individuals with expertise in test scoring or other third parties to conduct independent audits. States that were in compliance with 1994 Title I assessment requirements generally had more complete monitoring systems, including measures such as using technical advisory committees to review results, conducting site visits, and following a sample of tests through the scoring and reporting process. 
In contrast, several states indicated they are still relying on contractor self-monitoring to ensure accurate scoring. Although Education is obligated under the Federal Managers’ Financial Integrity Act of 1982 and the Single Audit Act to ensure that states that receive federal funds comply with statutory and regulatory requirements to monitor contractors, it currently takes limited action regarding states’ monitoring of assessment contractors. Education’s Office of Inspector General (OIG) has reported deficiencies in an important vehicle for such oversight—Education’s compliance reviews of state programs. The compliance reviews are conducted on a 4-year cycle and include an on-site visit that lasts 1 week. Specifically, the OIG cited insufficient time to conduct the reviews, lack of knowledge among Education staff about areas they were reviewing, and a lack of consistency in how the reviews were conducted. Senior Education officials told us the department is redesigning the current compliance and monitoring program used for its on-site visits to better focus on outcomes and accountability in Title I and that it is addressing the OIG’s recommendations. However, a senior Education official who is working on the redesign of the compliance reviews told us that the current draft plans did not include specific checks on state monitoring of assessment scoring. Confidence in the accuracy of test scoring is critical to acceptance of the test results’ use in assessing school performance. According to our surveys and interviews, 33 states have taken at least minimal actions to ensure any exemptions for students with limited English proficiency are justified and 41 states take actions to ensure accommodations for students with disabilities are appropriate. Most states reported that they had developed standards for districts to follow in accommodating these students so that assessments could yield accurate measures of their performance. 
However, states reported few actions that would ensure that these guidelines were being followed. For example, 17 states reported that they compare the number of students with limited English proficiency tested within a given year against the number for the previous year. They used this comparison as their means of verifying that the numbers of students receiving exemptions were reasonable. As the pool of students in a particular school can change substantially from year to year, this comparison has obvious limitations. Moreover, students’ status, for example with respect to English proficiency, can change from year to year. Similarly, 37 states reported using an annual comparison of the number of students with disabilities being tested as a check for appropriate accommodations. However, it is not evident how such comparisons would allow states to ascertain the appropriateness of the accommodations. Survey results and interviews did indicate that more states are taking actions to monitor accommodations for students with disabilities than for students with limited English proficiency. For example, while 25 states reported that they had standards for accommodating students with limited English proficiency, 36 had standards for accommodating students with disabilities. The state officials that we interviewed told us that this was because districts built upon steps they had taken under the Individuals with Disabilities Education Act (IDEA) to document the accommodations needed by students with disabilities. In general, states said that the districts have more experience and technical expertise for assessing and supporting students with disabilities because of working under IDEA for many years. In contrast, some states lacked consistent standards for identifying students with limited English proficiency and more states were still working to develop alternate assessments or accommodations for these students. 
Augmenting the 1994 requirements, the new 2001 legislation requires that states annually assess the language proficiency of students with limited English proficiency by the 2002-03 school year. States do conduct cyclical monitoring of the implementation of all their programs that might be used to assess the appropriateness of district policy and practice with regard to testing accommodations. However, in a recent review, we found that states varied dramatically in the frequency of their on-site visits. The average time between visits to districts ranged from 2 years or less (6 states) to more than 7 years (17 states). This snapshot of the states’ status with respect to the 1994 Title I requirements suggests that many states may not be well-positioned to meet the requirements added in 2001. Only 17 states were in compliance with the assessment requirements of the 1994 law in March of 2002; therefore, the majority of states will still be working on meeting the 1994 requirements as they begin work toward meeting the new requirements. In addition, despite the enhanced emphasis on assessment results, states still appear to be struggling with ensuring that assessment data are complete and correct. The 1994 and 2001 ESEA reauthorizations raised student assessments to a new level of importance. The assessments are intended to help ensure that all students, including those who have disabilities and those who have limited English proficiency, are meeting challenging standards. In addition, assessment results are a key part of the mechanism for holding both schools and states accountable for improving educational performance. Thus, ensuring the completeness and accuracy of assessment data is central to measuring students’ progress and ensuring accountability. 
Without adequate oversight of assessment scoring, efforts to identify and improve low-performing schools could be hindered by lack of confidence in assessment results or uncertainty regarding whether particular schools have been appropriately identified for improvement. Education’s current monitoring does not include specific oversight of how states ensure the quality of scoring contractors’ work, but Education’s revision of its monitoring process provides the agency with an opportunity to help states ensure that scoring done by contractors is accurate. To enhance confidence in state assessment results, we recommend that when the Department of Education monitors state compliance with federal programs, it include checks for contractor monitoring related to Title I, Part A. Specifically, Education should include in its new compliance reviews a check on the controls states have in place to ensure proper test scoring and the effective implementation of these controls by states. We provided Education with a draft of this report for review. The Department’s official comments are printed in appendix II. In its comments, Education agreed with our recommendation. Education also provided us with technical comments that we incorporated in the report as appropriate. We are sending copies of this report to appropriate congressional committees and other interested parties. If you have any questions about this report, please contact me at (202) 512-7215 or Betty Ward-Zukerman at (202) 512-2732. Key contributors to this report were Mary Roy, G. Paul Chapman II, Laura Pan Luo, Corinna Nicolaou, and Patrick DiBattista. We conducted this review in conjunction with our partners in the Domestic Working Group. The Domestic Working Group’s objective is to allow officials in the federal, state, and local governmental audit communities to interact on a personal and informal basis on various topics of mutual concern. 
The group consists of 18 (6 federal, 6 state, and 6 local) top officials and is intended to complement the work of the intergovernmental audit forums and other professional associations. For this review, the Texas State Auditor's Office conducted a detailed assessment of data quality at the state and local levels in Texas, while the Department of Education's Office of Inspector General did so at the state and local levels in California and conducted additional work on control processes at the Department of Education. In Pennsylvania, the Pennsylvania Department of Auditor General conducted work at the state level and the Philadelphia Controller's Office pursued the same goal within the city of Philadelphia. To complement these efforts, GAO surveyed all states and conducted detailed interviews with several regarding their experiences in implementing major provisions of Title I. Specifically, we examined three key issues: (1) the status of states’ compliance with key 1994 Title I assessment requirements; (2) factors that have hindered or helped states move toward meeting the requirements; and (3) the actions states are taking to ensure that Title I assessments are scored accurately, exemptions for students with limited English proficiency are justified, and students with disabilities are accommodated during testing according to federal regulations. We obtained information on the first objective from the Department of Education. We met with Education officials and obtained updated listings of compliance throughout the audit. In addition, we reviewed state decision letters, peer reviews of state assessment systems, and reports completed or commissioned by Education’s Planning and Evaluation Service. 
To address the second and third questions, we used both a state survey of Title I directors and detailed interviews with state Title I officials and other state officials who played a key role in Title I compliance—often assessment officials and sometimes Special Education, program evaluation, and information technology officials. We sent the survey to all 50 state directors and to the District of Columbia and Puerto Rico. We received 50 completed surveys. We followed up with 19 states to clarify and expand on survey questions related to contracting for the scoring of tests. We interviewed officials from 5 states that had assessment systems approved by Education and 3 states that were still trying to attain compliance. We also interviewed two expert reviewers, Education officials with responsibility for Title I and program review, three officials at Education’s regional assistance centers, and officials at the Council of Chief State School Officers. We coordinated our work and findings with our audit partners, who provided us with information relative to their states’ activities.
Concerned that Title I of the Elementary and Secondary Education Act (ESEA) had not significantly improved the educational achievements of children at risk, Congress mandated major changes in 1994. States were required to adopt or develop challenging curriculum content and performance standards, assessments aligned with content standards, and accountability systems to measure progress in raising student achievement. In return, states were given greater flexibility in the use of Title I and other federal funds. The No Child Left Behind Act of 2001 augments the assessment and accountability requirements that states must implement and increases the stakes for schools that fail to make adequate progress. The 1994 legislation required states to comply with the requirements by January 2001 but allowed the Department of Education to extend that deadline. Education has granted waivers to 30 states to give them more time to meet all requirements. If states fail to meet the extended timelines, they are subject to the withholding of some Title I administrative funds. Title I directors indicated that a state's ability to meet the 1994 requirements improved when both state leaders and state agency staff made compliance a priority and coordinated with one another. Most directors said that inadequate funding hindered compliance. Many of the states reported taking action to ensure that Title I assessments were scored accurately, that any exemptions for students with limited English proficiency were justified, and that students with disabilities were receiving appropriate testing accommodations. As of March 2002, 17 states had complied with the 1994 assessment requirements; 35 states had not.
Modern financial services firms use a variety of holding company structures to manage risk inherent in their businesses. The United States regulatory system, which consists of primary bank supervisors, functional supervisors, and consolidated supervisors, oversees these firms in part to ensure that they do not take on excessive risk that could undermine the safety and soundness of the financial system. Primary bank supervisors oversee banks according to their charters, and functional supervisors—primarily, SEC, self-regulatory organizations (SRO), and state insurance regulators—oversee entities engaged in the securities and insurance industries as appropriate. Consolidated supervisors oversee holding companies that contain subsidiaries that have primary bank or functional supervisors. These subsidiaries are chartered, registered, or licensed as banks, securities firms, commodity trading firms, and insurers. International bodies have provided some guidance for consolidated supervision. Many modern financial firms are organized as holding companies that may have a variety of subsidiaries. In recent years, the financial services industry has become more global, consolidated within traditional sectors, formed conglomerates across sectors, and converged in terms of institutional roles and products. The holding company structure, which allows firms to expand geographically, move into other permissible product markets, and obtain greater financial flexibility and tax benefits, has facilitated these changes. Financial services holding companies now range in size and complexity from small enterprises that own only a single bank and are being used for financial flexibility and tax purposes to large diversified businesses with hundreds of subsidiaries—including banks, broker-dealers, insurers, and commercial entities—that have centralized business functions that may be housed in the holding company. 
In addition, modern financial corporate structures often consist of several tiers of holding companies. To varying degrees, all financial institutions are exposed to a variety of risks: the potential for financial loss associated with failure of a borrower or counterparty to perform on an obligation—credit risk; broad movements in financial prices, such as interest rates or stock prices—market risk; failure to meet obligations because of inability to liquidate assets or obtain funding—liquidity risk; inadequate information systems, operational problems, and breaches in internal controls—operational risk; negative publicity regarding an institution’s business practices and subsequent decline in customers, costly litigation, or revenue reductions—reputation risk; breaches of law or regulation that may result in heavy penalties—legal risk; risks that an insurance underwriter takes in exchange for premiums—insurance risk; and events not covered above, such as credit rating downgrades or factors beyond the control of the firm, such as major shocks in the firm’s markets—business/event risk. In addition, the industry as a whole is exposed to systemic risk, the risk that a disruption could cause widespread difficulties in the financial system as a whole. As firms have diversified, some holding companies have adopted enterprisewide risk management practices, under which they manage and control risks across the entire holding company rather than within subsidiaries. These firms have global risk managers who manage credit, market, liquidity, and other risks across the enterprise rather than within individual subsidiaries, such as securities, banking, or insurance businesses or subsidiaries in foreign countries. In addition, these firms generally provide services such as information technology on a firmwide basis and have firmwide compliance and internal audit functions. 
We have previously reported that most financial services firms are subject to federal oversight designed to limit the risks these firms take on because (1) consumers/investors do not have adequate information to impose market discipline on the institutions and (2) systemic linkages may make the financial system as a whole prone to instability. In the United States, this oversight is provided by primary bank and functional supervisors as well as by consolidated supervisors. As table 1 illustrates, in the United States a variety of federal bank supervisors oversee banks that are subsidiaries of holding companies. State bank supervisors also participate in the oversight of banks with state charters. Similarly, securities supervisors, including SEC and SROs such as the New York Stock Exchange and NASD, oversee broker-dealer subsidiaries, and state insurance supervisors oversee insurance companies and products. While each of the agencies has multiple goals, all are involved in assessing the financial solvency of the institutions they regulate. All of the primary bank supervisors use the same framework to examine banks for safety and soundness and compliance with applicable laws and regulations. 
Among other things, they examine

• whether the bank has adequate capital on the basis of its size, the composition of its assets and liabilities, and its credit and market risk profile;
• whether the bank’s asset quality is appropriate given the credit risk of the loans in its portfolio;
• whether the bank’s earnings trend measures up to that of its peers;
• the competence and integrity of the bank’s management and board of directors in managing the risks of the bank’s activities, and their record of complying with banking regulations and other laws;
• whether the bank has adequate liquidity based on its deposit volatility, credit conditions, loan commitments and other contingent claims on the bank’s assets, and its perceived ability to raise funds on short notice at acceptable market rates; and
• whether the bank adequately identifies and manages its exposures to changes in interest rates and, as applicable, foreign exchange rates and commodity and equity prices.

Primary bank examiners rate banks in each of these areas; the ratings are usually referred to as CAMELS ratings (capital adequacy, asset quality, management ability, earnings, liquidity, and, where appropriate, sensitivity to market risk). SROs oversee certain aspects of broker-dealer activity. SEC concurrently oversees these SROs and independently examines broker-dealers. SEC considers its enforcement authority crucial for its protection of investors. Under this authority, it brings actions against broker-dealers and other securities firms and professionals for infractions such as insider trading and providing false or misleading information about securities or the companies that issue them. However, to protect investors, SEC also requires broker-dealers to maintain a level of capital that should allow the broker-dealer to satisfy the claims of its customers, other broker-dealers, and creditors in the event of potential losses from proprietary trading or operational events.
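The capital requirement described above rests on a simple idea: start from net worth and deduct conservative “haircuts” from proprietary positions so that enough capital remains to satisfy customer and creditor claims even if those positions lose value. The sketch below illustrates only that arithmetic; the asset classes and haircut rates are hypothetical placeholders, not SEC’s actual schedules.

```python
# Illustrative haircut-style net capital computation. The haircut
# rates below are hypothetical, not SEC's actual requirements.
HAIRCUTS = {
    "us_treasury": 0.01,              # assumed 1% deduction
    "investment_grade_corporate": 0.05,
    "equity": 0.15,
}

def net_capital(net_worth, positions):
    """Net worth minus haircuts applied to proprietary positions."""
    total_haircut = sum(
        market_value * HAIRCUTS[asset_class]
        for asset_class, market_value in positions.items()
    )
    return net_worth - total_haircut

# Hypothetical firm: $2,000 net worth, $1,000 in Treasuries, $500 in equities.
positions = {"us_treasury": 1_000.0, "equity": 500.0}
print(round(net_capital(2_000.0, positions), 2))  # 1915.0
```

A real computation also covers items such as unsecured receivables and minimum dollar floors; the point here is only that larger or riskier positions reduce the capital deemed available to meet claims.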
SEC and the SROs examine broker-dealers to determine if they are maintaining required capital and evaluate broker-dealers’ internal controls. The central purpose of insurance regulation is to protect consumers by monitoring the solvency of insurers and their business practices. Insurance companies are supervised on a state-by-state basis, although states often follow general standards promulgated by the National Association of Insurance Commissioners (NAIC), a private voluntary association for insurance regulators. For example, insurance supervisors generally require insurance firms to prepare their quarterly and annual financial statements in a format unique to insurance, known as statutory accounting principles, that is maintained by NAIC. Insurance supervisors impose capital requirements on insurance companies to try to limit insurance company failures and ensure their long-run viability. In addition, all state insurance supervisors monitor insurers’ business practices and the terms of insurance contracts in their states. In the United States, three agencies provide consolidated supervision—the Federal Reserve oversees bank holding companies, OTS oversees thrift holding companies, and SEC oversees certain CSEs on a consolidated basis. As table 2 shows, the number and type of institutions these agencies oversee vary. SEC, under its CSE program, oversees only large, complex firms: Bear Stearns & Co., Goldman Sachs & Co., Lehman Brothers Inc., Merrill Lynch & Co. Inc., and Morgan Stanley & Co. The Federal Reserve and OTS, by contrast, oversee firms that vary significantly in size and complexity. Among larger firms, the Federal Reserve oversees Bank of America Corporation, Citigroup, and JPMorgan Chase, and OTS oversees American International Group Inc., General Electric Company, General Motors Corporation, Merrill Lynch & Co. Inc., and Washington Mutual Inc.
Most of the large bank holding companies that the Federal Reserve oversees are primarily in the business of banking but to a lesser extent engage in securities or other nonbank activities as well. Many of the large firms OTS oversees are engaged in commercial businesses, as well as securities and insurance. The Federal Reserve and OTS also oversee the vast majority of U.S. financial institutions that have remained relatively small and are not complex. The Federal Reserve and OTS base their consolidated supervision programs on their long-standing authority to supervise holding companies, while SEC has only recently become a consolidated supervisor. The Federal Reserve’s authority is set forth primarily in the Bank Holding Company Act of 1956, which contains the supervisory framework for holding companies that control commercial banks. OTS’s consolidated supervisory authority is set forth in the Home Owners Loan Act of 1933, as amended, which provides for the supervision of holding companies that control institutions with thrift charters (other than bank holding companies). SEC bases its authority on section 15(c)(3) of the Securities Exchange Act of 1934. Specifically, in 2004, SEC adopted the Alternative Net Capital Rule for CSEs based on its authority under that provision, which authorizes SEC to adopt rules and regulations regarding the financial responsibilities of broker-dealers that it finds necessary or appropriate in the public interest or for the protection of investors. Under the CSE rules, qualified broker-dealers can elect to be supervised by SEC on a consolidated basis. If the holding company of the broker-dealer also is a bank holding company, SEC defers to the Federal Reserve’s supervision of the holding company. At the same time that it issued the CSE rules, SEC promulgated final rules for the consolidated supervision of supervised investment bank holding companies (SIBHC) pursuant to a provision in the Gramm-Leach-Bliley Act (GLBA). 
The GLBA provision established a supervisory framework for SIBHCs—qualified investment bank holding companies that do not control an insured depository institution—similar to the approach prescribed in the act for the supervision of bank and thrift holding companies. As of this date, no firm has elected to be regulated under the SIBHC scheme. The Federal Reserve, SEC, and OTS vary in their missions in that the Federal Reserve and SEC have responsibilities outside of the supervision and regulation of financial institutions. The Federal Reserve is the central bank of the United States, established by Congress in 1913 to provide the nation with a safer, more flexible, and more stable monetary and financial system. It is responsible for conducting the nation’s monetary policy; protecting the credit rights of consumers; playing a major role in operating the nation’s payment system; and providing certain financial services to the U.S. government, the public, financial institutions, and foreign official institutions. The Federal Reserve consists of the Board of Governors (Board) and 12 Districts, each with a Federal Reserve Bank (District Bank). SEC is responsible for, among other things, overseeing the disclosure activities of publicly traded companies and the activities of stock markets. The three agencies engaged in consolidated supervision are financed differently. The Federal Reserve primarily is funded by income earned from U.S. government securities that it has acquired through open market operations; OTS primarily by assessments on the firms it supervises; and SEC by congressional appropriations. SEC collects fees on registrations, certain securities transactions, and other filings and reports. However, unlike the banking regulators, SEC deposits its collections in an SEC-designated account at the U.S. Treasury that is used by SEC’s congressional appropriators for, among other things, providing appropriations to SEC. International bodies in which U.S.
supervisors participate have developed guidance for consolidated supervision of large, complex, internationally active financial firms or conglomerates. The Basel Committee on Banking Supervision (BCBS) does not have formal supervisory authority; rather, it provides an international forum for regular cooperation on banking supervisory matters, including the formulation of broad supervisory standards and guidelines. BCBS has recently revised its “Core Principles for Effective Banking Supervision,” which include countries’ requiring that banking groups be subject to consolidated supervision, although the definition of a banking group does not always include a top-tier holding company. These principles include a number of specific criteria that are presented in appendix II of this report. BCBS also has developed the Basel Capital Standards, which have been adopted in various forms by specific countries; a revised set of standards, Basel II, is currently under consideration for adoption in the United States. These standards require that holding companies engaged in banking meet specific risk-based capital requirements. In addition, the Joint Forum, an international group of supervisors established in 1996 under the aegis of BCBS and equivalent bodies for securities and insurance regulators to consider issues related to the supervision of financial conglomerates, has issued supervisory guidance. The guidance focuses on risks and controls and specifically directs examiners to review the organizational structure, capital level, risk management, and control environment of conglomerates. The EU promulgated rules for consolidated supervision of certain firms operating in Europe that took effect in 2005. U.S.-headquartered firms with operations in EU countries are among those affected by these rules, which therefore have had implications for consolidated supervision in the United States.
The Financial Conglomerates Directive (FCD) requires that all financial conglomerates operating in EU countries have a consolidated supervisor. Conglomerates not headquartered in the EU must have an equivalent consolidated supervisor in their home country that has been approved by a designated supervisor from an EU member state in which the company operates. That supervision focuses on capital adequacy, intragroup transactions, risk management, and internal controls. The Federal Reserve, OTS, and SEC have all responded to the dramatic changes in the financial services industry, and now, for many of the largest, most complex financial services firms in the United States, these agencies examine risks, controls, and capital levels on a consolidated basis. Given the differences in their authorities and in the institutions that they supervise, as well as other factors, the agencies’ specific policies and procedures differ. Also, the agencies divide responsibilities for developing and implementing policies across a number of agency components. The Federal Reserve and OTS generally set policy centrally and implement it through District Banks or regional offices, respectively. At SEC, Market Regulation has primary responsibility for policy and for overseeing how CSEs manage risks, while SEC’s examination offices scrutinize more control-oriented activities. The oversight of complex firms involves multiple regulators. Finally, for their smaller or less complex firms, the Federal Reserve and OTS use abbreviated examination programs. All of the agencies have responded to the dramatic changes in the financial services industry, including dramatic growth, increased complexity in terms of the products and services firms offer, more global operations, and greater use of enterprisewide risk management. 
Now, for many of the largest, most complex financial services firms in the United States, the agencies focus on the firms’ risks, controls, and capital levels on a consolidated basis. However, the agencies have developed and revised their programs over different time frames and used different frameworks. The Federal Reserve, beginning in the mid-1990s, has developed a systematic risk-focused approach for large, complex banking organizations (LCBO); OTS began to move toward a more consistent, risk-focused approach for some large, complex firms in 2003; and SEC’s CSE program, implemented in 2004, is new and evolving. Both the Federal Reserve and OTS have approaches to supervision of smaller, less complex holding companies that reflect the risks of these institutions. In the mid-1990s, the Federal Reserve began to develop a systematic risk-focused approach for the supervision of LCBOs. The program focuses on those business activities posing the greatest risk to holding companies and managements’ processes for identifying, measuring, monitoring, and controlling those risks. According to the Federal Reserve, LCBOs have significant on- and off-balance sheet risk exposures, offer a broad range of products and services at the domestic and international levels, are overseen by multiple supervisors in the United States and abroad, and participate extensively in large-value payment and settlement systems. As of December 31, 2005, there were 21 LCBOs that together controlled 62 percent of all banking assets in the United States. In issuing a revised rating system in 2004, the Federal Reserve acknowledged that the firms it oversees had become even more concentrated and complex. In addition, it noted that the growing depth and sophistication of financial markets in the United States and around the world have led to a wider range of activities being undertaken by banking institutions.
This new rating system has components for the bank holding company’s risk management, financial condition, and potential impact of the parent (and its nondepository subsidiaries) on the insured depository institution, as well as a composite rating of the holding company’s managerial and financial condition and potential risk to its depositories; the system also includes the supervisory ratings for the subsidiary depository institution. Generally, policy changes for the consolidated supervision program are made by the Board and implemented by the 12 District Banks, which are responsible for day-to-day examination activities of banks and bank holding companies. However, the distinction between policy setting and implementation blurs at the edges: Board staff may participate in examinations, and District Bank officials serve on committees that provide input for policy development and help ensure some level of consistency in supervision across District Banks. The Federal Reserve requires that all bank holding companies with consolidated assets of $500 million or more meet risk-based capital requirements developed in accordance with the Basel Accord and has proposed, with the other bank supervisors, revised capital adequacy rules to implement Basel II for the largest bank holding companies. In addition, the Federal Reserve requires that all bank holding companies serve as a source of financial and managerial strength to their subsidiary banks. The Federal Reserve’s supervisory cycle for LCBOs generally begins with the development of a systematic risk-focused supervisory plan, follows with the implementation of that plan, and ends with a rating of the firm. The rating includes an assessment of holding companies’ risk management and controls; financial condition, including capital adequacy; and impact on insured depositories.
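The risk-based capital requirement referenced above works by weighting each asset category by its relative credit risk and comparing qualifying capital with the weighted total. A minimal sketch, using the broad risk-weight buckets of the original Basel Accord; an actual calculation spans many more categories plus off-balance-sheet exposures, and the balance sheet shown is invented for illustration.

```python
# Simplified Basel-style risk-based capital ratio. Risk weights follow
# the original Accord's broad buckets; real rules are far more granular.
RISK_WEIGHTS = {
    "cash_and_sovereigns": 0.0,
    "interbank_claims": 0.20,
    "residential_mortgages": 0.50,
    "corporate_loans": 1.00,
}

def capital_ratio(total_capital, exposures):
    """Qualifying capital divided by risk-weighted assets (RWA)."""
    rwa = sum(amount * RISK_WEIGHTS[category]
              for category, amount in exposures.items())
    return total_capital / rwa

# Hypothetical balance sheet (amounts in millions).
exposures = {
    "cash_and_sovereigns": 200.0,    # contributes 0 to RWA
    "interbank_claims": 100.0,       # contributes 20
    "residential_mortgages": 400.0,  # contributes 200
    "corporate_loans": 300.0,        # contributes 300
}
ratio = capital_ratio(52.0, exposures)            # 52 / 520 = 10%
print(f"{ratio:.1%} (8% minimum met: {ratio >= 0.08})")
```

The 8 percent minimum total-capital ratio is the original Accord’s floor; under this weighting scheme, shifting the same dollar amount from corporate loans into government securities lowers RWA and raises the ratio without any new capital.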
The Federal Reserve noted that in addition to its other activities, it obtains financial information from LCBOs in a uniform format through a variety of periodic regulatory reports that other holding companies also provide. Table 3 provides detailed descriptions for each of the steps. For LCBOs, a management group, which consists of District Bank and Board officials, provides additional review of supervisory plans and examination findings. Annually, the management group chooses three or four topics for horizontal exams—coordinated supervisory reviews of a specific activity, business line, or risk management practice conducted across a group of peer institutions. Horizontal reviews are designed to (1) identify the range of practices in use in the industry, (2) evaluate the safety and soundness of specific activities across business lines or across systemically important institutions, (3) provide better insight into the Federal Reserve’s understanding of how a firm’s operations compare with a range of industry practices, and (4) consider revisions to the formulation of supervisory policy. Horizontal examination topics have included stress-testing practices at the holding company level and banks’ compliance with the privacy provisions of GLBA. Staff from more than one District Bank are likely to participate in a given review. In addition, because many of the large bank holding companies have national banks, nonmember banks, or nonbank operations overseen by another governmental agency, Federal Reserve guidance instructs staff, consistent with the requirements of GLBA, to leverage information and resources from OCC, FDIC, SEC, and other agencies, as applicable. After the examinations are completed, the Federal Reserve informs firms generally on how they compare with their peers and may provide information on good practices as well. The Federal Reserve has a range of formal and informal actions it can take to enforce its regulations for holding companies.
The agency’s formal enforcement powers are explicitly set forth in federal law. Federal Reserve officials noted that the law provides explicit authority for any formal actions that may be warranted and incentives for firms to address concerns promptly or through less formal enforcement actions, such as corrective action resolutions adopted by the firm’s board of directors or memorandums of understanding (MOU) entered into with the relevant District Bank. According to Federal Reserve officials, in 2006 the Federal Reserve took six formal enforcement actions against holding companies. In 2003, OTS revised its handbook for holding company supervision to reflect new guidance for its large, complex firms or conglomerates that it says relies on the international consensus (as evident in Joint Forum publications) of what constitutes appropriate consolidated oversight of conglomerates and also responds to the EU’s FCD. While the guidance is presented in OTS’s standard CORE—capital, organization, relationship, and earnings—format, it differs from OTS’s standard guidance in that it focuses on consolidated risks, internal controls, and capital adequacy rather than on a more narrow view of the holding company’s impact on subsidiary thrifts. As with the Federal Reserve, OTS headquarters officials generally set nationwide policies and programs and regional office staff conduct examinations. However, the Complex and International Organizations group (CIO), which was established in 2004 in OTS headquarters, both sets policy for holding company supervision of conglomerates and oversees examiners for three firms that must meet the FCD. CIO is developing a process that is similar in some respects to the Federal Reserve’s. First, on-site examination teams consisting of lead examiners and others who focus on specific risk areas provide continuous supervision. 
Second, while examiners for firms in the CIO group we spoke with had not had a formal supervisory plan in past years, these examiners are now preparing plans that focus on the coming year and, unlike the Federal Reserve’s, take a longer 3-year perspective as well. A CIO official said that this planning framework allows them to examine high-risk areas on an annual basis while ensuring that lower risk areas are covered at least every 3 years. The plans we reviewed were less detailed than those of the Federal Reserve; however, the official in charge of this program said that the group is looking to develop more systematic risk analyses and has reviewed those being used by the Federal Reserve and its counterparts in Europe. Although OTS’s guidance for its large, complex firms provides explicit directions on determining capital adequacy, OTS does not have specific capital requirements for holding companies. Generally, OTS requires that firms hold a “prudential” level of capital on a consolidated basis to support the risk profile of the holding company. For its most complex firms, OTS requires a detailed capital calculation that includes an assessment of capital adequacy on a groupwide basis and identification of capital that might not be available to the holding company or its other subsidiaries because it is required to be held by a specific entity for regulatory purposes. The EU’s European Financial Conglomerates Committee’s guidance to EU country supervisors on the U.S. regulatory system noted OTS’s lack of capital standards; however, the United Kingdom’s Financial Services Authority (FSA) has designated OTS as an equivalent supervisor for the two firms it has reviewed, and in February 2007, the French supervisory body, the Commission Bancaire, approved OTS as an equivalent supervisor for another complex conglomerate. As noted, only three firms currently are subject to the increasingly systematic, detailed analysis of risks being implemented through the CIO program.
Regional staff oversee other large, complex conglomerate thrift holding companies and use OTS’s standard CORE framework, which focuses more directly on the risks to the thrift posed by its inclusion in the holding company structure rather than on an assessment of the risk management strategy of the holding company (see table 4). We also found that OTS regional examination staff were expanding their risk analyses of some large, complex holding companies, but they had not adopted the CIO program. Similar to the Federal Reserve, OTS has explicit authority to take enforcement actions against thrift holding companies that are in violation of laws and regulations. According to OTS officials, in 2005 OTS took three formal enforcement actions against holding companies. In 2004, SEC adopted its CSE program partly in response to international developments, including the need for some large U.S. securities firms to meet the FCD. However, SEC says that the program is a natural extension of activities that began as early as 1990 when, under the Market Reform Act, SEC was given supervisory responsibilities aimed at assessing the safety and soundness of securities activities at a consolidated or holding company level. Formally, SEC supervision under the CSE program consists of four components: a review of firms’ applications to be admitted to the program; a review of monthly, quarterly, and annual filings; monthly meetings with senior management at the holding company; and an examination of the books and records of the holding company, the broker-dealer, and material affiliates that are not subject to supervision by a principal regulator. Under the net capital rule establishing the CSE program, the Division of Market Regulation has responsibility for administering the program. Market Regulation recommends policy changes to the SEC Commissioners and, through its Office of Prudential Supervision, performs continuous supervision of the five firms that have been designated as CSEs.
Each firm is overseen by three analysts, and each of these analysts oversees at least two firms. This office includes a few additional specialists as well. Although the rule did not specify a role for the Office of Compliance Inspections and Examinations (OCIE), this office, with the assistance of the Northeast Regional Office (NERO), examines firms’ controls and capital calculations. Each of these offices has designated staff positions for the CSE program but also uses staff from SEC’s broker-dealer examination program. Market Regulation generally is responsible for overseeing the financial and operational condition of CSEs, including how they manage their risks, but does not provide written detailed guidance for examiners. During the reviews of the firms’ applications for admittance to the CSE program, staff reviewed market, credit, liquidity, operational, and legal and compliance risk management, as well as the internal audit function, and continue to do so on an ongoing basis. The firms are to provide SEC with monthly, quarterly, and annual filings, such as consolidated financial statements and risk reports, substantially similar to those provided to the firms’ senior managers. Unlike the Federal Reserve and OTS, which have examiners continuously on site at some of their larger, more complex firms, Market Regulation staff are not on site at the companies. However, Market Regulation staff meet at least monthly with senior risk managers and financial controllers at the holding company level to review this material and share the written results of these meetings among themselves and with the SEC Commissioners. These reports show that meetings with the firms cover a variety of subjects, such as fluctuations in firmwide and asset-specific value-at-risk, changes to risk models, and the impact of recent trends and events such as Hurricane Katrina.
Market Regulation staff also review activities across firms to ensure that firms are all held to comparable standards and that staff understand industry trends. Market Regulation staff have conducted some horizontal reviews of activities, such as hedge fund derivative products and event-driven lending, that are similar in some ways to the Federal Reserve’s horizontal examinations. In addition, one staff member attends all monthly meetings that Market Regulation staff hold with the firms in a given month. That staff member identifies common themes and includes these in the monthly reports. OCIE generally is responsible for testing the control environments of the CSEs, focusing on compliance issues. OCIE and NERO staff followed detailed examination guidance when reviewing CSE applications, but, unlike that of the Federal Reserve and OTS, this guidance is not publicly available. They reviewed firms’ compliance with the CSE rule, including whether unregulated material affiliates were in compliance with certain rules that had previously applied only to registered broker-dealers. OCIE and NERO staff continue to conduct exams of the holding companies, the registered broker-dealers, and unregulated material affiliates. During our review of the program, NERO completed the first examination of one of the CSEs, which included a review of the capital computations for the holding company and broker-dealer, the firm’s internal controls around managing certain risks, and internal audit. As a condition of CSE status, CSEs agree to compute a capital adequacy measure at the holding company in accordance with the new Basel II standards, and OCIE and NERO validated the firms’ calculations as part of their reviews of firms’ CSE applications. The U.S.
bank supervisory agencies have proposed rules to implement Basel II standards for the largest, most complex banking organizations, and SEC officials said they will continue to monitor these developments and will adopt rules that are largely consistent with the banking agencies’ final rules implementing the Basel II standards. According to Market Regulation staff, CSEs’ use of the Basel II capital standards should allow for greater comparability between CSEs’ financial position and that of other securities firms and banking institutions. As part of their supervisory activities, Market Regulation staff review the models or other methodologies firms used to calculate capital allowances for certain types of risks. While the CSEs’ broker-dealers are also required to compute capital according to Basel standards, these broker-dealers are required to maintain certain capital measures above minimum levels. SEC staff also noted that CSEs are required to have sufficient liquidity so that capital would be available to any entity within the holding company if it were needed. Unlike the bank regulatory agencies, SEC does not have a range of enforcement actions that it can take for violations of the CSE regulations because participation in the CSE program is voluntary. That is, a violation of the CSE regulations can disqualify a broker-dealer from the benefits of CSE status without resulting in a violation of SEC regulations or laws that could lead to an enforcement action. SEC staff noted, however, that the prospect of not being qualified to operate as a CSE served as an effective incentive for complying with CSE requirements. Large firms generally contain a number of subsidiaries that are overseen by primary bank and functional supervisors in the United States as well as by supervisors in other countries; however, in some cases, the holding company’s supervisor may also be the primary bank or functional supervisor for subsidiaries in these holding companies. 
Figure 1 illustrates this regulatory complexity for a hypothetical financial holding company. A hypothetical thrift holding company and CSE would differ in that it would not have national or state member bank subsidiaries and potentially could have commercial subsidiaries. GLBA instructed the Federal Reserve, SEC, and OTS, in their roles as consolidated supervisors, to generally rely on primary bank and functional supervisors for information about regulated subsidiaries of the holding company. While the Federal Reserve is the primary federal bank supervisor for the lead bank in some bank holding companies, OCC and FDIC are more often the primary bank supervisors for the lead banks in these holding companies. OCC, because of the growth in the national banking system over the past 10 years, is now most likely to be the supervisor of the lead banks that are owned by bank holding companies in the Federal Reserve’s LCBO program. In examining these banks, OCC uses a systematic, risk-focused process similar to that of the Federal Reserve. Specifically, OCC’s process begins with a risk analysis that drives the examination process over the course of the examination cycle. According to OCC’s handbook, in assessing the bank’s condition, examiners must consider not only risks in the bank’s own activities but also risks of activities engaged in by nonbanking subsidiaries and affiliates in the same holding company. FDIC is the primary federal supervisor of the lead bank in some larger bank holding companies and of most of the banks in smaller holding companies. In addition, FDIC officials told us that, as part of the agency’s deposit insurance role, it has a continuous on-site presence at six of the largest LCBOs where OCC is the primary bank supervisor of the lead bank and the Federal Reserve is the consolidated supervisor. Larger bank holding companies also include a number of other regulated subsidiaries, including broker-dealers and thrifts.
Except when a thrift is in a bank holding company, OTS serves as the supervisor for both the thrift and the thrift holding company. While most of these firms are in the business of banking, as table 5 shows, OTS also oversees a number of complex holding companies that are primarily in businesses other than banking, and some of these are in regulated industries, especially insurance. In addition, a number of thrift holding companies contain industrial loan companies (ILC), state-chartered institutions overseen by FDIC, and some have broker-dealers as well. OTS oversees a large number of firms where insurance is the primary business of the firm and thus shares some responsibilities with state insurance supervisors that have adopted their own holding company framework. In addition, OTS and FDIC share responsibilities when thrift holding companies include ILCs. Generally, OTS guidance refers to protecting thrifts in thrift holding companies rather than more broadly to the protection of insured depositories. However, thrifts and ILCs in the same thrift holding company may face similar threats to their safety and soundness. As the consolidated supervisor of CSEs, SEC oversees large, complex entities that include insured depositories that have FDIC or OTS as their primary federal supervisor. Those CSEs that have thrifts are also supervised at the consolidated level by OTS. SEC’s consolidated supervisory activities focus on the financial and operational condition of the holding company and, in particular, activities conducted in unregulated material affiliates that may pose risks to the group. SEC staff noted that they generally rely on the primary bank supervisor with respect to examination of insured depositories. 
In recent years, the Federal Reserve has limited the resources it uses to oversee the 4,325 small shell bank holding companies (i.e., companies that are noncomplex with assets of less than $1 billion) because it perceives that those entities pose few risks to the insured depositories they own. The Board has adopted a special supervisory program for these companies that includes off-site monitoring and relies heavily on primary federal supervisors' bank examinations. For these companies, the Federal Reserve assigns only risk and composite ratings, which generally derive from primary bank supervisors' examinations. Also, in addition to the primary bank supervisors' examinations, Federal Reserve examiners review a set of computer surveillance screens that include the small shells' financial information and performance, primarily to determine if the firms need more in-depth reviews. Federal Reserve staff told us that they spend a limited amount of time on small shell holding company inspections. For example, a Board official said they spend on average about 2 to 2.5 hours annually on each small shell bank holding company. According to Federal Reserve guidance, the only documentation required for small shell ratings, where no material outstanding company or consolidated issues are otherwise indicated, is bank examination reports and a copy of the letter transmitting the ratings to the company. Similarly, OTS uses an abbreviated version of its CORE program for its low-risk and noncomplex, or Category I, firms, which make up 401 of the 476 holding companies OTS oversees. Once examiners determine that the holding company is a shell, they are directed to the abbreviated program, which differs from the full CORE in that it requires less detailed information in each of the four CORE areas.
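The figures above permit a rough back-of-the-envelope estimate. The sketch below is our own illustration, not an agency calculation; it simply multiplies the 4,325 small shell holding companies by the 2 to 2.5 hours per company cited by the Board official, and the 2,000-hour work year used to express the result in full-time equivalents is an assumed figure.

```python
# Back-of-the-envelope estimate (our illustration, not a Federal Reserve figure):
# aggregate annual examiner time implied by 4,325 small shell bank holding
# companies at roughly 2 to 2.5 hours apiece.

NUM_SMALL_SHELLS = 4325            # noncomplex companies, assets under $1 billion
HOURS_PER_COMPANY = (2.0, 2.5)     # range cited by a Board official

low = NUM_SMALL_SHELLS * HOURS_PER_COMPANY[0]
high = NUM_SMALL_SHELLS * HOURS_PER_COMPANY[1]
print(f"Implied aggregate effort: {low:,.0f} to {high:,.0f} examiner-hours per year")

# Assuming a notional 2,000-hour work year, the whole program consumes only a
# handful of full-time-equivalent examiners systemwide.
print(f"Approximate FTEs: {low / 2000:.1f} to {high / 2000:.1f}")
```

On these assumptions, the entire small-shell population absorbs on the order of 9,000 to 11,000 examiner-hours annually, which illustrates why the Board treats these firms as a low-resource off-site monitoring population.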
For example, the abbreviated CORE does not require that examiners calculate leverage and debt-to-total-asset ratios in the capital component of the examination, while these are required in the full CORE program. However, the handbook advises examiners to refer to the full CORE program for more detailed steps whenever they feel it is warranted. In addition, the handbook advises examiners to consider the specific issues that relate to certain holding company populations, such as those containing insurance firms. At one regional office, managers told us that examinations of shell holding companies take 5 to 10 days; however, because the holding company examination is conducted concurrently with the thrift examination, OTS cannot determine the exact number of hours spent reviewing the holding company. An OTS official noted that for shell holding companies, the difference in examiners' activities between holding company and thrift examinations is largely a matter of perspective rather than a difference in what examiners review. In recent decades, the environment in which the financial services industry operates, and the industry itself, have undergone dramatic changes that include globalization, consolidation within traditional sectors, conglomeration across sectors, and convergence of institutional roles and products. The industry now is dominated by a relatively small number of large, complex, and diversified financial services firms, and these firms generally manage their risks on an enterprisewide or consolidated basis. Consolidated supervision provides the basis for supervisory oversight of this risk management, but managing consolidated supervision programs in an efficient and effective manner presents challenges to the supervisory agencies.
We found that the Federal Reserve, OTS, and SEC were providing supervision consistent with international standards for comprehensive, consolidated supervision for many of the largest, most complex financial services firms in the United States. While the agencies have articulated anticipated benefits or broad strategic goals for their supervision programs in testimony and other documents, the objectives for their consolidated supervision programs are not always clearly defined or distinguished from the objectives for their primary supervision programs. Without more specific program objectives, activities linked to these objectives, and performance measures identified to assess the extent to which these objectives are achieved, the agencies have a more difficult task of ensuring efficient and effective oversight. In particular, with the financial services industry’s increased concentration and convergence in product offerings, paired with a regulatory structure that includes multiple agencies, it is more difficult to ensure that the agencies are providing oversight that is not duplicative and is consistent with that provided by primary, functional, or other consolidated supervisors. As a result, the agencies could better ensure that consolidated supervision was being provided efficiently, with the minimal regulatory burden consistent with maintaining safety and soundness, by more clearly articulating the objectives of their consolidated supervision programs, developing and tracking performance measures that are specific to the programs, and improving supervisory guidance. The environment in which the financial services industry operates, and the industry itself, have undergone dramatic changes. Financial services firms have greater capacity and increased regulatory freedom to cross state and national borders, and technological advances have also lessened the importance of geography. 
Increasingly, the industry is dominated by a relatively small number of large, complex conglomerates that operate in more than one of the traditional sectors of the industry. These conglomerates generally manage their risks on an enterprisewide, or consolidated, basis. Generally, the greater ability of firms to diversify into new geographic and product markets would be expected to reduce risk, with new products and risk management strategies providing new tools to manage risk. Because of linkages between markets, products, and the way risks interact, however, the net result of the changes on an individual institution or the financial system cannot be definitively predicted. Consolidated supervision provides a basis for the supervisory agencies to oversee the way in which financial services firms manage risks and to do so on the same basis on which many firms manage their risks. While primary bank and functional supervisors retain responsibility for the supervision of regulated banks, broker-dealers, or other entities, the consolidated supervisor's approach can encompass a broader, more comprehensive assessment of risks and risk management at the consolidated level. The international consensus on standards or "best practices" for supervising conglomerates that include banks includes the review of risks and controls at the consolidated level, capital requirements at the consolidated level, and the authority to take enforcement actions against the holding company. As described above, we found that the Federal Reserve generally met these standards for its LCBO firms. OTS meets these standards for those firms overseen by CIO. For other firms that might be considered conglomerates, OTS does a more limited review of the risk posed to insured thrifts by activities outside the thrift and does not require that holding companies meet specific capital standards.
Officials at both the Federal Reserve and OTS emphasized that the agencies' authority to examine, obtain reports from, establish capital requirements for, and take enforcement actions against the holding company was separate from the authority that primary bank supervisors have. A full assessment of SEC's CSE program is difficult given the newness of the program; however, it appears that for the CSE firms dominated by broker-dealers, SEC is monitoring risks and controls on a consolidated basis and requires that CSEs meet risk-based capital standards at the holding company level. With regard to enforcement actions at the holding company level, however, SEC staff acknowledged that SEC does not have the same ability, under the CSE program, to take such actions as the Federal Reserve or OTS. Nonetheless, they noted that the potential removal of a firm's exemption from the net capital rule and notification of EU regulators that a firm was no longer operating under the CSE program would serve as effective deterrents. SEC is also authorized to impose additional supervisory conditions or increase certain multiplication factors used by the CSE in its capital computation. Management literature on internal controls, enterprisewide risk management, and government accountability suggests that achieving accountability and efficiency requires that agencies clearly state program objectives, link their activities to those objectives, and measure performance relative to those objectives. This literature also recognizes the increased importance of these management activities in the face of substantial change in the external environment or the internal adoption of new "products."
When applied to the consolidated supervision programs at the Federal Reserve, OTS, and SEC, these management approaches call for clearly defined program objectives, agency activities linked to those objectives, and performance measures to determine how well the programs are operating; together, they would contribute to the desired accountability and efficiency of the programs. The importance of these management activities is heightened because all three agencies face substantial changes in the external environment, including rapid growth in the financial sector, greater consolidation of firms leading to larger, more complex firms, and greater linkages among financial sectors and markets. In addition, the Federal Reserve and OTS have made substantial changes in their consolidated supervisory programs, particularly with the CIO program at OTS, and SEC has adopted a program that for the first time has staff providing formal prudential oversight at the consolidated level. Adopting sound management and control activities will help ensure that agencies are accountable for exercising the authority for their consolidated supervision programs and achieve the objectives of consolidated supervision in ways that are effective and efficient. As a result, the regulatory burden would be as low as possible, consistent with maintaining the safety and soundness of financial institutions and markets. The agencies face challenges in devising performance measures for consolidated supervision, including rapid changes in the industry. U.S. financial institutions and their competitors increasingly operate worldwide and engage in a number of businesses. Consequently, the global financial system is highly integrated, and ensuring financial stability is even more important than in the past.
Developing sound measures in such an environment can be difficult, and it is a challenge for agencies to distinguish how much of their work contributes to financial stability, in contrast to other goals such as protecting insured depositories. Further, these objectives are concepts that are not easy to measure. Development and use of appropriate performance measures, however, are critical to efficiently managing the risks that the agencies have in their consolidated supervision programs. Generally we found that the three agencies stated goals for all of their supervision programs broadly or that specific objectives for consolidated supervision were the same as those for their primary supervision programs. As a result, the contributions consolidated supervision programs make to the safety and soundness of financial institutions and markets could not be assessed separately from other agency programs. Clearer objectives specific to the consolidated supervision programs would facilitate linking program activities to those objectives and the authority that the agencies have to conduct consolidated supervision. In addition, clear program objectives would facilitate the development of specific performance measures to measure the contribution of these programs to those objectives as well as broader agency goals. Agencies’ strategic and performance plans sometimes contain objectives for important programs. In its strategic plan, the Federal Reserve identifies objectives for all of its supervision programs: promoting a safe, sound, competitive, and accessible banking system and stable financial markets. However, the only discussion specific to consolidated supervision in the Federal Reserve’s strategic plan relates to how the program complements its central bank functions by providing the Federal Reserve with important knowledge, expertise, relationships, and authority. 
In other statements, Federal Reserve officials have identified a number of potential benefits of consolidated supervision that reflect the changed environment. The then-Chairman of the Federal Reserve Board testified before Congress in 1997 that the knowledge of the financial strength and risk inherent in a consolidated holding company can be critical to protecting an insured subsidiary bank and resolving problems once they arise. In 2006, he noted further that consolidated supervision provides a number of benefits, including protection for insured banks within holding companies, protection for the federal safety net that supports those banks, aiding the detection and prevention of financial crises, and, thus, mitigating the potential for systemic risk in the financial system. In congressional testimony delivered in 2006, a Board official noted that the goals of consolidated supervision are to understand the financial and managerial strengths and risks within the consolidated organization as a whole and to give the Federal Reserve the ability to address significant deficiencies before they pose a danger to the organization’s insured banks and the federal safety net. An official at the New York District Bank identified the goals of consolidated supervision as protecting the safety and soundness of depository institutions in the holding company, promoting the health of the holding company itself, and mitigating systemic risk. In its Bank Holding Company Supervision Manual, the Federal Reserve says that the inspection process is intended to increase the flow of information to the Federal Reserve System concerning the soundness of financial and bank holding companies. The manual goes on to explain how the purpose of bank holding company supervision has evolved since the passage of the Bank Holding Company Act in 1956, whose primary objective was to ensure that bank holding companies did not become engaged in nonfinancial activities. 
According to the manual, an inspection is to be conducted to

1. inform the Board of the nature of the operations and financial condition of each bank holding company and its subsidiaries, including
a. the financial and operational risks within the holding company system that may pose a threat to the safety and soundness of any depository institution subsidiary of such bank holding company, and
b. the systems for monitoring and controlling such financial and operational risks; and

2. monitor compliance by any entity with the provisions of the Bank Holding Company Act or any other federal law that the Board has specific jurisdiction to enforce against the entity, and to monitor compliance with any provisions of federal law governing transactions and relationships between any depository institution subsidiary of a bank holding company and its affiliates.

The Federal Reserve also noted that the objectives of consolidated supervision are discussed in its supervisory guidance on the Framework for Financial Holding Company supervision introduced after GLBA. In the guidance, the Federal Reserve says that the objective of overseeing financial holding companies (particularly those engaged in a broad range of financial activities) is to evaluate, on a consolidated or groupwide basis, the significant risks that exist in a diversified holding company in order to assess how these risks might affect the safety and soundness of depository institution subsidiaries. The Federal Reserve has also developed a quality management program to evaluate its supervision programs overall. Board officials told us that each of the District Banks has established a quality management department whose functions include quality planning, control, and improvement. As part of its quality management program, the Board evaluates and reports on District Banks' supervision function in its operations reviews across the major supervision and support functions.
According to a Board document, each review assesses how well the Reserve Bank carries out its supervisory responsibilities, focusing not only on the effectiveness and efficiency of individual functional areas but also on how well the Officer in Charge of Supervision organizes and allocates departmental resources and facilitates integration among those resources. However, in the three operations review reports we reviewed, the performance of consolidated supervisory activities was not assessed independently from the performance of other supervisory activities. Not clearly establishing specific objectives for its consolidated supervision program, however, potentially lessens the Federal Reserve's ability to ensure that the program provides comprehensive and consistent oversight with minimal regulatory burden. The Federal Reserve has authority for holding company supervision distinct from that for supervision of the insured depository itself. Specific objectives and performance measures would enhance the Federal Reserve's ability to ensure its accountability and the efficiency of its consolidated supervisory activities. OTS consistently identifies the protection of insured depositories as the objective of consolidated supervision. However, like the Federal Reserve, OTS generally does not distinguish between the objectives for holding company supervision and those for primary thrift supervision. In addition, OTS's activities often vary significantly across firms, depending in part on the risk and complexity of the firms. While the varying activities largely reflect the differences among the institutions, a clear link between these activities and the objectives of its consolidated supervision program would enhance OTS's ability to provide effective and consistent oversight with minimal regulatory burden.
OTS identifies several strategic goals in its strategic plan, placing particular emphasis on achieving a safe and sound thrift industry, and its Holding Companies Handbook identifies protection of insured thrifts as an objective of holding company supervision; however, these documents distinguish the objectives of the holding company supervision program from those of primary thrift supervision in only one area. The strategic plan says that one objective of OTS's cross-border discussions is to receive additional equivalency determinations under EU directives, including the FCD, and its handbook focuses on international standards in its discussion of changes in its conglomerate oversight. In its strategic plan, OTS has five performance measures for supervision, including the percentage of thrifts that are well-capitalized and the percentage of safety and soundness examinations started as scheduled, but these largely relate directly to OTS's authority as a primary bank supervisor rather than as a holding company supervisor. Because OTS is almost always both the lead bank supervisor and the holding company supervisor for the holding companies it supervises, accountability for its supervision of thrift institutions is clear. However, for those thrift holding companies whose primary business activities are not banking, accountability for parts of the institution may still not be clear. Further, whether an agency is providing consistent and efficient oversight with minimal regulatory burden for all firms is still at issue. For firms overseen by CIO, OTS devotes substantial resources to the oversight of risks and controls consolidated at the highest financial holding company level and assesses capital at that level. However, for some other firms that share characteristics with the CIO-supervised conglomerates, OTS devotes relatively fewer resources to oversight at the holding company level.
For these firms, consistent with its standard CORE program, OTS looks to see that the holding company is not relying on the thrift to pay off debt or expenses and then limits its oversight to the part of the firm that might directly place the thrift at risk. Similarly, SEC identifies a number of objectives and performance measures for the agency in its strategic plan, annual performance reports, and annual budget documents. However, none of these is specific to the consolidated supervision program. Instead, these documents provide goals and performance measures for other areas such as enforcement. Enforcing compliance with federal securities laws is one of SEC's strategic goals, and it measures performance in that area by reporting the number of enforcement cases successfully resolved in its 2005 Performance and Accountability Report. The only mention of the new CSE program in these documents is a listing as a "milestone" for Market Regulation in SEC's 2004 Performance and Accountability Report. SEC's 2006 and 2007 budget requests note that OCIE will examine CSEs under the strategic goal of enforcing compliance with federal securities laws. The 2006 budget request also includes the need to modify and interpret the rules for CSEs to maintain consistency with the Basel Standards, in light of amendments to the Basel Capital Accord, to meet the goal of sustaining an effective and flexible regulatory environment. On the Web site created by Market Regulation in June 2006, SEC says that the aim of the CSE program is to reduce the likelihood that weakness in the holding company or an unregulated affiliate will endanger a regulated entity or the broader financial system. In addition, SEC officials have said that the purpose of the program was to provide consolidated oversight for firms required to meet the EU's FCD. However, CSE oversight activities are not always linked to these aims, and the extent to which these activities contribute to the aims is not measured.
SEC officials have told us they have developed a draft that would establish program objectives, link activities to these objectives, and establish criteria for assessing the performance of the CSE program. Because the U.S. regulatory structure assigns responsibility for financial supervision to multiple agencies, and a single firm may be subject to consolidated and primary or functional supervision by different agencies, not having objectives and performance measures for consolidated supervision programs increases the difficulty of ensuring effective, efficient, and consistent supervision with minimal regulatory burden and ensuring that each agency is appropriately accountable for its activities. The potential for duplication was demonstrated in three financial holding companies where we discussed Federal Reserve oversight with Federal Reserve and OCC examiners and with bank officials. Based on our interviews with OCC examiners, we noted some duplication in Federal Reserve and OCC activities, despite efforts to coordinate supervision by the two agencies. In particular, since these institutions manage some risks on an enterprisewide basis, OCC needed to assess consolidated risk management or other activities outside the national bank to assess the banks’ risks. Some OCC officials said that the consolidated supervisor structure created by GLBA was primarily designed for bank holding companies with insurance subsidiaries, but this structure is not prevalent. The primary value of consolidated supervision, they said, is to prevent gaps in supervision, but the benefit for firms that hold primarily bank assets is unclear. Federal Reserve officials, on the other hand, noted that because OCC is a bank supervisor, and not a consolidated supervisor, it does not have the same authority as the Federal Reserve to conduct examinations of, obtain reports from, establish capital requirements for, or take enforcement action against a bank holding company or its nonbank subsidiaries. 
With more clearly articulated objectives for consolidated supervision that distinguish this authority from the primary supervisor's authority, linking consolidated supervisory activities to those goals and measuring performance would clarify accountability and facilitate greater reliance by each agency on the other's work, lessening regulatory burden. According to Federal Reserve Board officials, the Board takes a number of actions to ensure that the large banking organizations it oversees are treated similarly in its consolidated supervision program. These include a review of LCBO supervisory plans and other elements of the supervisory process as well as some centralized staffing. However, we found that because of the autonomy of the District Banks and the lack of detailed guidance, the four District Banks in our study differed in the ways they identified examination or supervisory findings, prioritized them, and communicated these findings to firm management. For example, the Federal Reserve Bank of Atlanta more clearly defines different types of findings, provides criteria to examiners for determining and prioritizing findings, and uses this framework to communicate findings to firm management. At some other District Banks we visited, examiners did not provide us with explicit criteria for determining and prioritizing findings. As a result, it is more difficult to ensure that bank holding companies operating in different Federal Reserve districts are subject to consistent oversight and receive consistent supervisory feedback and guidance. To mitigate this potential for inconsistency, as we noted above, for large, complex institutions, committees such as the LCBO management group review supervisory findings. In addition, a Board official said that the Federal Reserve was considering implementing Atlanta's framework across the system.
Without objectives and performance measures specific to the consolidated supervision program, however, the Federal Reserve is less able to gauge the value of the Federal Reserve Bank of Atlanta's more specific guidance to its examination staff. In part because OTS oversees a diverse set of firms and has been changing some of its consolidated supervisory activities, consistency is a difficult challenge. An OTS official told us that OTS created the CIO in its headquarters to promote more systematic and consistent supervision for certain holding companies. In addition, OTS has issued guidance to help standardize policies and procedures related to providing continuous supervision. However, the criteria for determining whether a firm is overseen by CIO with continuous comprehensive consolidated supervision or remains in the regional group, where it receives more limited oversight under the CORE program, are not clear. In a speech in November 2006, OTS's Director identified seven internationally active conglomerates OTS oversees at the holding company level. Of these, three are overseen by CIO, one receives oversight under the standard CORE program, and two others are overseen regionally but are receiving greater scrutiny than in the past. The three firms receiving comprehensive consolidated oversight by CIO are the firms that have designated OTS as their consolidated supervisor for meeting the EU equivalency requirements, while three of the others have opted to become CSEs. While the small size of SEC's CSE program limits opportunities for treating firms differently, the lack of more complete written guidance and the decision to keep guidance confidential limit the ability of industry participants, analysts, and policymakers to determine whether firms are being treated consistently. In addition, Market Regulation staff said that more complete written guidance would reduce the risks of inconsistency should staff turnover occur.
We also found that SEC's lack of program objectives, performance measures, and written public guidance led to firms' receiving inconsistent feedback from SEC's divisions and offices. In the CSE application examinations we reviewed, OCIE conducted highly detailed audits that resulted in many findings related to the firms' documentation of compliance with rules and requirements, while Market Regulation looked broadly at the risk management of the firm. OCIE shared its findings with the firms, but Market Regulation determined that many of them did not meet its criteria for materiality and did not include them in its summary memorandums to the SEC Commissioners recommending approval of the applications. However, either a full or a summary OCIE examination report was included as an appendix to these memorandums. Market Regulation staff said they drew on their own knowledge in deciding which findings were material and explained that a finding is material when the issue threatens the viability of the holding company. Further, Market Regulation staff told us that because they rely on the openness of firms' management in their ongoing reviews of CSEs' risk management, they do not always share supervisory results with OCIE staff. Market Regulation and OCIE staff stated that they are working on an agreement to facilitate communication between the offices. Finally, even if each agency provided consistent treatment and feedback to firms, there would be no assurance that consistent consolidated supervision would be provided across the agencies. We have noted before that, over time, firms in different sectors increasingly face similar risks and compete to meet similar customer needs. Thus, competitive imbalances could be created by different regulatory regimes, including holding company supervision, both here and abroad.
Providing consistent, efficient, and effective oversight of individual financial institutions has become more difficult as institutions increasingly manage their more complex operations on an enterprisewide basis, often under the oversight of multiple federal financial supervisors. And providing efficient and effective oversight across the financial sector has become more challenging as institutions in different sectors and countries increasingly take on similar risks that may pose issues for a broad swath of the developed world's financial institutions in a crisis. The industry's increased concentration and convergence in product offerings, paired with a regulatory structure with multiple agencies, means that different large financial services firms, offering similar products, may be subject to supervision by different agencies. This leads to risks that the agencies may provide inconsistent supervision and regulation not warranted by differences in the regulated institutions. Supervisors in different agencies engaged in the oversight of a single institution take some steps to share information, avoid duplication, and jointly conduct some examination activities. However, these agencies did not consistently and systematically collaborate in these efforts, thus limiting the efficiency and effectiveness of consolidated supervision. For the three agencies engaged in consolidated supervision, changes in the firms they oversee have led to the firms facing similar risks and competing with each other across industry segments. As a result, it is essential for consolidated supervisors to systematically collaborate so that competitive imbalances are not created.
In a system that is characterized by multiple supervisory agencies providing supervision for a single holding company and its subsidiaries as well as several agencies providing consolidated supervision for firms that provide similar services, collaboration among the supervisory agencies is essential for ensuring that the supervision is effective, efficient, and consistent. Through a review of government programs and the literature on effective collaboration, we have identified some key collaborative elements, which are listed in table 6. These elements stress the need to ensure, to the extent possible, that the agencies are working toward a common goal, that they minimize resources expended by leveraging resources and establishing compatible policies and procedures, and that they establish accountability for various aspects of these programs and for their efforts to collaborate. We have noted in our previous work that running throughout these elements are a number of factors, including leadership, trust, and organizational culture, that are necessary for a collaborative working relationship. We have also noted that agencies may encounter a range of barriers when they attempt to collaborate, including missions that are not mutually reinforcing, or may conflict, and agencies’ concerns about protecting jurisdiction over missions and control over resources. As we have noted in the past, the U.S. financial regulatory agencies meet in a number of venues to improve coordination. These venues include the President’s Working Group, the Federal Financial Institutions Examination Council, and the Financial and Banking Information Infrastructure Committee. In addition, the agencies told us they have frequent informal contact with each other. These contacts address several of the key elements of collaboration identified above, but opportunities remain to enhance collaboration in response to the changes in the financial services industry. 
These opportunities exist both for oversight of individual firms, where agencies share supervisory responsibility, and for collaboration among the consolidated supervisors to ensure consistent approaches to common risks. Enterprisewide risk management in large financial firms has complicated the task of regulating them, since agency jurisdiction is defined by legal entities. When an agency oversees both the ultimate holding company and its major bank or broker-dealer subsidiary, examination activities tend to be well integrated. When consolidated and primary bank or functional supervisors of a firm’s major subsidiaries are from different agencies, they take some actions to work together and share information. However, we found instances of duplication and regulatory gaps that could be minimized through more systematic collaboration. Large, complex firms are increasingly managing themselves on an enterprisewide basis, further blurring the distinctions between regulated subsidiaries and their holding companies. Many of the banking and securities firms included in our review were managing by business lines that cut across legal entities, especially those institutions engaged primarily in banking or securities. At least three of the companies in our review primarily engaged in banking were simplifying their corporate structures, either by reducing the number of bank charters or by bringing activities that had been outside an insured depository into the depository or its subsidiaries. Some of these entities had been unregulated by a primary federal bank or functional supervisor, and thus had been the primary responsibility of the holding company supervisor, or were regulated by a primary bank supervisor different from the supervisor overseeing the lead bank. 
Finally, we found that several firms that are CSEs, thrift holding companies, or both were conducting extensive banking operations out of a structure that includes an ILC and a thrift, and that these entities, which are overseen by different primary bank supervisors, might not be receiving similar oversight from a holding company perspective. As a result of changes in corporate structures and management practices, there are increasing opportunities for collaboration between supervisors with safety and soundness objectives at the subsidiary level and holding company supervisors. For example, primary bank and functional supervisors involved in safety and soundness supervision need to review the organizational structure of the holding company and evaluate increasingly centralized risk management activities, and the controls around those activities, as they may apply to the regulated subsidiary; the consolidated supervisor, in turn, is responsible for understanding the organizational structure and monitoring risks and controls at the holding company level across the entire organization. When the large enterprises we reviewed had the same agency overseeing their ultimate holding company and its lead bank (or its broker-dealer, in the case of CSEs), supervisory activities tended to be well integrated. For the financial holding company dominated by a state member bank and the thrift holding company dominated by a federal thrift institution, we found that the oversight of the dominant financial subsidiary and the ultimate holding company was conducted jointly, with the same examination team, a single planning document, and the same timeline. In the case of the CSEs dominated by a broker-dealer, SEC supervises both the holding company and the broker-dealer; NERO completed targeted examinations of one firm in 2006 on an integrated basis. 
The relationship between the consolidated supervisor and other agencies that serve as primary or functional supervisors for subsidiaries is governed by law, which does provide for some information exchange among the agencies. Under the regulatory structure established by GLBA, the Federal Reserve and OTS are to rely on the primary supervisors of bank subsidiaries in holding companies (the appropriate federal and state supervisory authorities) and the appropriate supervisors of nonbank subsidiaries that either are functionally regulated or are determined by the consolidated regulator to be comprehensively supervised by a federal or state authority. Consistent with this scheme, GLBA limits the circumstances under which the Federal Reserve Board and OTS may exercise their examination and monitoring authorities with respect to functionally regulated subsidiaries and depository institutions that are not subject to primary supervision by the Board or OTS. GLBA also provides that the consolidated supervisor is to rely on reports that holding companies and their subsidiaries are required to submit to other regulators and on examination reports made by functional regulators, unless circumstances described in the act exist. Among other things, GLBA specifically directs the Federal Reserve and OTS, to the fullest extent possible, to use the reports of examinations of depository institutions made by the appropriate federal and state depository institution supervisory authority. Also, consolidated supervisors are directed to rely, to the extent possible, on the reports of examination made of a broker-dealer, investment adviser, or insurance company by their functional regulators and defer to the functional regulators’ examinations of these entities. GLBA also provides for the sharing of information between federal consolidated supervisors and bank supervisors on the one hand and state insurance regulators on the other hand. 
The act authorizes these regulators to share information pertaining to the entities they supervise within a holding company. For example, with respect to the holding company, the act authorizes the Board to share information regarding the financial condition, risk management policies, and operations of the holding company and any transaction or relationship between an insurance company and any affiliated depository institution. The consolidated supervisor also may provide the insurance regulator any other information necessary or appropriate to permit the state insurance regulator to administer and enforce applicable state insurance laws. Consistent with GLBA, consolidated supervisors have negotiated MOUs or other formal information-sharing agreements with functional supervisors and were reviewing reports from them. The supervisors had also entered into MOUs with relevant foreign supervisors. For example, OTS had negotiated MOUs with 48 state insurance departments, 7 foreign supervisors, and the EU. Similarly, the Federal Reserve has a number of MOUs with regulators. One provides for SEC to share information concerning broker-dealer examinations for broker-dealers owned by financial holding companies. Most MOUs include agreements to share information on an informal basis. For example, the Federal Reserve and SEC have a “pilot program” that allows the Federal Reserve to share information on a particular holding company with SEC staff on an ongoing basis. Examination information from the functional supervisors was being provided to the consolidated supervisor, and to some extent the consolidated supervisor was relying on that information in planning and reporting. Supervisors do communicate when developing holding company supervisory programs. For example, staff at SEC, especially in OCIE, noted that they communicated regularly with supervisory management at the Federal Reserve Bank of New York when setting up their CSE program. 
In addition, the agencies gave us examples of occasions when they communicated with regard to specific issues, and the Federal Reserve and SEC have taken opportunities to learn from the firms under each other’s jurisdiction. SEC said the Federal Reserve had asked to meet with some of the CSEs regarding peer valuation, and SEC had facilitated such meetings. Following the enactment of GLBA, OCC and the Federal Reserve agreed on how they would coordinate in the supervision of LCBOs. While some duplication remains, we found examples of that agreement being implemented. For example, OCC and the Federal Reserve share supervisory planning documents for LCBOs when OCC is the primary bank supervisor for the lead bank in the bank holding company. As a result, the Federal Reserve is able to factor OCC’s planned work into its supervisory plan process. In addition, we found that OCC and Federal Reserve examiners at some institutions shared information informally over the course of the examination cycle, allowing them to conduct joint or shared target examination activities that might not have been part of the original plan. OCC examiners told us they are now also receiving information about the Federal Reserve’s horizontal reviews in a more timely manner and can thus make better decisions about the extent to which they want to participate in those reviews. Federal Reserve officials said that when OCC has conducted examination activities related to horizontal reviews, they rely on OCC’s information. OCC and Federal Reserve examiners also told us that when they disagree on examination findings, they attempt to work out those disagreements before presenting conflicting information to management. Finally, OCC and Federal Reserve examiners jointly attend meetings with management and the boards of directors of the financial institutions where they have primary and consolidated supervisory responsibilities. They invite other relevant bank examiners to attend some of these meetings as well. 
Finally, the Federal Reserve provides OCC and FDIC full online access to its supervisory database, which contains examination reports and other supervisory information for bank holding companies. The supervision of one firm, headquartered abroad but with significant U.S. operations, including substantial securities activities, is an example of coordination among the Federal Reserve, which is the holding company supervisor for the firm’s U.S. operations; two foreign supervisory agencies involved in the oversight of the ultimate holding company and of operations in their countries; and SEC, which is the functional supervisor of the firm’s most important U.S. operations. The Federal Reserve meets formally with supervisors from the other countries twice a year to coordinate activities. A representative of the firm said the three agencies meet jointly with representatives of the firm prior to developing a supervisory plan. The lead examiner at the Federal Reserve said that including representatives from other governments on examination teams makes it easier to access information across international borders. While SEC is not included in these meetings, the Federal Reserve and SEC agreed to a “pilot program” for the Federal Reserve to regularly share holding company information with the OCIE staff who oversee the firm’s U.S. broker-dealers and investment advisers. Collectively, these efforts to coordinate address several of the key elements of collaboration identified in table 6, above. In particular, the agreements among the supervisors provide a basis for joint strategies, for agreements on roles and responsibilities, and for operating across agency boundaries. Joint examination activities between the Federal Reserve and OCC, for instance, address these elements and are a way to leverage resources. Similarly, coordination between SEC offices and the Federal Reserve promotes efforts to learn from each other despite agency boundaries. 
Opportunities remain for the agencies to collaborate more systematically, however, and thus enhance their ability to provide effective and consistent oversight when they share responsibility for a holding company and its subsidiaries. More consistent collaboration between OCC as the lead bank examiner and the Federal Reserve as the holding company supervisor, for instance, would allow the agencies to take advantage of opportunities to supervise some large, complex banking organizations as effectively and efficiently as possible. Conducting some examinations and meetings on a joint basis (the solution adopted by the Federal Reserve and OCC) is a positive step but does not ensure that the agencies develop consistent mechanisms to evaluate the results of joint examinations or to judge the extent to which such examinations or other approaches lessen duplication, promote consistency, or otherwise enable more efficient supervision. In addition, we found that coordination between these agencies did not always run smoothly. OCC examiners at some of the institutions we reviewed and officials at headquarters told us that they see some coordination issues, especially with regard to the horizontal examinations the Federal Reserve conducts across some systemically important institutions. OCC examiners at one LCBO said that some cases could lead to the Federal Reserve and OCC providing inconsistent feedback to the firm. They also noted that when the Federal Reserve collects information for these examinations, it does not always rely on OCC for that information even when OCC is the primary bank examiner of the lead bank. Finally, while OCC and the Federal Reserve follow the procedures they have laid out for resolving differences, the potential still exists for the two agencies to give conflicting information to management. 
We found one firm that had initially received conflicting information from the Federal Reserve, its consolidated supervisor, and OCC, its primary bank supervisor, about what constitutes sufficient business continuity provisions. While the holding company supervisor for thrift holding companies (OTS) or CSEs (SEC) is often the supervisor of the dominant regulated subsidiary, opportunities to reduce regulatory burden and improve accountability through better collaboration continue to exist. While an OTS official told us that one of the main responsibilities of a holding company supervisor is to improve efficiency by serving as a source of information about the holding company for the functional supervisors, this opportunity to leverage information is not fully utilized. FDIC examiners, for instance, could collect information on the organizational structure of the holding company from OTS, but obtained this information from bank officials when examining an ILC that was part of a thrift holding company. In other instances, OTS and the Federal Reserve have taken some steps to work collaboratively with other supervisors in supervising a particular firm, but the results are incomplete. A decision by the United Kingdom’s Financial Services Authority to include the German and French regulators in a meeting with OTS led OTS to call a November 2005 meeting that included a broader range of supervisors. OTS officials said they invited state insurance, FDIC, and SEC supervisors in the United States. Officials at the company told us, however, that FDIC did not attend the 2005 meeting because the meeting had been arranged hastily. OTS held a similar meeting in November 2006, and FDIC staff attended this meeting; SEC, however, did not attend, and senior staff at Market Regulation and OCIE told us they were unaware that SEC had been invited. 
Similarly, as noted above, the Federal Reserve has sometimes engaged in integrated examination activities with foreign supervisors, but these did not consistently include other relevant U.S. supervisors. The agencies did not always have consistent approaches to minimizing regulatory burden and improving accountability through collaboration. For example, as noted above, the Federal Reserve mitigates the challenges posed by its decentralized structure by creating processes such as reviewing the plans and findings for LCBOs and centralizing the staffing systems. However, these processes may make collaboration more difficult. An OCC official told us that the complex review process at the Federal Reserve sometimes kept OCC from providing formal results to management in a timely manner when the two agencies conducted joint examinations. Further, the planning cycles are not always consistent across the agencies. While OCC and the Federal Reserve considered each other’s schedules or examination plans when developing their plans for bank holding companies where OCC is the lead bank supervisor, not all agencies do so. For example, at the institutions included in our study, there was little or no indication that FDIC had coordinated the examinations of ILCs with the relevant holding company supervisors. Finally, bank regulators noted another barrier to full collaboration: the board of the bank is legally liable for the safety and soundness of the bank regardless of the status of the holding company. FDIC officials specifically noted that the interests of the bank’s management, including its legal responsibilities, and those of the holding company might diverge when one or the other is in danger of failing. Similarly, at such times the interests of the holding company supervisor and those of the primary and secondary bank supervisors might diverge as well. 
Federal Reserve officials noted, however, that risks would be lessened to the extent that the objectives of the consolidated supervisor and the primary bank supervisor are the same (e.g., to preserve the safety and soundness of insured depository institutions) and the consolidated supervisor takes action to prevent the holding company from taking actions that are deleterious to its insured depository institutions. Collaboration between the banking agencies and SEC is hindered by cultural differences and concerns about sharing information. Bank supervisory officials noted that they were sometimes concerned about sharing information with SEC because of SEC’s compliance culture, as opposed to the prudential supervision culture of the banking agencies. One official said the Federal Reserve does not want to be perceived as a fact finder for SEC when it comes to consolidated financial information. If it were perceived in that way, he said, management at financial holding companies might be less willing to share certain types of confidential information with their holding company supervisors. SEC and Federal Reserve officials noted that SEC may not have the same formal legal safeguards as bank supervisors have with regard to the confidentiality of the information. The impediments to sharing information at SEC are evident internally as well: as discussed above, Market Regulation has not always shared firm risk management information with OCIE. Oversight of complex organizations that are primarily insurance companies poses special collaborative challenges. As noted above, GLBA directed consolidated supervisors to take certain actions to promote the exchange of information between consolidated supervisors and relevant state insurance supervisors, and we found that MOUs had been negotiated and some communication was taking place. The states have also taken some actions to oversee insurance companies on a group basis. 
According to NAIC, most states have adopted a version of the NAIC model laws concerning holding company supervision, and NAIC has developed a framework for holding company supervision. Within that framework, which promotes the assessment of risks and controls at the holding company level, lead state supervisors conduct the examination. These examiners are advised to identify and communicate with the relevant functional supervisors for the holding company. The framework also recommends that insurance examiners notify the Federal Reserve if the institution is a financial holding company. However, only one major insurer is a financial holding company, while a significant minority of the large, complex thrift holding companies have significant insurance operations; the guidance does not recommend that examiners contact OTS as a holding company supervisor. NAIC officials said that while they participated in the EU evaluation process of the U.S. consolidated supervisory framework, they do not believe that insurance supervisors have been involved in the equivalency determinations for the specific companies. Because consolidated supervisors and the supervisors of the regulated entities in these complex holding companies have not consistently adopted practices associated with systematic collaboration, U.S. supervisory agencies may be missing opportunities to better ensure effective, efficient supervision of individual financial services firms. The agencies also have not developed methods to evaluate the joint efforts that they do have under way, thus hindering their efforts to avoid duplication. Further, since they have not consistently established compatible policies and supervisory approaches, the agencies have missed opportunities to make sure they are treating firms consistently. 
While the three consolidated supervisors have some mechanisms in place to share information and supervisory approaches, opportunities remain for them to collaborate more systematically to promote greater consistency, particularly in the oversight of large, complex firms. While these firms’ product offerings generally are similar, ensuring regulatory consistency remains an ongoing challenge. In particular, OTS and SEC have overlapping responsibilities at some CSEs that own or control thrifts. Further, the agencies do not consistently work collaboratively to define and articulate common goals, such as identifying regulatory best practices for consolidated supervision or identifying emerging risks that would confront all financial services firms. Three of the five securities firms that have obtained CSE status are also thrift holding companies. As a result, these firms have two consolidated supervisors, and no mechanism has been developed to limit the potential for duplicative activities and conflicting findings or to assign accountability for various supervisory activities. In the preamble to the final CSE rule, SEC acknowledged the potential for duplication and conflict for some firms. The rule reduces this potential by, among other things, providing that SEC will rely on the Federal Reserve’s consolidated supervision of financial holding companies and on consolidated supervision by other holding company supervisors under circumstances SEC determines to be appropriate. To date, however, SEC has not determined that consolidated supervision of thrift holding companies by OTS satisfies SEC’s supervisory concerns with respect to CSEs that are thrift holding companies. SEC says that where both it and OTS are the consolidated supervisors, the firms are primarily securities firms with small thrift subsidiaries. 
In addition, SEC examiners told us that the major risks for these firms are outside the thrift and other banking subsidiaries and that OTS had not been examining these activities. When thrifts are included in bank holding companies, the law dictates that the Federal Reserve is solely responsible for supervision at the holding company level. However, no such mechanism exists for firms that are thrift holding companies and have opted to become CSEs, and OTS, which notes the growing importance of the thrift in two of the three institutions, has not chosen to defer to SEC’s consolidated supervision. SEC and OTS officials recognize this issue but have not yet met to resolve it. Supervisors in all three agencies have recognized the importance of allocating scarce resources to the areas of greatest risk and have adopted some risk-based supervisory policies and procedures. However, the agencies have not consistently adopted mechanisms to look at risk collaboratively, recognizing that financial risks are not neatly aligned with agency jurisdiction. The extent to which these risks cut across regulatory boundaries was highlighted in our work on Long-Term Capital Management (LTCM), a large hedge fund. Federal financial regulators did not identify the extent of weaknesses in banks’ and securities and futures firms’ risk management practices until after LTCM’s near collapse. Until that point, they said, they had believed that creditors and counterparties were appropriately constraining hedge funds’ leverage and risk taking. However, examinations done after LTCM’s near collapse revealed weaknesses in credit risk management by banking and securities firms that had allowed LTCM to become too large and leveraged. The existing regulatory approach, which focuses on the condition of individual institutions, did not sufficiently consider systemic threats that can arise from nonregulated entities such as LTCM. 
Similarly, information periodically received from LTCM and its creditors and counterparties did not reveal the potential threat posed by LTCM. However, the agencies did not have a strategy to collaboratively identify and resolve problems such as this, delaying the identification of shared issues and work toward their resolution. In addition, there are limited mechanisms to allow agencies to share and leverage resources when one agency has unique capabilities or lacks specialized resources. To some extent, agencies share expertise and resources when they jointly conduct examinations or when they meet periodically to share information. However, no mechanism exists for sharing expertise in other situations. This is important for OTS, which is characterized by a disparity between the agency’s small size and the diversity of the firms it oversees. While OTS recognizes its need for staff with specialized skills to oversee some of these firms, the small number of firms in some categories, combined with the small overall size of the agency, limits its ability to have any depth in those skill areas. For example, while OTS oversees a number of holding companies that are primarily in the insurance business, it has only one specialist in this area. At the same time, the Federal Reserve has a number of insurance specialists but oversees only one firm that is primarily in the insurance business. However, there is no systematic process for sharing insurance expertise between the two agencies. As financial institutions have grown, become more complex and internationally active, and adopted enterprisewide risk management practices, consolidated supervision has become more important. For certain large, complex firms, U.S. supervisors have adopted or are adopting some of the “best practices” associated with consolidated supervision, as evidenced in part by the determination of equivalence by EU supervisors. However, U.S. 
supervisors could perform consolidated supervision more efficiently and effectively by adopting management practices in the areas of performance management and collaboration. These practices are particularly important in helping to ensure consistent treatment of financial services holding companies and in clearly defining accountability for providing consolidated supervision. Consistent rules, consistently applied, and clear accountability are important because of the decentralized internal structures the agencies use to develop and implement policies related to consolidated supervision and the generally fragmented structure of the U.S. regulatory system. The first step in any effectively managed organization is to have well-articulated objectives, strategies, and performance measures. While these agencies have developed and largely implemented policies or strategies for consolidated supervision, these strategies could be improved through the development of better-articulated, specific objectives and measurable outcomes. Defining specific, measurable objectives for the consolidated supervision programs is an inherently difficult task for financial services supervisors but is a key component of assessing how consolidated supervision adds to the functional supervision of banks, thrifts, broker-dealers, and insurers. Better-articulated objectives will also help to ensure that supervisors treat firms equitably and that firms receive consistent feedback. SEC has developed a draft statement of objectives and performance measures for the CSE program intended to facilitate that assessment. If approved, this would be particularly important because differences in orientation and policies, as well as communication weaknesses, among different organizational components of SEC exacerbate the difficulty of taking on the new responsibilities inherent in the CSE program. 
Without formal guidance that delineates the responsibilities and identifies strategies and performance measures for the divisions and offices, resources will not be used as effectively as they might be. Another key facet of effectively managed organizations or systems is the degree to which the various components collaborate and integrate their processes. While the agencies do exchange information, they have opportunities to improve collaboration. We have noted in the past that it is difficult to collaborate within the fragmented U.S. regulatory system and have recommended that Congress modernize or consolidate the regulatory system. However, under the current system, the agencies have opportunities to collaborate systematically and thus ensure that institutions operating under the oversight of multiple financial supervisors receive consistent guidance and face minimal supervisory burden. The agencies have taken some steps, particularly in the case of some specific holding companies, to work more collaboratively and thus ensure consistent supervisory treatment. These steps include joint supervisory meetings that include foreign supervisors and that are aimed at developing common examination approaches. We are recommending that the Federal Reserve, Office of Thrift Supervision, and Securities and Exchange Commission take the following seven actions, as appropriate: To better assess their agencies’ achievements as consolidated supervisors, the Chairman of the Federal Reserve System’s Board of Governors, the Director of the Office of Thrift Supervision, and the Chairman of the Securities and Exchange Commission should direct their staffs to develop program objectives and performance measures that are specific to their consolidated supervision programs. 
To ensure they are promoting consistency with primary bank and functional supervisors and are avoiding duplicating the efforts of these supervisors, the Chairman of the Federal Reserve System’s Board of Governors, the Director of the Office of Thrift Supervision, and the Chairman of the Securities and Exchange Commission should also direct their staffs to identify additional ways to more effectively collaborate with primary bank and functional supervisors. Some of the ways they might consider accomplishing this include ensuring common understanding of how the respective roles and responsibilities of primary bank and functional supervisors and of consolidated supervisors are being applied and defined in decisions regarding the examination and supervision of institutions; and developing appropriate mechanisms to monitor, evaluate, and report jointly on results. To take advantage of the opportunities to promote better accountability and limit the potential for duplication and regulatory gaps, the Chairman of the Federal Reserve System’s Board of Governors, the Director of the Office of Thrift Supervision, and the Chairman of the Securities and Exchange Commission should foster more systematic collaboration among their agencies to promote supervisory consistency, particularly for firms that provide similar services. In particular, the Chairman of the Securities and Exchange Commission and the Director of the Office of Thrift Supervision should jointly clarify accountability for the supervision of the consolidated supervised entities that are also thrift holding companies and work to reduce the potential for duplication. To address certain practices that are specific to an agency, we recommend the following: the Chairman of the Securities and Exchange Commission direct SEC staff to develop and publicly release explicit written guidance for supervision of Consolidated Supervised Entities. 
This guidance should clarify the respective responsibilities and activities of the Office of Compliance Inspections and Examinations and the Division of Market Regulation in administering the Consolidated Supervised Entity program. We also recommend that the Director of the Office of Thrift Supervision direct OTS staff to revise the CORE supervisory framework to focus more explicitly and transparently on risk management and controls, so that it more effectively captures evolving standards for consolidated supervision, is more consistent with the activities of other supervisory agencies, and facilitates consistent treatment of OTS’s diverse population of holding companies. We further recommend that the Chairman of the Federal Reserve direct Federal Reserve Board and District Bank staff to look for ways to further reduce operational differences in bank supervision among the District Banks, such as by issuing additional guidance on developing and communicating examination findings. We requested comments on a draft of this report from the Federal Reserve, OTS, and SEC. We received written comments from the Chairman of the Board of Governors of the Federal Reserve System, the Director of the Office of Thrift Supervision, and the Chairman of the Securities and Exchange Commission. Their letters are summarized below and reprinted in appendixes III, IV, and V, respectively. The Chairman of the Board of Governors of the Federal Reserve System noted that the Federal Reserve’s program for consolidated supervision continues to evolve in light of changes in the structure, activities, risks, and risk management techniques of the banking industry.
He concurred with the importance of clear and consistent objectives for each supervisory program and accurate performance measures, and noted that the Federal Reserve has already charged its management committees, composed of Board and Reserve Bank officials, to further define and implement more specific objectives and performance measures for each of its supervision business lines. He also agreed that it was appropriate for the Federal Reserve to consider whether additional opportunities exist to promote effective collaboration among the Federal financial supervisory agencies and that the Federal Reserve would continue to work to ensure that the agencies share information and avoid duplication of supervisory effort. The Director of the Office of Thrift Supervision agreed with our characterization of how OTS’s consolidated supervision program, especially for large, complex firms operating on a global basis, has evolved in recent years. OTS wrote that initiatives for these firms (described in the report) will ensure that it implements the principles of accountability and supervisory collaboration recommended in the report. With regard to consolidated supervision of other firms, OTS wrote that it implements its holding company authority in a broader and deeper manner than indicated in our draft report. In response to our recommendation that OTS’s CORE framework more explicitly focus on risk management, the Director reiterated that the CORE approach is explicitly designed to understand, analyze, and evaluate the firm’s risk appetite and its approach to risk management; however, he said that OTS is considering substantive revisions to the framework to further sharpen this focus on risk. We agree that revisions to sharpen the CORE framework’s focus on risk are appropriate.
We also agree that OTS’s holding company authority is broad and deep and that OTS has sought to understand the risk management approaches of the holding companies it supervises, but we continue to believe that OTS should focus more explicitly and transparently on risk management and controls to ensure it provides consistent treatment of its holding companies. The Director also wrote that the report correctly points out that OTS and SEC conduct consolidated supervision activities in some of the same firms. Further, he wrote that the report cites views of SEC staff that are incorrect: specifically, that the firms in question have small thrifts and that the major risks for these firms are outside the thrift and other banking subsidiaries. The Director wrote that the regulated thrift institutions are sizeable and significant in at least two of these firms, with assets of more than $14 billion and $19 billion, respectively, in 2006, and that OTS’s reviews thoroughly evaluate holding companies and their risks on a consolidated basis. Our report discussed the overlapping responsibilities of OTS and SEC with regard to several CSEs that, because of their ownership of thrifts, are also thrift holding companies. We did not offer a judgment as to which agency should appropriately have the primary responsibility, but we did recommend that the Director and the Chairman of the Securities and Exchange Commission clarify accountability for such supervisory responsibility. The Director of OTS said that he intends to meet with the Chairman of SEC to discuss this issue. In response to our recommendation that the agencies identify additional ways to collaborate, the Director wrote that the differences between the holding companies overseen by the Federal Reserve and OTS would make it difficult to achieve perfect consistency but that OTS would continue to seek ways to align its process with best regulatory practice.
In response to our recommendation that the agencies foster more systematic collaboration, he wrote that OTS remains committed to an open and inclusive approach and is willing to work with relevant supervisors to ensure there are no gaps in the review of firms subject to consolidated supervision. Finally, responding to our recommendation for program objectives and performance measures that are specific to consolidated supervision, the Director agreed that clear objectives and performance measures greatly assist in evaluating the success of any supervisory program, and that the agency will continue to assess ways to ensure the program is focused, disciplined, and equal to the task of holding company supervision. The Chairman of the Securities and Exchange Commission wrote that SEC recognized that the establishment of a prudential consolidated supervision program for investment bank holding companies represents a significant expansion of the Commission’s activities and responsibilities. The Chairman further wrote that SEC had built a prudential regime that is generally consistent with the oversight that is provided to bank holding companies but that SEC also takes into account the different risk profiles and business mixes that distinguish investment bank holding companies from bank holding companies. In response to our recommendation regarding coordination within SEC, the Chairman, with the unanimous support of his fellow Commissioners, subsequently wrote that he is transferring the responsibilities for on-site testing of CSE holding company controls to the Division of Market Regulation so that the expertise related to the prudential supervision of securities firms will be concentrated there.
In addition, the Chairman wrote that he will allocate additional positions to the Division of Market Regulation to carry out its increased responsibilities, and he has directed staff there to provide greater transparency with regard to the aims and methods of the program by posting additional information about its components on SEC’s Web site. We also received separate technical comments on the draft report from the staffs of the Federal Reserve and SEC, as well as from FDIC and OCC; we have incorporated their comments into the report, as appropriate. We are sending copies of this report to other interested congressional committees and to the Chairman of the Board of Governors of the Federal Reserve System, the Director of the Office of Thrift Supervision, and the Chairman of the Securities and Exchange Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or hillmanr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are acknowledged in appendix VI. Our objectives were to (1) describe the policies and approaches U.S. consolidated supervisors use to oversee large and small holding companies in the financial services industry; (2) review the supervisory agencies’ management of their consolidated supervision programs, including program objectives and performance measures; and (3) evaluate how well consolidated supervisors are collaborating with other supervisors and each other. We conducted our work between November 2005 and February 2007 in accordance with generally accepted government auditing standards in Washington, D.C.; Boston; and other locations where financial institutions are headquartered.
To meet our objectives and to better understand how the three consolidated supervisors—the Federal Reserve System (Federal Reserve), Office of Thrift Supervision (OTS), and Securities and Exchange Commission (SEC)—operate their consolidated supervision programs for large, complex firms, we selected a number of large firms that were supervised by each of the three agencies on a consolidated basis. These firms had at least one of the following characteristics: (1) major international operations, so that they were subject to the European Union’s Financial Conglomerates Directive or were headquartered abroad; (2) a variety of businesses (i.e., insurance, banking, and securities) that were subject to significant supervision by primary bank and functional supervisors, or unregulated subsidiaries; and (3) more than one consolidated supervisor. Before finalizing our selection of firms, we held discussions with the three agencies to obtain their views on the firms we had selected. From this selection process, we chose a total of 14 firms: 6 U.S. bank holding companies, 1 foreign bank with substantial U.S. operations, 4 thrift holding companies that did not have another consolidated supervisor, and 3 consolidated supervised entities (CSE) that were also thrift holding companies. We interviewed officials from some of the selected firms to obtain their views on the benefits and the costs of consolidated supervision. Specifically, to describe the policies and approaches used by U.S. consolidated supervisors—Federal Reserve, OTS, and SEC—we reviewed the Bank Holding Company Act of 1956, the Gramm-Leach-Bliley Act, the Home Owners’ Loan Act of 1933, and SEC’s Alternative Net Capital Rule for CSEs.
We also reviewed the Federal Reserve Board’s Bank Holding Company Supervision Manual and some of the Board’s Division of Banking Supervision and Regulation Letters on large, complex banking organizations; SEC’s regulations establishing the CSE program and examination modules specific to that program; and OTS’s Holding Companies Handbook and Examination Handbook for thrifts. In addition, we reviewed recent Federal Reserve supervisory plans and some reports of targeted reviews. For the three SEC-supervised firms, we reviewed their applications to become CSEs and ongoing supervisory materials, such as monthly risk reports and cross-firm reviews, as well as the results of one CSE examination completed during our review. Because these firms were also thrift holding companies, we reviewed OTS holding company examination reports for them. For the other OTS-supervised firms, we reviewed holding company and thrift examination reports and supervisory planning documents when these were available. In addition to the 14 large firms, we reviewed a few supervisory documents for smaller holding companies supervised by the Federal Reserve and OTS. To review the supervisory agencies’ management of their consolidated supervision programs, we reviewed recent strategic and performance plans from the three agencies. Where relevant, we also reviewed agency testimonies and budget documents. In addition, we reviewed agency guidance specific to consolidated supervision to determine whether program objectives and performance measures were included. We interviewed officials at the three agencies and examiners who were responsible for the supervision of the selected firms on what they considered the goals or benefits of consolidated supervision to be. In addition, we collected information on the operations review program that the Federal Reserve developed for its supervision programs.
To evaluate how well consolidated supervisors are collaborating with other supervisors and each other, we identified practices for effective collaboration from our previous work on collaboration. We also interviewed officials from the three agencies on their efforts to collaborate with each other and with primary bank and functional supervisors overseeing subsidiaries in the holding companies they oversee. In addition, we reviewed examination reports of some of the subsidiaries owned by the 14 holding companies we selected. These included examination reports from the Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, and the New York Stock Exchange. We also interviewed officials and examiners involved in the oversight of the primary banks or functional entities within some of the 14 firms. To gain an international perspective on consolidated supervision and a better understanding of the European Union’s Financial Conglomerates Directive, we spoke to supervisors in two other countries and reviewed documents from a variety of international sources. Specifically, we spoke with Canada’s Office of the Superintendent of Financial Institutions and the United Kingdom’s Financial Services Authority. We also reviewed documents from these supervisory bodies as well as other international sources, including the Basel Committee on Banking Supervision, the Joint Forum, and the European Union. The following is excerpted from the Basel Committee on Banking Supervision’s “Core Principles Methodology,” available at http://www.bis.org/publ/bcbs130.pdf.

Principle 24: Consolidated supervision

An essential element of banking supervision is that supervisors supervise the banking group on a consolidated basis, adequately monitoring and, as appropriate, applying prudential norms to all aspects of the business conducted by the group worldwide.

1.
The supervisor is familiar with the overall structure of banking groups and has an understanding of the activities of all material parts of these groups, domestic and cross-border.

2. The supervisor has the power to review the overall activities of a banking group, both domestic and cross-border. The supervisor has the power to supervise the foreign activities of banks incorporated within its jurisdiction.

3. The supervisor has a supervisory framework that evaluates the risks that non-banking activities conducted by a bank or banking group may pose to the bank or banking group.

4. The supervisor has the power to impose prudential standards on a consolidated basis for the banking group. The supervisor uses its power to establish prudential standards on a consolidated basis to cover such areas as capital adequacy, large exposures, exposures to related parties and lending limits. The supervisor collects consolidated financial information for each banking group.

5. The supervisor has arrangements with other relevant supervisors, domestic and cross-border, to receive information on the financial condition and adequacy of risk management and controls of the different entities of the banking group.

6. The supervisor has the power to limit the range of activities the consolidated group may conduct and the locations in which activities can be conducted; the supervisor uses this power to determine that the activities are properly supervised and that the safety and soundness of the bank are not compromised.

7. The supervisor determines that management is maintaining proper oversight of the bank’s foreign operations, including branches, joint ventures and subsidiaries. The supervisor also determines that banks’ policies and processes ensure that the local management of any cross-border operations has the necessary expertise to manage those operations in a safe and sound manner and in compliance with supervisory and regulatory requirements.

8.
The supervisor determines that oversight of a bank’s foreign operations by management (of the parent bank or head office and, where relevant, the holding company) includes: (i) information reporting on its foreign operations that is adequate in scope and frequency to manage their overall risk profile and is periodically verified; (ii) assessing in an appropriate manner compliance with internal controls; and (iii) ensuring effective local oversight of foreign operations. For the purposes of consolidated risk management and supervision, there should be no hindrance in host countries for the parent bank to have access to all the material information from their foreign branches and subsidiaries. Transmission of such information is on the understanding that the parent bank itself undertakes to maintain the confidentiality of the data submitted and to make them available only to the parent supervisory authority.

9. The home supervisor has the power to require the closing of foreign offices, or to impose limitations on their activities, if: it determines that oversight by the bank and/or supervision by the host supervisor is not adequate relative to the risks the office presents; and/or it cannot gain access to the information required for the exercise of supervision on a consolidated basis.

10. The supervisor confirms that oversight of a bank’s foreign operations by management (of the parent bank or head office and, where relevant, the holding company) is particularly close when the foreign activities have a higher risk profile or when the operations are conducted in jurisdictions or under supervisory regimes differing fundamentally from those of the bank’s home country.

1.
For those countries that allow corporate ownership of banks, the supervisor has the power to review the activities of parent companies and of companies affiliated with the parent companies, and uses the power in practice to determine the safety and soundness of the bank; and the supervisor has the power to establish and enforce fit and proper standards for owners and senior management of parent companies.

2. The home supervisor assesses the quality of supervision conducted in the countries in which its banks have material operations.

3. The supervisor arranges to visit the foreign locations periodically, the frequency being determined by the size and risk profile of the foreign operation. The supervisor meets the host supervisors during these visits. The supervisor has a policy for assessing whether it needs to conduct on-site examinations of a bank’s foreign operations, or require additional reporting, and has the power and resources to take those steps as and when appropriate.

In addition to the contact named above, James McDermott, Assistant Director; Jason Barnosky; Nancy S. Barry; Lucia DeMaio; Nancy Eibeck; Marc W. Molino; Paul Thompson; and Barbara Roesmann also made key contributions to this report.

Risk-Based Capital: Bank Regulators Need to Improve Transparency and Address Impediments to Finalizing the Proposed Basel II Framework. GAO-07-253. Washington, D.C.: February 15, 2007.

Industrial Loan Corporations: Recent Asset Growth and Commercial Interest Highlight Differences in Regulatory Authority. GAO-06-961T. Washington, D.C.: July 12, 2006.

Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.

Industrial Loan Corporations: Recent Asset Growth and Commercial Interest Highlight Differences in Regulatory Authority. GAO-05-621. Washington, D.C.: September 15, 2005.

21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP.
Washington, D.C.: February 1, 2005.

Financial Regulation: Industry Changes Prompt Need to Reconsider U.S. Regulatory Structure. GAO-05-61. Washington, D.C.: October 6, 2004.

Internal Control Management and Evaluation Tool. GAO-01-1008G. Washington, D.C.: August 1, 2001.

Managing for Results: Barriers to Interagency Coordination. GAO/GGD-00-106. Washington, D.C.: March 29, 2000.

Responses to Questions Concerning Long-Term Capital Management and Related Events. GAO/GGD-00-67R. Washington, D.C.: February 23, 2000.

Risk-Focused Bank Examinations: Regulators of Large Banking Organizations Face Challenges. GAO/GGD-00-48. Washington, D.C.: January 24, 2000.

Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1, 1999.

Long-Term Capital Management: Regulators Need to Focus Greater Attention on Systemic Risk. GAO/GGD-00-3. Washington, D.C.: October 29, 1999.
As financial institutions increasingly operate globally and diversify their businesses, entities with an interest in financial stability cite the need for supervisors to oversee the safety and soundness of these institutions on a consolidated basis. Under the Comptroller General’s authority, GAO reviewed the consolidated supervision programs at the Federal Reserve System (Federal Reserve), Office of Thrift Supervision (OTS), and Securities and Exchange Commission (SEC) to (1) describe policies and approaches that U.S. consolidated supervisors use to oversee large and small holding companies; (2) review the management of the consolidated supervision programs, including use of program objectives and performance measures; and (3) evaluate how well consolidated supervisors are collaborating with other supervisors and each other in their activities. In conducting this study, GAO reviewed agency policy documents and supervisory reports and interviewed agency and financial institution officials. The Federal Reserve, OTS, and SEC have responded to the dramatic changes in the financial services industry, and for many of the largest financial services firms, the agencies focus on the firms’ consolidated risks, controls, and capital. Reflecting in part differences in structure, traditional roles and responsibilities, and the length of time they have had to develop and refine their programs, the agencies employ somewhat differing policies and approaches for their consolidated supervision programs. Consolidated supervision becomes more important in the face of changes in the financial services industry, particularly with respect to the increased importance of enterprise risk management by large, complex financial services firms. Consolidated supervision provides a basis for the supervisors to oversee the risks of financial services firms at the same level at which the firms manage those risks.
GAO found that while all of these agencies were meeting international standards for effective oversight of large, internationally active conglomerates and had broad goals for supervision, they could more clearly articulate the specific objectives and performance measures for their evolving consolidated supervision programs. Both the Federal Reserve and OTS, for example, focus on the safety and soundness of the depository institution but could take steps to better measure how consolidated supervision contributes to this in ways that differ from primary supervision of the depository institution. Such objectives and measures would help the agencies ensure consistent treatment of the firms that are subject to consolidated supervision. More effective collaboration can occur if agencies take a more systematic approach to agreeing on roles and responsibilities and establishing compatible goals, policies, and procedures on how to use available resources as efficiently as possible. While the three agencies coordinate and exchange information, they could take a more systematic approach to collaboration with respect to their consolidated supervision programs. For instance, SEC and OTS have authority for some of the same firms with no effective mechanism to prevent duplication, assign accountability, or resolve potential conflicts. Similarly, while the Federal Reserve and other federal bank supervisory agencies have taken steps to share information and examination activities when the Federal Reserve is not the primary supervisor of the lead bank in a bank holding company, some duplication and lack of accountability remain. As a result, consolidated supervision of U.S. financial institutions is not as efficient and effective as it could be if agencies collaborated more systematically. GAO has noted in the past that it is difficult to collaborate within the fragmented U.S. regulatory system and has recommended that Congress modernize or consolidate the regulatory system.
However, if the current system is maintained, it is increasingly important for agencies to collaborate to ensure effective and efficient consolidated supervision, consistent treatment of financial services firms, and clear accountability of the agencies for their supervisory activities.
Among other protections, HIPAA’s standards for health coverage, access, portability, and renewability guarantee access to coverage for certain employees and individuals, prohibit carriers from refusing to renew coverage on the basis of a person’s health status, and place limits on the use of preexisting condition exclusion periods. However, not all standards apply to all markets or individuals. For example, guarantees of access to coverage for employers apply only in the small-group market, and the individual market guarantee applies only to certain eligible individuals who lose group coverage. (The appendix contains a summary of these standards by market segment.) The Department of Labor (Labor) is responsible for ensuring that group health plans comply with HIPAA standards, which is an extension of its current regulatory role under the Employee Retirement Income Security Act of 1974 (ERISA). Treasury also enforces HIPAA requirements on group health plans but does so by imposing an excise tax under the Internal Revenue Code on employers or plans that do not comply with HIPAA. HHS is responsible for enforcing HIPAA with respect to insurance carriers in the group and individual markets, but only in states that do not already have similar protections in place or do not enact and enforce laws to implement HIPAA standards. This represents an essentially new role for that agency. The implementation of HIPAA is ongoing, in part, because the regulations were issued on an “interim final” basis. Further guidance needed to finalize the regulations has not yet been issued. In addition, various provisions of HIPAA have different effective dates. Most of the provisions became effective on July 1, 1997, but the group-to-individual guaranteed access provision did not take effect in 36 states and the District of Columbia until January 1, 1998. And although all provisions are now in effect, individual group health plans do not become subject to the law until the start of their first plan year beginning on or after July 1, 1997.
For some collectively bargained plans, this may not be until 1999 or later, as collective bargaining agreements may extend beyond 12 months. During the first year of implementation, federal agencies, the states, and issuers have taken various actions in response to HIPAA. In addition to publishing interim final regulations by the April 1, 1997, statutory deadline, Labor and HHS have conducted educational outreach activities. State legislatures have enacted laws to implement HIPAA provisions, and state insurance regulators have written regulations and prepared to enforce them. Issuers of health coverage have modified their products and practices to comply with HIPAA. To ensure that individuals losing group coverage have guaranteed access—regardless of health status—to individual market coverage, HIPAA offers states two different approaches. The first, which HIPAA specifies, is commonly referred to as the “federal fallback” approach and requires all carriers who operate in the individual market to offer eligible individuals at least two health plans. (This approach became effective on July 1, 1997.) The second approach, the so-called “alternative mechanism,” grants states considerable latitude to use high-risk pools and other means to ensure guaranteed access. (HIPAA requires states adopting this approach to implement it no later than January 1, 1998.) Among the 13 states using the federal fallback approach, we found that some initial carrier marketing practices may have discouraged HIPAA eligibles from enrolling in products with guaranteed access rights. After the federal fallback provisions took effect, many consumers told state insurance regulators that carriers did not disclose the existence of a product to which the consumers had HIPAA-guaranteed access rights or, when the consumers specifically requested one, the carrier said it did not have such a product available.
Also, some carriers initially refused to pay commissions to insurance agents who referred HIPAA eligibles. Insurance regulators in two of the three federal fallback states we visited told us that some carriers advised agents against referring HIPAA-eligible applicants or paid reduced or no commissions. Recently, though, this practice appears to have abated. We also found that premiums for products with guaranteed access rights may be substantially higher than standard rates. In the three federal fallback states we visited, we found rates ranging from 140 to 400 percent of the standard rate, as indicated in table 1. Anecdotal reports from insurance regulators and agents in federal fallback states suggest rates of 600 percent or more of the standard rate are also being charged. We also found that carriers typically evaluate the health status of applicants and offer healthy individuals access to their lower-priced standard products. This practice could cause HIPAA products to be purchased disproportionately by unhealthy, more costly individuals, which, in turn, could precipitate further premium increases. Carriers charge higher rates because they believe HIPAA-eligible individuals will, on average, be in poorer health, and they seek to prevent non-HIPAA-eligible individuals from subsidizing eligibles’ expected higher costs. Carriers permit or even encourage healthy HIPAA-eligible individuals to enroll in standard plans. According to one carrier official, denying HIPAA eligibles the opportunity to enroll in a less expensive product for which they qualify would be contrary to the consumers’ best interests. In any case, carriers that do not charge higher premiums to HIPAA eligibles could be subject to adverse selection. That is, once a carrier’s low rate for eligible individuals became known, agents would likely refer less healthy HIPAA eligibles to that carrier, which would put it at a competitive disadvantage. 
Finally, HIPAA does not specifically regulate premium rates and, with one exception, the regulations do not require a mechanism to narrow the disparity of rates for products with guaranteed access rights. The regulations offer three options for carriers to provide coverage to HIPAA-eligible individuals in federal fallback states, only one of which includes an explicit requirement to use some method of risk spreading or financial subsidy to moderate rates for HIPAA products. This limited attention to rates in the regulations, some state regulators contend, permits issuers to charge substantially higher rates for products with guaranteed access rights. State insurance regulators also report confusion among consumers who expected to have guaranteed access to insurance coverage. One state reported receiving consumer calls at a rate of 120 to 150 a month, about 90 percent of which related to the group-to-individual guaranteed access provision. Similarly, an official from one large national insurer told us that many consumers believe the law covers them when it actually does not. Issuers of health coverage are concerned about the administrative burden and the unintended consequences of certain HIPAA requirements. One persistent concern has been the administrative burden and cost of complying with the requirement to issue certificates of creditable coverage to all enrollees who terminate coverage. Some issuers are concerned that certain information, such as the status of dependents on a policy, is difficult or time-consuming to obtain. Some state officials are concerned that Medicaid agencies, which are also subject to the requirement, may face an especially difficult burden because Medicaid recipients tend to enroll in and disenroll from the Medicaid program frequently. This could require Medicaid agencies to issue a higher volume of certificates. Finally, issuers suggest that many of the certificates will not be needed to prove creditable coverage.
Several issuers and state insurance regulators point out that portability reforms passed by most states have worked well without a certificate issuance requirement. Also, many group health plans do not contain preexisting condition exclusion clauses, and therefore the plans do not need certificates from incoming enrollees. While issuers generally appear to have complied with this requirement, some suggest that a more limited requirement, such as issuing the certificates only to consumers who request them, would serve the same purpose at less cost. Issuers also fear that HIPAA’s guaranteed renewal requirements could cause individuals who become eligible for Medicare to retain, and pay for, redundant individual market coverage, and the National Association of Insurance Commissioners (NAIC) is concerned that if large numbers of older and less healthy individuals remain in the individual market, premiums for all individuals there could rise as a result. HIPAA’s guaranteed renewal requirements may also preclude issuers from canceling enrollees’ coverage, once they exceed eligibility limits, in insurance programs that are targeted for low-income populations. Therefore, these programs’ limited slots could be filled by otherwise ineligible individuals. Similarly, issuers could be required to renew coverage for children-only insurance products, for children who have reached adulthood—contrary to the design and intent of these products. Finally, issuers cite some HIPAA provisions that have the potential to be abused by consumers. For example, HIPAA requires group health plans to give new enrollees or enrollees switching between plans during an open enrollment period full credit for a broad range of prior health coverage. Since the law does not recognize differences in deductible levels, issuers and regulators are concerned that individuals may enroll in inexpensive, high-deductible plans while healthy and then switch to plans with comprehensive, low-deductible coverage when they become ill. Federal agencies have sought comments from industry on this matter.
In a related example, because HIPAA does not permit pregnancy to be excluded from coverage as a preexisting condition, an individual could forgo the expense of health coverage until becoming pregnant and then enroll in the employer’s group plan as a late enrollee to immediately obtain full maternity benefits. Issuers contend that such abuses, if widespread, could increase the cost of insurance. State regulators have encountered difficulties implementing HIPAA provisions in instances in which federal regulations lacked sufficient clarity. Specifically, some regulators are concerned that the lack of clarity may result in various interpretations and in confusion among the many entities involved in implementation. For example, Colorado insurance regulators surveyed carriers in that state to determine how they interpreted regulations pertaining to group-to-individual guaranteed access. The survey results indicated that issuers had a difficult time interpreting the regulations and were thus applying them differently. As discussed earlier, the ambiguity in the risk-spreading requirement for products available to HIPAA-eligible individuals has been cited as a factor contributing to high rates for these products, which in some states range from 140 to 600 percent or more of standard rates. Other areas in which state insurance regulators have sought additional federal guidance or clarification include use of plan benefit structure as a de facto preexisting condition exclusion period, treatment of late enrollees, market withdrawal as an exception to guaranteed renewability, and nondiscrimination provisions under group plans. Federal agency officials point to a number of factors that may explain the perceived lack of clarity or detail in some regulatory guidance. First, the statute, signed into law on August 21, 1996, required that implementing regulations be issued in less than 8 months, by April 1, 1997.
Implicitly recognizing this challenge, the Congress provided for the issuance of regulations on an interim final basis. This time-saving measure helped the agencies to issue a large volume of complex regulations within the statutory deadline while also providing the opportunity to add more details or further clarify the regulations with the help of comments later received from industry and states. Therefore, some regulatory details necessarily had to be deferred until a later date. Furthermore, agency officials pointed out that in developing the regulations, they sought to balance states’ need for clear and explicit regulations with the flexibility to meet HIPAA goals in a manner best suited to each state. For example, under the group-to-individual guaranteed access requirement, states were given several options for achieving compliance. While the multiple options may have contributed to confusion in some instances, differences among the state insurance markets and existing reforms suggested to agency officials that a flexible approach was in the best interest of states. In fact, according to HHS officials, states specifically requested that regulations not be too explicit in order to allow states flexibility in implementing them. Finally, some of the regulatory ambiguities derive from ambiguities existing in the statute itself. For example, regulations concerning late enrollees closely track the language from the statute. States have the option of enforcing HIPAA’s access, portability, and renewability standards as they apply to fully insured group and individual health coverage. In states that do not pass laws to enforce these federal standards, HHS must perform the enforcement function. According to HHS officials, the agency as well as the Congress and others assumed HHS would generally not have to perform this role, believing instead that states would not relinquish regulatory authority to the federal government. 
However, five states—California, Massachusetts, Michigan, Missouri, and Rhode Island—reported they did not pass legislation to implement HIPAA’s group-to-individual guaranteed access provision, among other provisions, thus requiring HHS to regulate insurance plans in these states. Preliminary information suggests that up to 17 additional states have not enacted laws to enforce one or more HIPAA provisions, potentially requiring HHS to play a regulatory role in some of these states as well. HHS resources are currently strained by its new regulatory role in the five states where enforcement is under way, according to officials, and concern exists about the implications of the possible expansion of this role to additional states. Federal officials have begun to respond to some of the concerns raised during the first year of HIPAA implementation. HHS is continuing to monitor the need for more explicit risk-spreading requirements to mitigate the high cost of guaranteed access products in the individual market under the federal fallback approach. Federal officials believe a change to the certificate issuance requirement in response to issuer concerns would be premature; the officials note that the certificates also serve to notify consumers of their portability rights, regardless of whether consumers ultimately need to use the certificate to exercise those rights. As for guaranteed renewal for Medicare eligibles, federal officials interpret HIPAA to require that individuals, upon becoming eligible for Medicare, have the option of maintaining their individual market coverage. Moreover, HHS officials disagreed with the insurance industry and state regulators’ contention that sufficient numbers of individuals in poor health will remain in the individual market to affect premium prices there. Regulatory guidance on nondiscrimination and late enrollment was published on December 29, 1997.
This guidance clarifies how group health plans must treat individuals who, prior to HIPAA, had been excluded from coverage because of a health status-related factor. Further guidance and clarification in these and other areas are expected to follow. Finally, to address its resource constraints, HHS has shifted resources to HIPAA tasks from other activities. In its fiscal year 1999 budget request, HHS has also requested an additional $15.5 million to fund 65 new full-time-equivalent staff and outside contractor support for HIPAA-related enforcement activities. HIPAA reflects the complexity of the U.S. private health insurance marketplace. The law’s standards for health coverage access, portability, and renewability apply nationwide but must take account of the distinctive features of the small-group, large-group, and individual insurance markets, and of employees’ movements between these markets. From the drafting of regulations to the responses of issuers, implementation of this complex law has itself been complicated but has nonetheless moved forward. Notwithstanding this progress, though, participants and observers have raised concerns and noted challenges to those charged with implementing this law. Some challenges are likely to recede or be addressed in the near term. What could be characterized as “early implementation hurdles,” especially those related to the clarity of federal regulations, may be largely resolved during 1998, as federal agencies issue further regulatory guidance to states and issuers. Moreover, as states and issuers gain experience in implementing HIPAA standards, the intensity of their dissatisfaction may diminish. In any case, while criticizing the cost and administrative burden of issuing certificates of creditable coverage, issuers still seem able to comply.
According to issuers and other participants in HIPAA’s implementation, HIPAA may have several unintended consequences, but predicting whether these possibilities will be realized is difficult. At this early point in the law’s history, these concerns are necessarily speculative because HIPAA’s insurance standards have not been in place long enough for evidence to accumulate. In addition, possible changes in the regulations or amendments to the statute itself could determine whether a concern about a provision’s effects becomes reality. However, two implementation difficulties are substantive and likely to persist unless measures are taken to address them. First, in the 13 federal fallback states, some consumers are finding that high premiums make it difficult to purchase the group-to-individual guaranteed access coverage that HIPAA requires carriers to offer. This situation is likely to continue unless HHS interprets the statute to require more explicit and comprehensive risk spreading in federal fallback states, or unless states adopt other mechanisms to moderate the rates of guaranteed access coverage for HIPAA eligibles. In addition, if the range of consumer education efforts on HIPAA provisions remains limited, many consumers may continue to be surprised by the limited nature of HIPAA protections or to risk losing the opportunity to take advantage of them. Second, HHS’ current enforcement capabilities could prove inadequate to handle the additional burden as the outcome of state efforts to adopt and implement HIPAA provisions becomes clearer in 1998. The situation regarding the implementation of HIPAA’s insurance standards is dynamic. As additional health plans become subject to the law, and as further guidance is issued, new problems may emerge and new corrective actions may be necessary. Consequently, because a comprehensive determination of HIPAA’s implementation and effects remains years away, continued oversight is required. Mr.
Chairman, this concludes my prepared statement. I will be happy to answer your questions. To achieve its goals of improving the access, portability, and renewability of private health insurance, HIPAA sets forth standards that variously apply to the individual, small-group, and large-group markets of all states. Most HIPAA standards became effective on July 1, 1997. However, the certificate issuance standard became effective on June 1, 1997, and issuers had to provide certificates automatically to all disenrollees from that point forward as well as upon request to all disenrollees retroactive to July 1, 1996. In states that chose an alternative mechanism approach, the individual market guaranteed access standard (often called “group-to-individual portability”) had until January 1, 1998, to become effective. Finally, group plans do not become subject to the applicable standards until their first plan year beginning on or after July 1, 1997. Table I.1 summarizes HIPAA’s health coverage access, portability, and renewability standards, by applicable market segment. The text following the table describes each standard. [Table I.1, not reproduced here, maps standards such as limitations on preexisting condition exclusion periods and credit for prior coverage (portability) to the individual, small-group (2-50 employees), and large-group markets; N/A = not applicable.] HIPAA requires issuers of health coverage to provide certificates of creditable coverage to enrollees whose coverage terminates. The certificates must document the period during which the enrollee was covered so that a subsequent health issuer can credit this time against its preexisting condition exclusion period. The certificates must also document any period during which the enrollee applied for coverage but was waiting for coverage to take effect—the waiting period—and must include information on an enrollee’s dependents covered under the plan.
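The certificate contents just described can be pictured as a simple record. The sketch below is illustrative only; the class and field names are hypothetical, not a schema that HIPAA prescribes:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class CreditableCoverageCertificate:
    """Illustrative record of what a certificate must document
    (hypothetical field names, not a HIPAA-mandated format)."""
    enrollee: str
    coverage_start: date                          # period of coverage to document
    coverage_end: date
    waiting_period_start: Optional[date] = None   # applied but awaiting coverage
    dependents: List[str] = field(default_factory=list)  # covered dependents

    def creditable_days(self) -> int:
        """Days a subsequent issuer credits against its exclusion period."""
        return (self.coverage_end - self.coverage_start).days
```

A certificate covering July 1, 1996 through July 1, 1997 would document 365 creditable days for the subsequent issuer.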
In the small-group market, carriers must make all plans available and issue coverage to any small employer that applies, regardless of the group’s claims history or health status. Under individual market guaranteed access—often referred to as group-to-individual portability—eligible individuals must have guaranteed access to at least two different coverage options. Generally, eligible individuals are defined as those with at least 18 months of prior group coverage who meet several additional requirements. Depending on the option states choose to implement this requirement, coverage may be provided by carriers or under state high-risk insurance pool programs, among others. HIPAA requires that all health plan policies be renewed regardless of the health status or claims experience of plan participants, with limited exceptions. Exceptions include cases of fraud, failure to pay premiums, enrollee movement out of a plan service area, cessation of membership in an association that offers a health plan, and withdrawal of a carrier from the market. Group plan issuers may deny, exclude, or limit an enrollee’s benefits arising from a preexisting condition for no more than 12 months following the effective date of coverage. A preexisting condition is defined as a condition for which medical advice, diagnosis, care, or treatment was received or recommended during the 6 months preceding the date of coverage or the first day of the waiting period for coverage. Pregnancy may not be considered a preexisting condition, nor may preexisting condition exclusions be imposed on newborn or adopted children in most cases. Group plan issuers may not exclude a member within the group from coverage on the basis of the individual’s health status or medical history. Similarly, the benefits provided, premiums charged, and employer contributions to the plan may not vary within similarly situated groups of employees on the basis of health status or medical history.
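The exclusion-period rules in this appendix (the 6-month lookback, the 12-month cap, and, for crediting prior coverage, the 63-day break limit) reduce to simple arithmetic. The sketch below is illustrative rather than a legal implementation: months are approximated as 30 days, and the function names are hypothetical:

```python
from datetime import date

MAX_EXCLUSION_MONTHS = 12  # cap on a group plan's exclusion period
MAX_BREAK_DAYS = 63        # a longer break voids earlier coverage credit

def creditable_months(coverage_periods, enrollment_date):
    """Months of prior coverage creditable against an exclusion period.

    coverage_periods: (start, end) date pairs; months approximated as 30 days.
    """
    months = 0
    prev_end = None
    for start, end in sorted(coverage_periods):
        if prev_end is not None and (start - prev_end).days > MAX_BREAK_DAYS:
            months = 0  # break longer than 63 days: earlier credit is lost
        months += round((end - start).days / 30)
        prev_end = end
    if prev_end is None or (enrollment_date - prev_end).days > MAX_BREAK_DAYS:
        return 0  # no prior coverage, or it lapsed too long before enrolling
    return months

def remaining_exclusion_months(treated_in_prior_6_months, prior_credit_months):
    """Exclusion a new group plan may impose for a given condition."""
    if not treated_in_prior_6_months:
        return 0  # not a preexisting condition under the 6-month lookback
    return max(0, MAX_EXCLUSION_MONTHS - prior_credit_months)

# The example from the text: 6 months of prior continuous coverage
# reduces a 12-month exclusion period to 6 months.
credit = creditable_months([(date(1997, 1, 1), date(1997, 7, 1))],
                           date(1997, 7, 15))
```

Applied to the example in the text, 6 months of credit leaves a remaining exclusion of 6 months; a break in coverage longer than 63 days before the new enrollment would instead reduce the credit to zero.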
Issuers of group coverage must credit an enrollee’s period of prior coverage against their preexisting condition exclusion period. Prior coverage must have been continuous, with no breaks of more than 63 days, to be creditable. For example, an individual who was covered for 6 months and then changes employers may be eligible to have the subsequent employer plan’s 12-month preexisting condition exclusion period reduced by 6 months. Time spent in a prior health plan’s waiting period cannot count as part of a break in coverage. Individuals who do not enroll for coverage in a group plan during their initial enrollment opportunity may be eligible for a special enrollment period later if they originally declined to enroll because they had other coverage, such as coverage under COBRA, or were covered as a dependent under a spouse’s coverage and later lost that coverage. In addition, if an enrollee has a new dependent as a result of a birth or adoption or through marriage, the enrollee and dependents may become eligible for coverage during a special enrollment period. HIPAA also includes certain other standards that relate to private health coverage, including limited expansions of COBRA coverage rights; new disclosure requirements for ERISA plans; and, to be phased in through 1999, new uniform claims and enrollee data reporting requirements. Changes to certain tax laws authorize federally tax-advantaged medical savings accounts for small employer and self-employed plans. Finally, although not included as part of HIPAA but closely related, new standards for mental health and maternity coverage became effective on January 1, 1998.
Pursuant to a congressional request, GAO discussed the implementation of the private insurance market provisions of the Health Insurance Portability and Accountability Act of 1996 (HIPAA). GAO noted that: (1) although HIPAA gives people losing coverage a guarantee of access to coverage in the individual market, consumers attempting to exercise this right have been hindered in some states by carrier practices and pricing and by their own misunderstanding of this complex law; (2) in the 13 states using the federal fallback approach to guaranteed access, some carriers initially discouraged people from applying for the coverage or charged them as much as 140 to 600 percent of the standard rate; (3) many consumers also do not fully understand the eligibility criteria that apply and as a result may risk losing their right to coverage; (4) issuers of health coverage believe certain HIPAA provisions are burdensome to administer, may create unintended consequences, or may be abused by consumers; (5) issuers also fear that HIPAA’s guaranteed renewal provision could cause those eligible for Medicare to pay for redundant coverage and hinder carriers’ ability to sell products to children and other targeted populations; (6) certain protections for group plan enrollees may create an opportunity for consumer abuse, such as the guarantees of credit for prior coverage, which could give certain enrollees an incentive, when they need medical care, to switch from low-cost, high-deductible coverage to more expensive, low-deductible coverage; (7) state insurance regulators have encountered difficulties implementing and enforcing HIPAA provisions where federal guidance lacks sufficient clarity or detail; (8) federal regulators face an unexpectedly large role under HIPAA, which could strain the Department of Health and Human Services’ (HHS) resources and weaken its oversight; (9) in states that do not pass legislation implementing HIPAA provisions, HHS is required to take on the regulatory
role; (10) as federal agencies issue more guidance and states and issuers gain more experience with HIPAA, concerns about the clarity of its regulations may diminish; (11) whether unintended consequences will occur is as yet unknown, in part because sufficient evidence has not accumulated; (12) in federal fallback states, premiums for group-to-individual guaranteed access coverage are likely to remain high unless regulations with more explicit risk-spreading requirements are issued at the federal or state level; (13) HHS' ability to meet its growing oversight role may prove inadequate given the current level of resources, particularly if more states cede regulatory authority to the federal government; and (14) in any case, as early challenges are resolved during 1998, other challenges to implementing HIPAA may emerge.
Although VA and DOD have shared resources at some level since the 1980s, the FHCC is the first integrated health care center with a unified governance structure, workforce, and budget. In fiscal year 2015, the FHCC provided care to about 100,000 patients at a total cost of $474 million. The Executive Agreement, signed by the Secretaries of VA, DOD, and the Navy, defines the departments’ sharing relationship at the FHCC and contains key provisions to be met in 12 integration areas. (See table 1 for the key provisions in the Executive Agreement.) According to the governance structure established in the Executive Agreement, the FHCC is accountable to both VA and DOD, with VA serving as the lead department. The FHCC director, a VA executive, is accountable to VA for the fulfillment of the FHCC mission, while the deputy director, a Navy Captain who rotates approximately every 2 years, is accountable to the Navy and, ultimately, DOD. Also in accordance with the Executive Agreement, staff from the Naval Health Clinic Great Lakes and the North Chicago VA Medical Center merged to create a single, joint workforce. This included the transfer of DOD civilian staff employed by the Department of the Navy to VA’s personnel system. As of November 2016, the FHCC’s workforce included approximately 3,482 civilian, active duty, and contract staff. Civilians comprised 69 percent (about 2,396) of the facility’s overall workforce, while 26 percent (about 907) were active duty servicemembers, and 5 percent (about 179) were contract staff. The NDAA 2010 established the Joint DOD-VA Medical Facility Demonstration Fund (Joint Fund) as the funding mechanism for the FHCC, with VA and DOD both making transfers to the Joint Fund from their respective appropriations. As authorized in the NDAA 2010, the Executive Agreement requires a financial reconciliation process that permits VA and DOD to identify their contributions to the Joint Fund each year.
These contribution proportions are determined based on the proportion of shared care provided by each department, as well as the amount each department spent for mission-specific services provided to its beneficiaries. VA and DOD’s approach for evaluating the FHCC involved both separate and joint reviews that included the identification of recommended improvements in their report to Congress. However, the report did not include time frames for implementing these improvements. Additionally, although the departments acknowledged the “very high” costs of operating the FHCC, there was no updated cost-effectiveness analysis included that would provide a baseline for measuring efficiency. VA and DOD’s approach for evaluating the FHCC included conducting both separate and joint reviews to determine whether it should continue operating as an integrated facility with a unified governance structure, workforce, and budget or revert to a “joint venture.” Under a joint venture arrangement, the departments would continue sharing medical facility space, but would manage their operations with separate governance structures, staff, and budgets. VA and DOD initially conducted separate reviews of the FHCC with their own subject matter teams. VA established 9 subject matter teams that began their reviews in August 2015, and DOD established 11 subject matter teams that began their reviews in June 2015. Officials told us that the issues selected for review by the subject matter teams were based on the functional areas of the FHCC, the Executive Agreement, and requirements in the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 (NDAA 2009) that provided guidelines for establishing the demonstration. 
According to officials, each team reviewed the following documents: the FHCC evaluation conducted by VA and DOD’s contractor, the FHCC IT evaluation conducted by the Veterans Health Administration Product Effectiveness group, and other relevant reports, including reviews by GAO and the Institute of Medicine, as well as the mission and purpose of the facility. Based on their assessments, each team was asked to recommend whether the FHCC should continue as an integrated facility or revert to a joint venture. While the majority of VA’s teams recommended that the FHCC should continue operating as an integrated facility, the DOD/Navy teams did not have an overall consensus. (See table 2.) According to a Navy official, the teams’ recommendations were prioritized based on DOD’s determination of the importance of their particular area. Specifically, recommendations of the governance and budget teams were given a higher priority than the other subject matter teams. As a result, their recommendations to continue operating the FHCC as an integrated facility had more weight in DOD’s final determination. VA and DOD officials met jointly in October 2015 to determine the future of the FHCC. They reviewed the work of the subject matter teams, including the teams’ recommendations related to whether the FHCC should continue operating as an integrated facility as well as specific improvements the teams recommended implementing if the FHCC continued to operate as an integrated facility. They also studied the implications of either operating the FHCC as an integrated facility or converting it to a joint venture, and concluded that the latter was not advisable or achievable for two main reasons: The former Naval Hospital Great Lakes had been demolished, and funding for the replacement facility was used to expand the former North Chicago VA Medical Center as part of the demonstration. 
Returning all or some of the 470 civilian employees from VA to DOD’s personnel system would require complex negotiations and could result in job reclassifications and salary changes. As a result, the departments jointly recommended continuing the FHCC as an integrated facility with periodic reviews and the implementation of 17 recommended improvements that had been identified by the subject matter teams. (See table 3.) Although the departments’ report to Congress outlined a number of recommended improvements for the FHCC as part of their decision to continue operating it as an integrated facility, the report did not include time frames for implementing them. VA and DOD officials have been routinely tracking each of the recommended improvements through meetings held twice monthly, and have developed a spreadsheet that includes information on status and next steps. However, officials have not identified time frames as part of their routine tracking efforts. As we have previously reported, leading practices for organizational planning call for results-oriented organizations to develop comprehensive plans that provide tools to ensure accountability, among other things. Although officials have defined goals and identified activities for implementing the recommended improvements, the lack of time frames and interim milestones suggests they do not have all of the tools needed to ensure accountability. Time frames and interim milestones could be used to monitor progress, hold staff accountable for achieving desired results, and make mid-course corrections, if needed. DOD officials acknowledged that although a majority of the recommended improvements do not have this information, the timing for implementing some improvements is outside their control, such as approval and funding for IT enhancements. (See recommended improvement 14 in table 3.)
Additionally, according to these officials, the recommendation to conduct an extensive review and revision of the FHCC Executive Agreement and associated executive decision memoranda to reduce redundancies will be a monumental undertaking, and until this review is under way, officials will not know how much time will be needed to complete these efforts. (See recommended improvement 2 in table 3.) Furthermore, DOD officials informed us that two of the recommended improvements do have time frames, although this is not reflected in the tracking spreadsheet. Specifically, DOD officials stated that the joint staffing study has a completion goal of February 2017, and the proposal for future funding for the FHCC is due to be presented at the April 2017 Advisory Board meeting. (See recommended improvements 6 and 10, respectively, in table 3.) Both VA and DOD officials told us that they believe their current tracking efforts of the recommended improvements are sufficient. However, without time frames and interim milestones for most of the recommended improvements, VA and DOD officials are unable to ensure that these improvements will be implemented in a timely and efficient manner. In the letter that accompanied the report to Congress, both departments acknowledged that the costs associated with the demonstration project were “very high” and not in keeping with the initial goal of delivering more cost-effective health care. The letter further noted that the increased costs were due, in part, to the departments’ inability to appropriately downsize staff, as well as efforts to integrate their separate information systems. VA and DOD officials informed us that their statement about the high costs of the FHCC was based on the FHCC evaluation conducted by their contractor, Knowesis, which was referenced as an appendix in their report to Congress. 
Specifically, the contractor found that integration was not more cost-effective than a joint venture and that the FHCC was not consistently performing as well as the separate VA and Navy facilities were before integration. The contractor’s analyses of the FHCC’s cost-effectiveness used cost data that ended in fiscal year 2014. Since that time, the FHCC has had a change in leadership and has made additional improvements that VA and DOD officials believe would positively impact cost-savings. Consequently, VA and DOD officials informed us that they considered asking the contractor to update its analyses, but ultimately decided against it due to time constraints and the need to enter into a new contract as the prior one had expired. Officials also noted that although the FHCC’s costs had decreased, another analysis with one additional year of data would likely not have changed the contractor’s conclusions or recommendations. In addition, VA and DOD officials stated that they did not have sufficient time to conduct their own analysis with updated cost data to include in the report to Congress after receiving the contractor’s final report in September 2015. Instead, officials told us they discussed the increase in costs that would occur if the integrated facility was converted into a joint venture, which would result in the establishment of duplicative services that would be less efficient than the current arrangement. For example, officials said that the facility would need to have two infection control programs and two credentialing programs that would have to be staffed accordingly, resulting in additional costs. According to OMB’s capital programming guide, at many key decision points, a cost-benefit or cost-effectiveness analysis of operations would be useful to help make decisions. 
Additionally, based on our prior work on evaluating physical infrastructure and management consolidation initiatives, the goals and likely costs and benefits of a consolidation are key questions to consider. Without an updated cost-effectiveness analysis, VA and DOD do not know the extent to which they are achieving their initial goal of delivering more cost-effective health care. Such an analysis would provide a baseline from which to measure and track the FHCC’s future efficiency, including the effect of the recommended improvements, once implemented. It may also help facilitate the identification of any additional improvements and inform other future efforts to integrate VA and DOD facilities.
VA and DOD’s recommendation to continue operating the FHCC as an integrated facility acknowledged the shortcomings and high costs of the demonstration and recommended not initiating similar efforts until they are able to “get it right.” However, despite the departments’ recommended improvements to overcome these shortcomings, deficiencies in monitoring and accountability may impede their ability to improve future operations and ensure cost efficiency. Specifically, the lack of time frames and interim milestones limits the departments’ efforts to ensure the timely and efficient implementation of their recommended improvements. Additionally, without an updated cost-effectiveness analysis, the departments lack the necessary information to know to what extent they are achieving their original goal of more cost-effective care, as well as whether their recommended improvements are contributing to this goal. Until these deficiencies are addressed, the departments cannot be assured that they will actually “get it right” at the FHCC, or that this integrated model of care could or should be replicated in the future.
We recommend that the Secretaries of Veterans Affairs and Defense collaborate to take the following actions:
- develop time frames and interim milestones for tracking and implementing each of their jointly developed recommended improvements; and
- conduct a cost-effectiveness analysis for the FHCC to establish a baseline for measuring the facility’s efficiency over time.
VA and DOD each provided written comments on a draft of this report. In their comments, both departments concurred with our recommendations. In VA’s written comments, reproduced in appendix II, VA provided additional information related to implementing each of our recommendations. Specifically, VA stated that the Veterans Health Administration would work jointly with DOD to develop time frames and milestones for the recommended improvements with a target completion date of April 2017. VA also stated that FHCC officials are working with both departments to define a methodology to conduct a cost-effectiveness analysis using existing FHCC data. Once a methodology has been defined, VA stated that FHCC officials will work with both departments to complete the analysis with a target completion date of June 2018. DOD’s written comments, reproduced in appendix III, did not provide any additional information about implementing our recommendations. DOD also provided technical comments that we incorporated, as appropriate.
We are sending copies of this report to the Secretary of Defense, Secretary of Veterans Affairs, and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix IV. The Captain James A.
Lovell Federal Health Care Center’s (FHCC) Executive Agreement defines the sharing relationship and roles of the Department of Veterans Affairs (VA) and Department of Defense (DOD) and contains key provisions to be met in 12 integration areas. In 2011 and 2012, we reported on the implementation status of the FHCC’s Executive Agreement integration areas and made a number of recommendations. Additionally, in 2016, we reported on the ongoing difficulties that continued at the FHCC and made additional recommendations. See table 4 for our previous recommendations and the status of their implementation.
In addition to the contact named above, Bonnie Anderson, Assistant Director; Danielle Bernstein, Analyst-in-Charge; Jennie Apter; and Linda Galib made key contributions to this report. Also contributing were Jacquelyn Hamilton and David Wise.
The National Defense Authorization Act for Fiscal Year 2010 (NDAA 2010) authorized VA and DOD to establish a 5-year demonstration to integrate their medical facilities in North Chicago, Ill. The NDAA 2010 also required VA and DOD to submit a report of their evaluation of the demonstration and their recommendation as to whether it should continue operating as a fully integrated facility after 5 years. In July 2016, VA and DOD submitted a report to Congress recommending that the FHCC continue operating as an integrated facility. The NDAA 2015 included a provision for GAO to assess VA and DOD's evaluation to Congress. In this report, GAO assesses VA and DOD's approach for evaluating the FHCC and making the determination to continue operating it as an integrated facility. To do this, GAO reviewed the report to Congress and relevant supporting documents, and interviewed officials about the evaluation. In analyzing the evaluation, GAO used as criteria its prior work on planning practices, evaluating physical infrastructure, and management consolidation initiatives, as well as the Office of Management and Budget's (OMB) capital programming guide. The Department of Veterans Affairs (VA) and the Department of Defense's (DOD) evaluation to determine whether the Captain James A. Lovell Federal Health Care Center (FHCC) should continue operating as an integrated facility or revert to a “joint venture” included conducting both separate and joint reviews. As an integrated facility, the FHCC has a unified governance structure, workforce, and budget. As a joint venture, the departments would continue sharing medical facility space, but would manage their operations with separate governance structures, workforces, and budgets. VA and DOD's joint review team concluded that converting the FHCC to a joint venture was not advisable or achievable because the Navy hospital had been demolished and money to replace it was used to expand the VA facility. 
In addition, returning the civilian employees from VA's to DOD's personnel system would require complex negotiations that could result in job reclassifications and salary changes. As a result, officials recommended continuing the FHCC as an integrated facility with the implementation of specific recommended improvements with the caveat that no similar integration efforts be undertaken until they “get it right” at the FHCC. In the report to Congress, VA and DOD outlined 17 recommended improvements for the FHCC but did not include time frames for implementing them. As GAO has previously reported, leading practices for planning call for results-oriented organizations to develop plans that provide tools to assure accountability, such as time frames and interim milestones that could be used to monitor progress, hold staff accountable for achieving desired results, and make mid-course corrections, if needed. Although officials routinely track each improvement through twice monthly meetings, and use a spreadsheet to monitor status and next steps, they have not specified time frames and interim milestones. Without this information, officials cannot ensure that they will implement the recommended improvements in a timely and efficient manner. The letter that accompanied the report to Congress stated that the FHCC's costs were “very high” and not in keeping with the initial goal of delivering more cost-effective health care. VA and DOD officials told GAO that this statement was based on their contractor's evaluation of the facility, which found that the FHCC was not more cost-effective than a joint venture. Officials told GAO that their contractor's analyses used cost data that ended in fiscal year 2014, and since that time, the FHCC has made improvements they believe would positively impact cost savings. 
However, officials said that they did not have sufficient time for the contractor to update the analysis after receiving the contractor's report in September 2015, and that one additional year of data would not likely have changed their conclusions or recommendations. According to OMB's capital programming guide, at many key decision points, a cost-effectiveness analysis of operations would be useful to help make decisions. Without an updated cost-effectiveness analysis for the FHCC, officials will not have a baseline from which to measure and track the FHCC's future efficiency, including the effect of the recommended improvements, once implemented. GAO recommends that the Secretaries of VA and DOD collaborate to establish time frames and interim milestones for tracking the implementation of the jointly recommended improvements and to conduct a cost-effectiveness analysis for the FHCC. VA and DOD concurred with GAO's recommendations.
The Clean Water Act prohibits the discharge of oil into or upon navigable waters or adjoining shorelines and requires the President to establish regulations to prevent oil spills. The President subsequently delegated this responsibility to EPA. To fulfill this requirement, in 1973, EPA issued its Oil Pollution Prevention Regulation, which outlined actions regulated facilities must take to prevent, prepare for, and respond to oil spills before they reach navigable waters or adjoining shorelines. Under this rule, as amended through 2006, EPA seeks to prevent oil spills from storage tanks at facilities that (1) have an aggregate aboveground storage tank capacity of more than 1,320 gallons or a total completely buried storage capacity greater than 42,000 gallons and (2) could reasonably be expected, due to their location, to discharge oil in quantities that may be harmful into or upon the navigable waters of the United States or onto adjoining shorelines. EPA estimated that about 571,000 facilities were regulated under the SPCC rule as of 2005. Oil production facilities (an estimated 166,000 facilities or 29 percent of the total) and farms (an estimated 152,000 facilities or 27 percent of the total) account for the largest portion of these estimated facilities. The SPCC rule does not require facilities that are covered under the rule to report to EPA that they are covered. Therefore, the agency does not have an inventory of facilities that it regulates under the program. However, facilities are required to report discharges of oil in quantities that may be harmful to navigable waters or adjoining shorelines to the National Response Center (NRC), but EPA does not consider these and other data reliable enough to determine the number of facilities subject to the SPCC rule that have had oil spills. The SPCC rule is a cornerstone of EPA’s strategy to prevent oil spills from reaching the nation’s waters.
The regulation requires each owner or operator of a regulated onshore or offshore facility to prepare or amend and implement an SPCC plan that describes the facility’s design, operation, and maintenance procedures established to prevent spills from occurring, as well as countermeasures to control, contain, clean up, and mitigate the effects of an oil spill that could reach navigable waters or adjoining shorelines. Unlike oil spill contingency plans that typically address spill cleanup measures after a spill to navigable waters or adjoining shorelines has occurred, SPCC plans ensure that facilities put in place containment and other measures—such as regular visual inspection and integrity testing of bulk storage containers—to prevent oil spills that could reach navigable waters or adjoining shorelines. EPA’s 10 regional offices administer an inspection program to ensure compliance with the regulations. EPA proposed revisions to the SPCC rule in October 1991 and February 1993. In addition to clarifying previous regulatory language, these proposed revisions outlined additional requirements for regulated facilities. In December 1997, EPA proposed additional amendments to the SPCC requirements, focusing on measures to reduce the information collection burden on affected facilities. Many, but not all, of the amendments to the rule proposed by EPA in 1991, 1993, and 1997, were made final in July 2002. EPA made over 100 amendments to the rule in 2002, including more than 30 that EPA considers to be major. Several of these amendments changed the scope of the rule’s applicability. 
For example, the 2002 amendments:
- exempted from the rule containers with a capacity of less than 55 gallons, completely buried storage tanks subject to all of the technical requirements of underground storage tank regulations, permanently closed oil tanks as defined in the regulation, and any facility or part thereof used exclusively for wastewater treatment; and
- eliminated the provision triggering the requirement for an SPCC plan when any single container has a capacity of greater than 660 gallons but maintained the 1,320-gallon total capacity threshold.
The 2002 amendments also added to or changed the language of some definitions in the 1973 rule in order, according to EPA, to clarify which facilities are subject to the rule and facilities’ responsibilities under the rule. For example, according to EPA, the 2002 amendments clarified the following:
- A “facility” may be as small as a piece of equipment—for example, a tank—or as large as a military base;
- “oil” includes not only petroleum oil, but such other products as animal fats, vegetable oils, and oil mixed with wastes, other than “dredged spoil”; and
- what “navigable waters” means for purposes of the rule.
The SPCC rule applies to facilities that “use” oil, such as in the operational use of oil-filled equipment. EPA had always considered statements in the existing (1973) SPCC regulations that a facility “should” implement a specific rule provision as meaning that a facility was required to comply with that provision or, if circumstances warranted, undertake alternative methods to achieve environmental protection. As a result, EPA changed “should” to “must” to reflect this understanding and address any confusion that compliance with such provisions was optional. According to EPA, the agency made several of these definitional changes to clarify the types of facilities that are included under the rule and facilities’ requirements.
However, many industry sectors consider several of these amendments to be changes to the requirements of the rule rather than clarifications and, in some cases, maintain that they had not considered themselves subject to the rule prior to these changes. (A summary of industries’ views on the impacts that these and other amendments to the SPCC rule have had or are likely to have on the regulated community, and our analysis of these views, are included in apps. II and III, respectively.) Several of the rule’s amendments also changed requirements for preparing, implementing, reviewing, and amending SPCC plans. For example, the 2002 amendments to the rule:
- decreased, from once every 3 years to once every 5 years, the frequency with which a facility’s SPCC plan must be reviewed;
- required that the plan include a diagram of the facility, and that completely buried storage tanks located on the facility—otherwise exempt from SPCC rules—be included on the facility diagram; and
- gave EPA regional administrators the authority to require that any facility within their jurisdiction amend the SPCC plan after on-site review of the plan and to extend the period of time for facilities already in operation to amend or complete their plans.
Other amendments to the rule in 2002 changed facility requirements regarding the use and testing of containers, piping, and other equipment to prevent or mitigate the effects of oil spills from containers.
For example, the 2002 amendments:
- amended the integrity testing requirements for aboveground containers and required brittle fracture evaluation of field-constructed aboveground containers that may have a risk of discharge;
- added specificity to the description of secondary containment requirements, such as detailing that the containment system, including walls and floors, must be capable of containing oil and constructed so that any discharge from the primary containment system is prevented from escaping before cleanup occurs; and
- required a facility to conduct periodic integrity testing of containers and piping, in addition to the other requirements—i.e., contingency planning and a written commitment of resources—when the owner/operator determines and clearly explains that the installation of specific secondary containment structures or equipment is not practicable.
In December 2006, EPA again made several changes to the SPCC rule, including several major amendments to provide additional burden relief to the regulated industries on specific rule provisions. For example, the scope of the rule’s applicability was changed, potentially reducing the number of facilities under the rule, by excluding motive power containers from the rule’s requirements. In addition, the 2006 amendments also changed requirements for preparing SPCC plans by providing an option for “qualified facilities” to prepare a self-certified SPCC plan instead of one that is reviewed and certified by a professional engineer. The 2006 amendments also decreased some secondary containment requirements to reduce the burden for facilities.
For example, the 2006 amendments:
- exempted facilities from having to construct and meet requirements for specific sized secondary containment for mobile refuelers; and
- allowed facilities to use alternatives to general secondary containment requirements for qualified oil-filled operational equipment, such as preparing an oil spill contingency plan and a written commitment of resources to control and remove discharged oil, and requiring an inspection or monitoring program.
Although changes to the rule were finalized in 2002 and 2006, EPA extended the date of compliance in 2003, 2004, 2006, and 2007. Currently, owners and operators of facilities in existence on or before August 16, 2002, must continue to maintain their SPCC plans, and then must amend them to ensure compliance with current requirements, and implement the amended plan no later than July 1, 2009. Facilities beginning operations after August 16, 2002, must prepare and implement a plan by July 1, 2009. EPA made this latest extension to, among other things, allow owners and operators of facilities the time to fully understand the 2002 and 2006 amendments and the further revisions to the rule EPA plans to make in 2008 and to make changes to their facilities and SPCC plans.
EPA determined that the 2002 and 2006 amendments constituted significant regulatory actions under Executive Order 12866. For significant regulatory actions, Executive Order 12866 requires agencies to assess the benefits and costs of, and reasonably feasible alternatives to, the planned regulatory action. In response, EPA conducted an economic analysis to provide estimates of the potential costs and benefits of the 2002 amendments. In addition, the agency conducted economic analyses of the 2006 amendments, both as proposed in 2005 and as made final in December 2006. EPA’s Office of Solid Waste and Emergency Response conducted these analyses.
EPA’s economic analysis of the 2002 SPCC amendments had a number of limitations that reduced its usefulness for assessing the economic trade-offs associated with the amendments. Specifically, EPA’s 2002 analysis was limited because it did not (1) assess the uncertainty associated with key data and assumptions, such as the degree to which facilities were already in compliance with the amendments, (2) analyze the effect of regulatory alternatives to the amendments, (3) provide the compliance costs that EPA expected facilities to incur or save as a result of the amendments in comparable present value terms, and (4) estimate the effect of the amendments on the risk of an oil spill and on public health and welfare and the environment. These limitations raise questions about the reasonableness of the estimates and limit their usefulness for informing decision makers, stakeholders, and the public about the potential effects of the 2002 amendments.
EPA estimated the compliance costs or cost savings to the regulated community of complying with the 2002 SPCC amendments using the following methodology: First, EPA established a baseline for the analysis, which it defines as a projection of regulated facility behavior in the absence of new regulatory provisions. For the purposes of its analysis, EPA assumed that the baseline represented full compliance by regulated facilities with the existing (1973) regulation, as well as industry behavior, practices, or standards that exceed the existing regulation. After establishing the baseline, EPA classified each regulatory revision or amendment into one of five categories: baseline, cost increase, negligible increase, cost savings, or negligible savings. Second, EPA estimated the total number of potentially affected facilities covered by the regulation to account for differences in the total potential costs for different sizes of facilities.
Because estimating the economic effects of the amendments first required information on the size of the regulated community, EPA used a 1995 survey that it had conducted to determine the estimated number and size of production and storage facilities in most regulated industry sectors. Third, EPA estimated the costs of compliance for each regulated facility (that is, hours multiplied by the wage rate) for certain amendments, varying costs for each facility by its size. EPA developed costs for each facility for amendments considered to have cost increases or cost savings that were not negligible. Finally, EPA estimated the annual total compliance costs (or cost savings) associated with the amendments by multiplying the estimated costs per facility by the estimated number of affected facilities, taking into account whether the facility was small, medium, or large. EPA then aggregated the first-year and subsequent-year costs or savings incurred by all facilities. On the basis of this methodology, EPA estimated the costs that facilities will incur by implementing the 2002 amendments. As shown in table 1, EPA estimated that facilities will incur costs the first year and then save costs in the following years. EPA’s estimates of the economic impacts of the 2002 SPCC amendments are based on assumptions and data that are subject to uncertainty. In conducting its analysis of the amendments, however, EPA did not evaluate these uncertainties, as OMB guidelines advise. For example, EPA did not consider the uncertainties relating to its assumptions about facilities’ compliance with the existing 1973 SPCC rule and the potential impacts of revisions that were intended to clarify what types of facilities are subject to the rule. According to EPA, many of the 2002 SPCC amendments are either clarifications or editorial in nature, or they do not represent a substantive change in the existing regulatory requirements. 
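The costing and aggregation steps described above reduce to simple arithmetic: a per-facility cost (hours multiplied by the wage rate), scaled by the estimated number of affected facilities in each size class, then summed for the first year and for subsequent years. A minimal sketch of that calculation follows; the facility counts, hours, and wage rate are hypothetical placeholders, not EPA’s figures:

```python
# Illustrative sketch of EPA's costing method: per-facility compliance cost
# (hours x wage rate), scaled by the estimated number of affected facilities
# in each size class, then aggregated for the first year and for subsequent
# years. All counts, hours, and the wage rate are hypothetical placeholders.

def facility_cost(hours, wage_rate):
    """Per-facility compliance cost for one amendment."""
    return hours * wage_rate

# Hypothetical size classes: facility count, first-year hours, later-year hours.
size_classes = {
    "small":  (40_000, 4.0, 1.0),
    "medium": (15_000, 10.0, 2.5),
    "large":  (5_000, 25.0, 6.0),
}
WAGE_RATE = 50.0  # illustrative loaded hourly rate, in dollars

first_year_total = sum(
    count * facility_cost(first_hours, WAGE_RATE)
    for count, first_hours, _ in size_classes.values()
)
subsequent_year_total = sum(
    count * facility_cost(later_hours, WAGE_RATE)
    for count, _, later_hours in size_classes.values()
)

print(f"First-year total: ${first_year_total:,.0f}")
print(f"Subsequent-year annual total: ${subsequent_year_total:,.0f}")
```

Because first-year and later-year hours differ, the same structure naturally produces the pattern EPA reported: costs incurred in the first year and savings (or smaller costs) in subsequent years.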
In assessing the economic impacts associated with these amendments, EPA maintained that the clarifications were making explicit provisions or requirements that were already implicit in the existing SPCC rule, rather than introducing new ones. Therefore, in its analysis, EPA assumed that all regulated facilities were in full compliance with these existing provisions and would not incur any additional compliance costs as a result of the amendments. In addition, to the extent that regulated facilities were not in compliance with the provisions being clarified, EPA assumed that any cost they would incur to comply should be attributed in its analysis to the baseline and not to the 2002 amendments. However, the extent to which facilities were in compliance—or would be in compliance in the future in the absence of the amendments—is highly uncertain. As a result, EPA’s cost estimates do not fully reflect the potential impacts of the amendments. If, contrary to EPA’s assumption, facilities were not previously in compliance with the clarified provisions, but are brought into compliance by the 2002 amendments, the estimated costs (or cost savings) that should be attributed to the 2002 amendments would be higher (or lower), all else remaining the same. For example, in commenting to EPA and OMB on the proposed 2002 amendments, a representative of the electric utility industry stated that, until EPA clarified in the 2002 amendments that “users” of oil are subject to the rule, the electric utility industry did not believe that the SPCC rules applied to electrical equipment. Because of EPA’s clarification, however, facilities in this industry found that they were subject to the rule and EPA would consider them to have been out of compliance. As a result, the representative stated, the clarification would cause that industry to incur substantial costs to modify its facilities to meet the requirements of the amendments, such as installing secondary containment. 
EPA’s economic analysis stated that it was possible that some facilities misinterpreted the existing regulation and were not in full compliance with it, but there was no practical way to measure industry compliance. OMB guidelines indicate, however, that agencies can use uncertainty analysis to assess the effect of multiple baselines with different assumptions about the degree of compliance, particularly when industry compliance with existing regulations is uncertain and when different assumptions about compliance could substantially affect the estimated benefits and costs. Without such an analysis, EPA excluded the potential impact of current industry practice from its assessment of the total costs and benefits associated with the 2002 amendments, thus potentially misstating these amounts. In addition, EPA did not account for the uncertainty associated with its estimates of the number of facilities affected by the amendments. Because these estimates were subject to sampling error, EPA may not have accurately presented the number of facilities subject to the amendments. For example, for its estimates, EPA used a 1995 survey, which was based on a statistical sample of facilities in the 48 contiguous states. On the basis of this survey and subsequent adjustments agency officials made using their professional judgment, EPA estimated that 51,398 facilities would no longer be subject to the requirements of the SPCC rule as a result of the 2002 amendments. However, like estimates from all statistical samples, EPA’s estimates are subject to sampling error, which is the imprecision that results from surveying a sample of facilities rather than surveying every facility in the country. In its 2002 analysis, EPA acknowledged the sampling error, stating that its estimates of the number of facilities were accurate within plus or minus 10 percent. However, EPA did not account for this sampling error when estimating the costs associated with the amendments. 
OMB guidelines direct that the agencies ensure that their estimates reflect the full probability distribution of potential results. Consequently, to account for the imprecision in the estimated facilities and costs, it would have been appropriate for EPA to analyze the uncertainty associated with these estimates. OMB guidelines direct agencies to consider the most important alternative approaches to some or all of a rule’s provisions and provide their reasons for selecting the preferred regulatory action over such alternatives. However, EPA’s 2002 analysis did not assess alternatives to the amendments, such as alternative levels of stringency or alternative lead times to comply. To provide decision makers and the public with information on how the costs and benefits might vary depending on the regulatory approach, it would have been appropriate for EPA to assess the effect of alternatives in its analysis of the 2002 amendments. Without information on the benefits and costs of alternative regulatory actions, it is difficult to confirm that EPA’s preferred regulatory approach maximizes net benefits. Moreover, OMB guidelines state that agencies should discount costs and benefits that accrue in different time periods to present values. As depicted in table 1, EPA did not present the total cost estimate (costs incurred minus cost savings) of the amendments in comparable, net present value terms. Instead, EPA estimated the costs that would be incurred in the first year that the rule is in effect and the cost savings that facilities would achieve in the second and subsequent years. EPA officials stated that the present value of estimated costs is not significantly different from the cost estimates in the simple analysis it conducted absent the discounting. 
Nonetheless, since EPA estimated costs incurred and cost savings in the first year and each subsequent year over the life of the amendments, it would have been appropriate for EPA to present the total net costs in comparable present value terms. To compute present value, the agencies are directed to discount the estimated benefits and costs using interest rates recommended by OMB. Finally, OMB guidelines direct agencies to quantify and monetize the benefits (including the benefits of risk reductions) associated with the regulatory action, whenever possible. Moreover, when benefits are difficult to monetize, the OMB guidelines state that acceptable quantitative estimates of benefits and costs are preferable to qualitative descriptions. In cases where quantification is difficult, the guidelines direct the agencies to present any relevant quantitative information and describe the unquantifiable effects. In its analysis of the 2002 amendments, however, EPA did not monetize or quantify the potential benefits expected to result from any of the amendments. In addition, EPA’s qualitative discussion of the potential beneficial aspects of the 2002 amendments was very limited. For example, the agency discussed the general risk of an oil spill and the general damage that might be caused to public health and welfare and the environment. EPA stated that it assumed that the amendments would have minimal effects on the risks of a spill, lessen the burden to the regulated community, and maintain the existing level of protection to public health and welfare and the environment. Nonetheless, some of the 2002 amendments are more stringent than the existing SPCC rule, possibly reducing the risk of an oil spill, while other amendments are less stringent (that is, burden reducing), possibly increasing the risk of an oil spill. 
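The present value computation that OMB’s guidelines call for is a standard discounting exercise: each year’s net cost (costs minus savings) is divided by (1 + r)^t, where r is the discount rate and t is the number of years in the future. A rough sketch, using invented cash flows and a 7 percent rate chosen only for illustration:

```python
# Net present value of a stream of annual net costs (costs minus savings),
# discounted at rate r: sum over years t of net_cost_t / (1 + r)**t.
# The cash flows and the 7 percent rate below are illustrative only.

def present_value(net_costs_by_year, rate):
    """Discount year-indexed net costs (year 0 = first year) to present value."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(net_costs_by_year))

# Hypothetical pattern: a cost in the first year, then savings (negative
# costs) in each of the next four years, in millions of dollars.
net_costs = [21.8, -5.4, -5.4, -5.4, -5.4]
npv = present_value(net_costs, rate=0.07)
print(f"Net present value: ${npv:.2f} million")
```

Note that with these hypothetical figures the undiscounted total is barely positive, while discounting shrinks the value of the later savings, so the present-value net cost is noticeably larger; this is precisely why presenting first-year costs and later-year savings without discounting, as EPA did, can misstate the net effect.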
Without more substantive information on the potential effect of the amendments on the risk of an oil spill and the resulting effect on public health and welfare and the environment, it is difficult to confirm that the benefits of the amendments exceed their costs, as EPA concluded. EPA’s economic analysis of the 2006 amendments to the SPCC rule addressed several of the limitations in the agency’s 2002 analysis. However, the 2006 analysis also had some limitations that made it less useful than it could have been for assessing the economic trade-offs associated with the amendments. As shown in table 2, EPA estimated the compliance cost savings that would be generated by the 2006 amendments under (1) a baseline assuming full compliance with the existing SPCC rule including the 2002 amendments, (2) an alternative baseline assuming only 50 percent compliance with the existing SPCC rule including the 2002 amendments, and (3) different assumptions about the number of facilities that would be affected by the 2006 amendments. Under the alternative baseline, compliance cost savings would be roughly half as much as under the full compliance baseline because owners and operators of facilities that are not currently in compliance will not save costs as a result of the changes for burden reduction. In addition, because EPA did not have data on the precise number of facilities that would be affected by the amendments, EPA assessed the uncertainty associated with its estimates using arbitrarily developed scenarios for three of the major components of the rule. Based on this approach, EPA assumed that various percentages of the facilities would be affected by the regulatory changes in the rule. For example, for facilities with qualified oil-filled operational equipment, EPA analyzed the cost savings under different assumptions about the number of facilities that would be affected by the rule, ranging from 25 percent to 75 percent of the total number. 
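The scenario approach described above, combined with the alternative 50 percent compliance baseline, can be sketched as a simple sensitivity calculation. The facility universe, per-facility savings, and compliance rate below are invented for illustration and are not EPA’s figures:

```python
# Sketch of scenario-based uncertainty analysis: vary the assumed share of
# facilities affected by an amendment and report the resulting range of
# estimated annual cost savings under both a full-compliance baseline and
# an alternative 50 percent compliance baseline. All inputs are hypothetical.

TOTAL_FACILITIES = 100_000       # hypothetical regulated universe
SAVINGS_PER_FACILITY = 1_200.0   # hypothetical annual savings per facility ($)
COMPLIANCE_RATE = 0.5            # alternative baseline: half currently comply

for affected_share in (0.25, 0.50, 0.75):
    affected = TOTAL_FACILITIES * affected_share
    # Only facilities already in compliance realize burden-reduction savings.
    full_baseline_savings = affected * SAVINGS_PER_FACILITY
    alt_baseline_savings = full_baseline_savings * COMPLIANCE_RATE
    print(f"{affected_share:.0%} affected: "
          f"${full_baseline_savings:,.0f} (full compliance) vs. "
          f"${alt_baseline_savings:,.0f} (50% compliance)")
```

Reporting the full range of scenarios, rather than a single point estimate, is what conveys to decision makers how sensitive the estimated savings are to the compliance and coverage assumptions.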
Moreover, unlike its 2002 analysis, EPA’s 2006 analysis considered and discussed some regulatory alternatives. For example, in the version of these amendments proposed in 2005, EPA proposed an exemption from the oil-filled operational equipment requirement for facilities that had no reportable discharges from their equipment in the 10 years prior to the date of their SPCC plan certification. Partly in response to comments on the proposed rule, EPA narrowed the restriction in the 2006 final rule to owners and operators that have not had a discharge exceeding 1,000 gallons or two discharges exceeding 42 gallons within a 12-month period in the 3 years prior to SPCC plan certification. Oil spills that are the result of natural disasters are not subject to these limitations. In its economic analysis of the 2006 final rule, EPA discussed the differences between the cost estimates for the restriction proposed in 2005 and the estimates for the restriction adopted in 2006. EPA estimated that the final rule cost savings would be greater under certain conditions (that is, if 75 percent of facilities are affected by the amendment) than estimated in the proposed version. Despite the improvements over its 2002 analysis, EPA’s analysis of the 2006 amendments also had some limitations that made it less useful than it could have been for assessing the economic trade-offs associated with the amendments. For example, EPA did not quantify or monetize the potential impacts of the 2006 amendments on the risk of an oil spill and on public health and welfare and the environment. Instead, EPA provided only a very limited qualitative discussion of the general risk of an oil spill and the potential damage that it might cause.
EPA reported that the reduced compliance costs will translate to net social benefits, but that these benefits might be partially offset by the potential increase in the risk of an oil spill (because of the less stringent requirements of the 2006 amendments compared with the existing requirements). EPA also stated that quantifying net benefits (benefits minus costs) associated with the 2006 amendments was not possible due to unknown future impacts of the rule, but it concluded that cost savings resulting from the amendments will not be offset by any significant losses in environmental protection. Nonetheless, it is difficult to affirm EPA’s conclusion without more substantive information on the potential effect of the amendments on the risk of an oil spill and the resulting effect on public health and welfare and the environment. In addition, because EPA’s estimates of the number of facilities that would be affected by the 2006 amendments were not based on nationally representative samples, the results may not be accurate. In particular, for the one amendment that would reduce the burden for certain SPCC-regulated facilities, EPA based its estimates of the number of facilities that would be affected by this amendment on data drawn from eight states: Florida, Kansas, Maryland, Minnesota, New York, Oklahoma, Virginia, and Wisconsin. Because facilities in these states may not have been representative of facilities nationwide, EPA’s use of these data in its analysis could have introduced bias into its estimates of the number of facilities and costs for this amendment. Furthermore, EPA excluded from its analysis more than half of the facilities in these eight states because the industrial category for these facilities could not be determined and could not be matched to an additional database. By not including such a high proportion of facilities on a nonrandom basis, additional error was likely introduced into EPA’s estimates of the number of SPCC-regulated facilities.
It is, therefore, unclear whether the facilities that EPA included in the analysis are even representative of the universe of facilities within these eight states. EPA acknowledged these limitations in its analysis and stated that the analysis provided the best possible results given time and resource constraints. However, the actual number of U.S. facilities, and hence the resulting cost impacts, could be greater or less than EPA estimated. Overall, EPA reported that its analysis did not fully comply with OMB guidelines for conducting economic analyses of significant regulatory actions. It is difficult to confirm, however, that the regulatory changes are economically justified, as EPA concluded, without an estimate of both the costs and benefits associated with the amendments. Because both the 2002 and 2006 amendments to the SPCC rule are significant regulatory actions, it is important for EPA to have a credible economic basis for selecting these as the agency’s preferred regulatory actions. However, although EPA’s 2006 analysis improved upon its 2002 analysis, both analyses had limitations that may make it difficult for decision makers, stakeholders, and the public to verify that the agency has fully analyzed the economic impacts of its regulatory actions. Specifically, because EPA did not analyze key uncertainties in its analysis of the 2002 amendments, including the degree to which facilities were in compliance with some of the revisions, the reliability of the estimated costs and cost savings is questionable. In addition, EPA did not assess regulatory alternatives in its analysis for the 2002 amendments, making it difficult to confirm that EPA’s preferred regulatory approach is economically superior to other possible approaches. 
Moreover, because EPA did not estimate the impact of the amendments on the potential risk of an oil spill and on public health and welfare and the environment for either the 2002 or the 2006 amendments, EPA’s economic analyses may not provide decision makers, stakeholders, and the public with a sufficient basis for concluding that the benefits of the amendments outweigh their costs, as EPA did. Although we recognize that evaluating regulatory impacts is a complex task, unless EPA conducts more thorough economic analyses consistent with OMB guidelines, decision makers, stakeholders, and the public may lack assurance that the agency has fully evaluated the economic trade-offs of its regulatory actions. To improve the usefulness of the agency’s economic analysis for informing decision makers and the public, we recommend that the Administrator, EPA, take action to ensure that the agency’s economic analysis of future changes to the SPCC rule includes all of the key elements for such analyses contained in OMB’s guidelines for complying with Executive Order 12866. GAO provided EPA with a draft of this report for its review and comment. The agency stated that it generally agreed with the recommendation in the report to improve the agency’s economic analyses for future changes to the SPCC rule, consistent with OMB guidelines, and has undertaken several initiatives to improve its analyses. EPA noted that, consistent with our recommendation, the agency has (1) activated a core SPCC Economic Subgroup of economic and technical experts; (2) acquired additional expert contractor support; and (3) hired an experienced senior economist to guide these efforts, and plans to continue gathering additional data to improve its understanding of the regulated universe and oil spill risks, and to address uncertainty and quantify benefits. 
In addition, EPA commented that the agency believes that the economic analyses that it conducted for the 2002 and 2006 amendments to the SPCC rule are already consistent with, and meet the spirit and intent of, OMB guidelines, given the limited data, time, and resources available. However, because both the 2002 and 2006 amendments to the SPCC rule were significant regulatory actions potentially affecting thousands of facilities across a wide range of industries, it is important for EPA to have a credible economic basis for selecting its preferred regulatory actions. In particular, we found that EPA’s analyses were generally not consistent with OMB guidelines in some key areas, including accounting for the extent to which facilities were in compliance with the existing 1973 rule and in assessing the impact of the amendments on the risk of an oil spill and public health and the environment. Decision makers, stakeholders, and the public may lack assurance that the agency has fully evaluated the economic trade-offs of its regulatory actions without more thorough economic analyses consistent with OMB guidelines. Finally, EPA commented that it does not agree with GAO’s characterization that the agency’s sensitivity analysis of the 2006 amendments used “arbitrarily developed scenarios” for three of the major components affected by the rule. However, in its economic analysis of the 2006 amendments, EPA stated that it “arbitrarily developed three scenarios” to estimate the number of facilities that might be affected by these components. Furthermore, we did not comment on EPA’s use of these scenarios because, according to the agency, data on the number of facilities that might be affected by the rule were not available. EPA also provided technical comments on the draft report, which we have incorporated as appropriate. The full text of EPA’s comments is included as appendix IV. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 20 days from the report date. At that time, we will send copies to the Administrator of EPA and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. We reviewed the reasonableness of the economic analyses that the Environmental Protection Agency (EPA) used in support of the 2002 and 2006 Spill Prevention, Control, and Countermeasure (SPCC) amendments. To determine the reasonableness of EPA’s economic analyses, we assessed EPA’s May 2002 Economic Analysis for the Final Revisions to the Oil Pollution Prevention Regulation (40 CFR Part 112), November 2005 Regulatory Analysis for the Proposed Revisions to the Oil Pollution Prevention Regulation (40 CFR Part 112), and November 2006 Regulatory Impact Analysis for the Final Revisions to the Oil Pollution Prevention Regulations (40 CFR Part 112). As criteria for evaluating the reasonableness of the economic analyses, we used guidelines for federal agencies in assessing regulatory impacts that the Office of Management and Budget (OMB) developed under Executive Order 12866, including its Economic Analysis of Federal Regulations Under Executive Order 12866; Guidelines to Standardize Measures of Costs and Benefits and the Format of Accounting Statements; and Circular A-4. We also reviewed the Unfunded Mandates Reform Act of 1995.
In addition, we discussed EPA’s analyses with senior officials in EPA’s Office of Emergency Management, Regulation, and Policy Development Division, which was responsible for conducting the analyses. We also spoke with officials representing major industry associations about their views on EPA’s economic analyses and discussed any analysis they may have prepared regarding the SPCC amendments. Furthermore, we reviewed other documents related to the rule changes. We also obtained stakeholders’ views on any impacts that they believe the SPCC amendments will have on either the regulated community or on the risk of oil spills by administering a survey to key industry associations and environmental groups, respectively, regarding 43 key SPCC amendments. A summary of responses to survey questions appears in appendix II, and our analysis of the results of the survey appears in appendix III. To administer our survey, we selected a nonprobability sample of 30 SPCC stakeholders, including 28 industry associations and two environmental groups. These organizations were either (1) members of EPA’s SPCC stakeholder group, which was involved with the agency in discussions and periodic meetings before the rule amendments were made final, or (2) national organizations that submitted comments to EPA regarding proposed SPCC rule changes more than once in 1991, 1993, 1997, or 2002. The vast majority of comments were received from associations and businesses representing the major industry sectors—such as oil and natural gas products, petroleum refining, transportation, manufacturing, electric utilities, and food and agriculture—most likely to be regulated under SPCC. Only a few environmental associations submitted comments. Results from this nonprobability sample cannot be used to make inferences about all industry or environmental associations because not all associations representing those affected by the SPCC rule had a chance of being selected as part of the sample.
Our questionnaire asked stakeholders what impact they believe will result from each of 43 major amendments to the SPCC rule. We selected these amendments by reviewing the major changes EPA made to the SPCC rule in 2002 and 2006. Our questionnaire provided summaries of each of these amendments, which, in most instances, were derived from EPA’s descriptions in the Federal Register. In some cases, we developed our summaries by reviewing the descriptions of the amendments in the rules, and reviewing comments on the amendments submitted to EPA by both industry and environmental groups. Of the 43 amendments selected, we included 29 amendments finalized in 2002 that EPA listed as major amendments in the Federal Register. In addition, we included six amendments from 2006 that EPA described in the Federal Register and several agency fact sheets as major amendments to the rule. The remaining eight amendments we included in our survey—six from 2002 and two from 2006—were frequently mentioned in industry comments that we reviewed. We asked respondents to assess the impact of each of these amendments on a five-point scale which ranged from “very negative impact” to “very positive impact.” We asked industry associations to assess the impact on their industry and environmental groups to assess the impact on the risk of oil spills. We also asked respondents to list the five amendments that would have the greatest positive impact and the five amendments that would have the greatest negative impact. However, we did not receive a sufficient number of responses to these questions and so did not include them in our analysis. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, respondents may have difficulty in interpreting a particular question or may lack information necessary to provide valid and reliable responses. 
In order to minimize these errors, we conducted pretests of the draft questionnaire with two industry associations by telephone. During these pretests, we checked whether (1) questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place undue burden on respondents, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. In addition, the survey was peer reviewed by a GAO senior survey methodologist. We made changes to the content and the format of the questionnaire after each of the pretests based on the feedback we received. In order to succinctly summarize responses to our survey, we performed a content analysis in which we grouped each of the 43 SPCC amendments into major categories. We first reviewed the summary of each of the amendments that we included in our questionnaire and inductively identified common groups. We then developed criteria to define which amendments would be included in each group. To ensure that this process was reliable, each amendment was independently categorized by three GAO analysts, and categorization decisions among the three analysts were compared. All initial disagreements regarding categorization decisions were discussed and reconciled by refining the criteria used to categorize the amendments. In a few cases, we were unable to determine the category into which to place an amendment based solely on the description of that amendment used in our survey. In these cases, we reviewed the complete description of the amendment in the Federal Register to determine the appropriate category. To see the exact wording of the final rule, please refer to the Federal Register. We categorized each of the 43 amendments along two dimensions. The first dimension relates to the actions that regulated facilities are required to take. 
The categories within this dimension that we identified during our content analysis include the following: (1) requirements to develop an SPCC plan or to notify officials of oil spills; (2) changes to the scope of those facilities to which the rule applies; (3) requirements for containers and piping used by SPCC facilities; (4) requirements to test or inspect containers, piping, and other equipment; (5) requirements regarding training of SPCC facility employees; and (6) amendments that fit into more than one of the above categories or did not fit into one of the above categories. The second dimension relates to whether the amendment increases or decreases requirements on facilities. We made this determination based on whether the amendment uses terms such as “adds new requirements” and “mandates,” which would be considered an increase in requirements, or terms such as “allows” or “exempts,” which would be considered a decrease in requirements. In some instances, we determined that an amendment does not imply either an increase or a decrease in requirements, or that an amendment included provisions that would both increase and decrease requirements. In these instances we categorized the amendment as having a “mixed” direction. In some instances we could not determine if the amendments increased or decreased requirements and, therefore, did not categorize the amendment along the second dimension. By categorizing each amendment in terms of both of these dimensions— the facility actions to which the amendment applies and whether the amendment increases or decreases requirements on facilities—we identified 11 total categories of amendments. For example, we developed a category for amendments that increased requirements on planning and notification and another category for amendments that decreased requirements on the scope. Some combinations of categories in these two dimensions contained no amendments. 
For example, we did not identify any amendments that decreased requirements on inspections and testing. For a detailed description of our coding rules and specific amendments that we placed in each of these categories, please see appendix III. We calculated a score to summarize the industry stakeholders’ views of the impact they believe each type of SPCC amendment will have on their industries. We collapsed the five-point response options in our survey by combining the “very positive impact” and “somewhat positive impact” categories into one and the “very negative impact” and “somewhat negative impact” categories into another, and we removed the “no answer/no basis to judge” responses. We then calculated the average of the responses from all of the industry associations to questions regarding all of the amendments within a particular category and developed a score, ranging from -1.0 (entirely negative impact), to 0.0 (no impact), to 1.0 (entirely positive impact), for each of the categories of amendments. An entirely positive impact would indicate that every industry stakeholder reported that every amendment of a given type would have a positive impact on their industry. Similarly, an entirely negative impact would indicate that every industry stakeholder reported that every amendment of a given type would have a negative impact on their industry. No impact would indicate that either (1) every industry stakeholder reported that every amendment of a given type would have no impact on their industry, or (2) an equal number of responses reported a positive impact as reported a negative impact for all amendments of a given type. Using these three anchor points, we considered scores between -1.0 and -0.5 to be mostly negative, scores between -0.5 and 0.0 to be somewhat negative, scores between 0.0 and 0.5 to be somewhat positive, and scores between 0.5 and 1.0 to be mostly positive. Computer analysis programs were independently verified by a senior statistician.
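The scoring approach described above can be expressed compactly as follows. The response values and the example response list are invented for illustration; the mapping of collapsed response categories to +1, 0, and -1, the exclusion of "no answer/no basis to judge" responses, and the interpretation bands follow the description above.

```python
# Sketch of the category impact score described above: positive responses
# count +1, negative responses -1, and "no impact" 0; "no answer" responses
# are dropped, and the remainder are averaged to a score in [-1.0, 1.0].
# The example responses below are invented for illustration.

VALUES = {"very positive": 1, "somewhat positive": 1,
          "no impact": 0,
          "somewhat negative": -1, "very negative": -1}

def impact_score(responses):
    scored = [VALUES[r] for r in responses if r in VALUES]  # drops "no answer"
    return sum(scored) / len(scored)

def label(score):
    """Interpretation bands anchored at -1.0, 0.0, and 1.0."""
    if score == 0.0:
        return "no impact"
    if score <= -0.5:
        return "mostly negative"
    if score < 0.0:
        return "somewhat negative"
    if score < 0.5:
        return "somewhat positive"
    return "mostly positive"

responses = ["very negative", "somewhat negative", "no impact",
             "somewhat positive", "no answer"]
s = impact_score(responses)
print(f"score = {s:+.2f} ({label(s)})")
```

In this invented example the four scored responses average to -0.25, which falls in the somewhat negative band.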
We also verified the accuracy of the underlying keypunched survey data by comparing them with the corresponding questionnaires and found that there were no errors. Our analysis is limited to the perceived impact of the amendments on industry. We did not receive sufficient responses from environmental groups to do a thorough analysis of the perceived impact of the amendments to the SPCC rule on protecting human health and the environment. We performed our work from June 2006 to July 2007 in accordance with generally accepted government auditing standards. The following tables present a summary of our survey of 23 stakeholders to obtain their views on the impacts that the amendments to the SPCC rule have had or are likely to have on the regulated community. These stakeholders included the major associations representing industry that had submitted comments to EPA on the proposed rule changes and that EPA had also identified as key stakeholders. We also followed up with officials from several industry associations to clarify some of their survey responses. What impact does your association believe each of the following 2006 amendments to the SPCC rule will have on your industry? (We asked survey recipients to check one box per amendment.) § 112.3 Requirement to prepare and implement a Spill Prevention, Control, and Countermeasure Plan § 112.3(a)(2), § 112.3(b)(2): delays the compliance dates for farms until the effective date of a rule establishing SPCC requirements specifically for farms or dates that farms must comply with the provisions of this part.
§ 112.6 Qualified Facility Plan Requirements § 112.7 General requirements for Spill Prevention, Control, and Countermeasure Plans § 112.8 Spill Prevention, Control, and Countermeasure Plan requirements for onshore facilities (excluding production facilities) § 112.12 Spill Prevention, Control, and Countermeasure Plan requirements § 112.8(c)(2), § 112.8(c)(11), § 112.12(c)(2), § 112.12(c)(11): provides an exception for mobile refuelers from the requirement to construct secondary containment. What impact does your association believe each of the following 2002 amendments to the SPCC rule will have on your industry? (We asked survey recipients to check one box per amendment.) § 112.1(b): adds “users” of oil as a group subject to the rule and expands the jurisdiction of the rule as amended in the Clean Water Act. § 112.1(d)(2)(i): does not count the capacity of completely buried tanks (defined in parts 280 or 281) or permanently closed tanks towards the threshold. § 112.1(d)(2)(ii): eliminates the aboveground storage capacity threshold of greater than 660 gallons for a single container but maintains the greater than 1,320-gallon threshold and establishes a “de minimis” container capacity size of 55 gallons or greater to calculate capacity. § 112.1(d)(4): requires completely buried storage tanks, otherwise exempt, to be included on the facility diagram. § 112.1(d)(5), (6): exempts containers that are less than 55 gallons; exempts facilities (or parts thereof) used exclusively for wastewater treatment unless it is used to meet part 112 requirements. § 112.1(f): gives the EPA Regional Administrators authority to require an SPCC plan for any facility within the jurisdiction in order to meet goals of the CWA. § 112.2: adds new definitions, such as for ‘facility’, and expands the definition of ‘oil’, ‘discharge’, ‘navigable waters’, ‘offshore facility’, and ‘United States’.
§ 112.3 Requirement to prepare and implement a Spill Prevention, Control, and Countermeasure Plan § 112.3(a),(b): requires facilities in operation to prepare or revise an SPCC Plan within six months and implement the plan within another six months; new facilities must prepare and implement an SPCC Plan before beginning operations. § 112.3(d): requires the professional engineer (PE) attestation to include that the PE considered applicable industry standards and certified that the Plan is in accordance with SPCC requirements; also allows an agent to examine a facility in place of the PE, but the PE must review the agent’s work, and certify the SPCC Plan. § 112.3(e): requires a copy of the SPCC Plan to be maintained at a facility attended for at least 4 hours a day instead of the current requirement of 8 hours. § 112.4 Amendment of Spill Prevention, Control, and Countermeasure Plan by Regional Administrator § 112.4(a): raises the threshold for reporting two discharges to greater than 42 U.S. gallons (1 barrel) per discharge, but reduces the amount of information to be submitted to the RA. § 112.4(b): does not require facilities to meet any requirements of this section (§ 112.4) until the new compliance deadlines for preparing an SPCC Plan (specified in § 112.3). § 112.4(c): changes the requirement from notification to the State agency in charge of water pollution control activities to notification to the State agency in charge of oil pollution control activities. § 112.4(d): provides that the RA may require a Plan amendment after an on-site review of the Plan. § 112.5 Amendment of Spill Prevention, Control, and Countermeasure Plan by owners or operators § 112.5(a), (b): requires that any amendment made under this section be prepared within six months and implemented in no more than six months from when the amendment was made.
§ 112.5(b): changes the period of review for SPCC Plans from 3 to 5 years, and requires documentation of completion of the review and evaluation. § 112.5(c): clarifies that a PE must certify only technical amendments, and not non-technical amendments (e.g., names, phone numbers). § 112.7 General requirements for Spill Prevention, Control, and Countermeasure Plans § 112.7: allows differing formats for the Plan; other formats must be cross-referenced to the listed SPCC requirements and include all applicable SPCC requirements. § 112.7(a)(3): requires a description and a diagram of the facility layout in the SPCC Plan. § 112.7(a)(4): requires facilities to provide additional information and procedures for reporting a discharge; facility response plan (FRP) facilities (defined in § 112.20) are exempt. § 112.7(a)(5): requires facilities to organize the Plan in a readily usable format for an emergency; facility response plan (FRP) facilities (defined in § 112.20) are exempt. § 112.7(c): requires a containment system to be capable of containing oil and constructed to prevent any discharge from escaping from the facility and reaching navigable waters and adjoining shorelines. § 112.7(d): adds new requirements for periodic integrity testing of containers, and periodic integrity and leak testing of valves and piping; exempts FRP facilities (as defined in § 112.20) from having a contingency plan. § 112.7(e): allows use of usual and customary business records to serve as a record of tests or inspections and records to be kept separate from the Plan; acknowledges the certifying engineer as having a role in developing inspection procedures. § 112.7(f): mandates training for oil-handling employees only, and specifies training topics; also requires discharge prevention briefings at least once a year.
§ 112.7(i): specifies a brittle fracture requirement for field-constructed containers undergoing repairs, alteration, reconstruction, or change in service that may affect the risk of discharge. § 112.8 Spill Prevention, Control, and Countermeasure Plan requirements for onshore facilities (excluding production facilities) § 112.8(c)(3), § 112.9(b)(1): allows National Pollutant Discharge Elimination System (NPDES) records to be used for SPCC purposes in lieu of events records specifically prepared for this purpose. § 112.8(c)(6): requires integrity testing on aboveground containers on a regular schedule, and when material repairs are done; testing can be recorded using usual and customary business records. § 112.8(d)(1): requires buried piping installed or replaced to have protective wrapping and coating and cathodic protection or otherwise satisfy the corrosion protection provisions for underground piping (40 CFR part 280 or 281). § 112.9 Spill Prevention, Control, and Countermeasure Plan requirements for onshore oil production facilities § 112.9(c)(2): clarifies that secondary containment must include sufficient freeboard to contain precipitation. § 112.11 Spill Prevention, Control, and Countermeasure Plan requirements for offshore oil drilling, production, or workover facilities § 112.11(i): requires offshore oil drilling, production or workover facilities to simulate discharges for testing and inspecting pollution control and countermeasure systems. Subpart C—Requirements for Animal Fats and Oils and Greases, and Fish and Marine Mammal Oils; and for Vegetable Oils, including Oils from Seeds, Nuts, Fruits, and Kernels § 112.12 - § 112.15: adds sections to apply to Animal Fats and Vegetable Oils based on the Edible Oil Regulatory Reform Act (EORRA) requirements. Requirements are identical to Subpart B for petroleum and non-petroleum oils. Our stakeholder survey also allowed respondents the opportunity to elaborate on their opinions of the SPCC amendments.
Table 3 below presents some illustrative examples of the open-ended comments that we received from 22 of the 23 industry survey respondents. The examples include respondents’ opinions on the SPCC amendments that they consider to have the most positive or negative impact on their industry sectors. These comments provide the current opinions of the industry associations we surveyed, but they do not necessarily represent the views of the regulated community as a whole. In addition, these comments do not represent the views of EPA or GAO. Our analysis of the results of our survey of 23 key industry stakeholders regarding 43 major SPCC amendments indicates that they generally view increases in SPCC requirements as having a negative impact on their industries and decreases as having a positive impact. However, their views on the extent of the anticipated impacts varied widely depending on the type of requirement. Overall, industry stakeholders responded that the 2006 amendments would have a positive impact on their industries and that the 2002 amendments would have a combination of both positive and negative impacts. We identified five categories of amendments that increase SPCC requirements. Of these five categories, we found that industry stakeholders view two as having a mostly negative impact on their industry, two as having a somewhat negative impact, and one as having a somewhat positive impact. In addition, we identified four categories of amendments that decrease SPCC requirements. Of these four types, we found that industry stakeholders view three as having a mostly positive impact on their industry and one as having a somewhat positive impact. Finally, we identified one category of amendments that both increase and decrease requirements and another category of amendments for which we could not determine whether the amendments either increase or decrease the requirements. 
We found that industry stakeholders view both of these categories as having a somewhat negative impact. We found that industry stakeholders anticipate a mostly negative impact from amendments that (1) increased requirements on testing, such as integrity testing of storage tanks; and (2) increased requirements on containment, such as secondary containment requirements. By contrast, these stakeholders anticipate a mostly positive impact from amendments that decrease requirements on containment, facility oil spill prevention plans or notification procedures, and what we categorize as multiple SPCC requirements. Finally, industry stakeholders indicated that six amendment categories will have a somewhat negative or somewhat positive impact on their industries compared with the other amendments. Figure 1 summarizes these views. We received responses to our survey from only one environmental stakeholder and, therefore, we were unable to comprehensively analyze the views of environmental groups. The following is a detailed description of the coding rules used and the 11 categories into which we placed the 2002 and 2006 SPCC amendments. We summarize the major rule amendments finalized in 2002 and 2006; to see the exact wording of the finalized rule, please refer to the regulation as published in the Federal Register. We determined whether the amendment increases or decreases requirements on facilities based on whether the amendment uses terms such as “adds new requirements” and “mandates,” which would be considered an increase in requirements, or terms such as “allows” or “exempts,” which would be considered a decrease in requirements. In some instances, we determined that an amendment does not imply either an increase or a decrease in requirements, or that an amendment included provisions that would both increase and decrease requirements. In addition, there were several instances where we could not determine if the amendment increased or decreased requirements. 
For example, several of these amendments made definitional changes to words used in the rule, but it was unclear from the text of the amendment whether these changes clarified the rule or increased or decreased requirements. In general, amendments in this category are changes to the criteria for eligibility or changes to thresholds for oil storage. These amendments affect either the number of facilities subject to the SPCC rule or the number of oil tanks at a given facility subject to the SPCC rule. In particular, the written description of the amendment in our survey should include words such as increases, adds, eliminates, or exempts. We identified one of the 43 amendments as expanding the scope of the SPCC rule, and six as decreasing the scope of the SPCC rule. 2002 amendment that we categorized as expanding the scope of the rule: 112.1(f): gives the EPA Regional Administrators authority to require an SPCC plan for any facility within the region, otherwise exempt from the rule, in order to carry out the purposes of the Clean Water Act. 2002 amendments that we categorized as decreasing the scope of the rule: 112.1(d)(2)(i): excludes the capacity of completely buried tanks subject to all of the technical requirements of the underground storage tank regulations from calculation of the threshold, and states that permanently closed tanks also do not count in the calculation. 112.1(d)(2)(ii): eliminates the aboveground storage capacity threshold of greater than 660 gallons for a single container, but maintains the greater than 1,320-gallon threshold and establishes a “de minimis” container capacity size of 55 gallons or greater to calculate capacity. 112.1(d)(4): exempts completely buried storage tanks that are subject to all of the technical requirements of the underground storage tank regulations from the rule requirements, but requires those tanks to be included on the facility diagram. 
112.1(d)(5), (6): exempts containers that are less than 55 gallons; and facilities (or parts thereof) used exclusively for wastewater treatment, unless the facility is used to meet part 112 requirements. 2006 amendments that we categorized as decreasing the scope of the rule: 112.1(d)(2)(ii), § 112.1(d)(7): excludes “motive power containers” (defined in § 112.2) from the rule, but does not exclude the transfer of fuel or other oil into a motive power container at an otherwise regulated facility. 112.3(a)(2), § 112.3(b)(2): delays the compliance dates for farms until the effective date of a rule establishing SPCC requirements specifically for farms or the dates by which farms must comply with the provisions of this part. In general, this category refers to requirements to prepare, implement, amend, or certify SPCC plans or other records or documents required of regulated facilities. The description of the amendment includes references to plans, records, diagrams, or any other documents that facilities are required to have under the SPCC rule. We identified 17 amendments from 2002 and 1 amendment from 2006 that fit this category. Of the 17 amendments from 2002, we categorized 5 as increasing requirements on facility oil spill prevention plans or oil spill notification procedures, 9 as decreasing requirements, and 3 as either both increasing and decreasing requirements or neither increasing nor decreasing requirements. The one amendment from 2006 decreased requirements. 2002 amendments that we categorize as increasing planning or notification requirements: 112.3(e): requires a copy of the SPCC plan to be maintained at a facility attended for at least 4 hours a day instead of the previous requirement of 8 hours. 112.4(d): provides that the EPA Regional Administrator may require an amendment to the SPCC plan after an on-site review of the plan. 112.7(a)(3): requires a description and a diagram of the facility layout in the SPCC plan. 
112.7(a)(4): requires facilities to provide additional information and procedures in the SPCC plan for reporting a discharge; facility response plan (FRP) facilities (defined in § 112.20) are exempt. 112.7(a)(5): requires facilities to organize the SPCC plan in a readily usable format for an emergency; FRP facilities (defined in § 112.20) are exempt. 2002 amendments that we categorize as decreasing planning or notification requirements: 112.3(f): allows the EPA Regional Administrator to grant an extension of time for amendments of the SPCC plan, as well as for the entire SPCC plan. 112.4(a): raises the threshold for reporting under the program to two discharges of greater than 42 U.S. gallons (1 barrel) per discharge in any 12-month period, and reduces the amount of information to be submitted to the EPA Regional Administrator. 112.4(b): does not require new facilities to meet any requirements of this section (§ 112.4) until the compliance dates for the initial preparation and implementation of an SPCC plan. 112.5(a): requires that any amendment made under this section be prepared within six months and implemented no more than six months after the amendment was prepared. 112.5(b): changes the period of review for SPCC plans from 3 to 5 years, and requires documentation of completion of the review and evaluation. 112.5(c): states that a professional engineer (PE) must certify only technical amendments, and not non-technical amendments (e.g., names, phone numbers). 112.7: allows differing formats for the SPCC plan; other formats must be cross-referenced to the listed SPCC requirements and include all applicable SPCC requirements. 112.7(e): allows use of usual and customary business records to serve as a record of tests or inspections and allows records to be kept separate from the SPCC plan; acknowledges the certifying engineer as having a role in developing inspection procedures. 
112.8(c)(3), § 112.9(b)(1): allows National Pollutant Discharge Elimination System (NPDES) records to be used for SPCC purposes in lieu of records specifically prepared for this purpose. 2006 amendments that we categorize as decreasing planning or notification requirements: 112.6: allows “qualified facilities” (defined in § 112.3(g)) to self-certify SPCC plans and provides applicable requirements for self-certification. 2002 amendments that we categorize as both increasing and decreasing the planning or notification requirements, or as neither increasing nor decreasing the requirements: 112.3(a),(b): requires facilities in operation to prepare or revise an SPCC plan within 6 months and implement the plan within one year; new facilities must prepare and implement an SPCC plan before beginning operations. 112.3(d): requires PEs to attest that they considered applicable industry standards and that the SPCC plan is in accordance with SPCC requirements; also allows an agent to examine a facility in place of the PE, but the PE must review the agent’s work and certify the SPCC plan. 112.4(c): changes the requirement from notification to the state agency in charge of water pollution control activities to notification to the state agency in charge of oil pollution control activities. In general, this category refers to requirements for containers or piping used by SPCC facilities. In particular, to be included in this category, the amendment in our survey should use one or more of the following terms: container, containment, secondary containment, piping, or tanks. We identified one amendment from 2002 that increased requirements for containers or piping used by SPCC facilities and two amendments from 2006 that decreased the requirements. 
2002 amendment that we categorized as increasing containment requirements: 112.8(d)(1): requires all buried piping installed or replaced on or after August 16, 2002, to have protective wrapping and coating and cathodic protection or otherwise satisfy the corrosion protection provisions for underground piping (40 C.F.R. pts. 280 or 281). 2006 amendments that we categorized as decreasing containment requirements: 112.7(k): allows owners/operators of qualified oil-filled operational equipment (defined in § 112.7(k)(1)) to meet alternate requirements (defined in § 112.7(k)(2)) in lieu of the general secondary containment requirements. 112.8(c)(2), § 112.8(c)(11), § 112.12(c)(2), § 112.12(c)(11): provides an exception for mobile refuelers from constructing and meeting certain secondary containment requirements. In general, this category refers to requirements to evaluate, inspect, and test containers, piping, or equipment to prevent oil spills. In particular, the written description of the amendment in our survey should include one or more of the following terms: test, integrity test, or inspect. We identified five amendments from 2002 that fit this category. All five of these amendments were categorized as increasing SPCC requirements. 2002 amendments that we categorized as increasing testing requirements: 112.7(d): adds new requirements for periodic integrity testing of containers, and periodic integrity and leak testing of valves and piping when secondary containment is impracticable; exempts FRP facilities (as defined in § 112.20) from having a contingency plan when secondary containment is impracticable. 112.7(i): specifies a brittle fracture evaluation requirement for field-constructed containers undergoing repairs, alteration, reconstruction, or change in service that may affect the risk of discharge. 
112.8(c)(6): requires integrity testing on aboveground containers on a regular schedule (as opposed to periodically), and when material repairs are done; testing can be recorded using usual and customary business records. 112.8(d)(4): requires integrity and leak testing of buried piping at the time of installation, construction, relocation, or replacement. 112.11(i): requires offshore oil drilling, production, or workover facilities to simulate discharges for testing and inspecting pollution control and countermeasure systems. This category refers to training of employees that facilities are required to undertake. Amendments placed into this category must include the key word “training.” We identified one amendment—from 2002—that fits this category. We categorized it as increasing requirements. 2002 amendment that we categorized as increasing requirements: 112.7(f): mandates training for oil-handling employees only, and specifies additional training topics; also requires discharge prevention briefings at least once a year. Amendments in this category either (1) do not fit into one of the above categories or (2) fit into more than one of the above categories. Two amendments—one from 2002 and one from 2006—were categorized as decreasing requirements. In addition, for seven amendments in this category, we could not determine whether the amendments increased or decreased requirements. 2002 amendment that we categorized as decreasing requirements: 112.7(a)(2): allows deviations from most of the rule’s substantive requirements (except secondary containment), provided that the reasons for nonconformance are explained and equivalent environmental protection is provided. 2006 amendment that we categorized as decreasing requirements: 112.3(g): defines a qualified facility eligible to self-certify under the provisions set forth in § 112.6. 
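The keyword-based coding rule described earlier (amendments using terms such as “mandates” coded as increases, and terms such as “allows” or “exempts” coded as decreases) can be sketched as a simple first-pass classifier. This is an illustrative sketch, not GAO’s actual procedure; the keyword lists and function name are assumptions chosen for the example.

```python
# Illustrative first-pass coding of amendment descriptions by keyword.
# These keyword lists are assumptions for illustration only; the actual
# GAO coding involved analyst judgment beyond keyword matching.
INCREASE_TERMS = ("adds new requirements", "mandates", "requires", "expands")
DECREASE_TERMS = ("allows", "exempts", "excludes", "eliminates",
                  "raises the threshold")

def code_amendment(description: str) -> str:
    """Return a provisional code: 'increase', 'decrease', 'both', or 'undetermined'."""
    text = description.lower()
    inc = any(term in text for term in INCREASE_TERMS)
    dec = any(term in text for term in DECREASE_TERMS)
    if inc and dec:
        return "both"
    if inc:
        return "increase"
    if dec:
        return "decrease"
    return "undetermined"
```

For example, a description reading “exempts containers that are less than 55 gallons” would be provisionally coded as a decrease, while purely definitional changes containing none of the listed terms would fall out as undetermined, mirroring the instances noted above where no direction could be assigned.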
2002 amendments for which we could not determine whether they increased or decreased requirements: 112.1(b): adds “using” to the list of activities at facilities subject to the rule and expands the scope of the rule to conform to the expanded jurisdiction in the Clean Water Act. 112.2: adds new definitions, such as for “facility” and “discharge”; revises the text of the definitions of “oil” and “navigable waters”; and includes statutory definitions for “offshore facility” and “United States” in the rule. 112.7(c): states that a containment system must be capable of containing oil and constructed to prevent any discharge from escaping from the facility before cleanup occurs. 112.9(c)(2): states that secondary containment must include sufficient freeboard to contain precipitation. 112.12 - § 112.15: adds sections to differentiate requirements for Animal Fats and Vegetable Oils based on the Edible Oil Regulatory Reform Act (EORRA) requirements. The requirements are identical to those in Subpart B for petroleum and non-petroleum oils. 2006 amendments for which we could not determine whether they increased or decreased requirements: 112.2: adds several definitions, including airport mobile refueler, farm, motive power container, and oil-filled operational equipment. 112.13 - § 112.15: removes these sections because they are not appropriate for facilities that process, store, use, or transport animal fats and/or vegetable oils. In addition to the individual named above, Vincent P. Price, Assistant Director; Kevin Bray; Mark Braza; Greg Carroll; Jennifer DuBord; Timothy J. Guinane; Jennifer Huynh; Lisa Mirel; and Carol Herrnstadt Shulman made key contributions to this report.
Oil in aboveground tanks can leak into soil and nearby water, threatening human health and wildlife. To prevent certain oil spills, the Environmental Protection Agency (EPA) issued the Spill Prevention, Control, and Countermeasure (SPCC) rule in 1973. EPA estimated that, in 2005, about 571,000 facilities were regulated under this rule. When finalizing amendments to the rule in 2002 and 2006 to both strengthen the rule and reduce industry burden, EPA analyzed the amendments' potential impacts and concluded that the amendments were economically justified. As requested, GAO assessed the reasonableness of EPA's economic analyses of the 2002 and 2006 SPCC amendments, using Office of Management and Budget (OMB) guidelines for federal agencies in determining regulatory impacts, among other criteria, and discussed EPA's analyses with EPA officials. EPA's economic analysis of the 2002 SPCC amendments had several limitations that reduced its usefulness for assessing the amendments' benefits and costs. In particular, EPA did not include in its analysis a number of the elements recommended by OMB guidelines for assessing regulatory impacts. For example, EPA did not assess the uncertainty of key assumptions and data. In the analysis, EPA assumed that certain facilities were already complying with at least some of the rule's provisions and, as a result, they would not incur any additional compliance costs because of the amendments. However, the extent of facility compliance with the rule was highly uncertain. EPA did not analyze the effects of alternative rates of industry compliance on the estimated costs and benefits of the revised rule and, therefore, potentially misstated these amounts. 
Furthermore, EPA's 2002 analysis was limited in that it (1) did not analyze alternatives to the amendments, such as alternative lead times for industry to comply or alternative levels of stringency; (2) did not present the compliance costs that EPA expects facilities to incur or save in the second and subsequent years under the amendments in comparable present value terms (through discounting); and (3) provided only limited general information on the amendments' potential benefits in reducing the risk of an oil spill and its potential effects on human health and the environment. EPA's economic analysis of the 2006 amendments addressed several of the limitations of its 2002 analysis, but it also had some limitations that made it less useful than it could have been for assessing the amendments' costs and benefits. For example, EPA's 2006 analysis assessed the potential effect of industry noncompliance on the estimated costs (or cost savings) and estimated the present value of costs (or cost savings) associated with different alternatives for burden reduction. Nevertheless, as with the 2002 analysis, EPA did not estimate the potential benefits of the 2006 amendments, such as the extent to which they would affect the risk of an oil spill and public health and welfare and the environment. In addition, EPA did not have available nationally representative samples for its analysis; therefore, its estimates of the number of facilities that would be affected by the 2006 amendments may not be accurate. In particular, for one category of facilities, EPA based its estimates of the number of facilities on data available from eight states. Because facilities in these states may not have been representative of facilities nationwide, EPA's use of these data in its analysis could have introduced bias into its estimates of the number of facilities and costs for this amendment. 
EPA acknowledged that its analysis of the 2006 amendments was not a full accounting of all social benefits and costs but stated that the results were based on the best available information given time and resource constraints.
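The present-value comparison that OMB guidelines call for (and that GAO found missing from EPA’s 2002 analysis) amounts to discounting compliance costs or savings incurred in the second and subsequent years so they are comparable to first-year amounts. A minimal sketch follows; the 7 percent rate is one of the rates OMB guidance commonly recommends, and the cost figures are made up for illustration.

```python
# Minimal sketch of present-value discounting of a multi-year cost stream.
# The 7% rate and the annual cost figures are illustrative assumptions.
def present_value(annual_amounts, rate=0.07):
    """Discount a list of annual amounts (year 1 first) to present value."""
    return sum(amount / (1 + rate) ** year
               for year, amount in enumerate(annual_amounts, start=1))

# A facility facing $100 of compliance costs per year for 3 years:
# the undiscounted total is $300, but the present value is lower,
# which is why comparing undiscounted year-by-year figures can mislead.
pv = present_value([100.0, 100.0, 100.0])
```

The point of the exercise is that costs (or savings) arriving in later years weigh less than the same nominal amounts in year one, so an analysis that reports undiscounted out-year figures overstates their economic significance.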
To determine the nature and purpose of TARP activities from March 27, 2009, through June 12, 2009, unless noted otherwise, and the status of actions taken in response to our recommendations from our March 2009 report, we reviewed documents from OFS that described the amounts, types, and terms of Treasury’s purchases of senior preferred stocks, subordinated debt, and warrants under the Capital Purchase Program (CPP). We also reviewed documentation and interviewed officials from OFS who were responsible for approving financial institutions to participate in CPP and overseeing the repurchase process for CPP preferred stock and warrants. Additionally, we contacted officials from the four federal banking regulators—the Federal Deposit Insurance Corporation (FDIC), the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Federal Reserve), and the Office of Thrift Supervision (OTS)—to obtain information on their process for reviewing CPP applications, the status of pending applications, their process for reviewing preferred stock and warrant repurchase requests, and their examination process for reviewing recipients’ lending activities and compliance with TARP requirements. To update the status of the Targeted Investment Program (TIP), the Systemically Significant Failing Institutions Program (SSFI), and the Automotive Industry Financing Program (AIFP), we reviewed relevant documents and interviewed OFS officials about these programs. We also met with Federal Reserve officials to discuss the stress test methodology and results for the 19 largest U.S. bank holding companies and reviewed related documents relevant to the Capital Assistance Program (CAP). 
To provide an update on the Federal Reserve’s Term Asset-Backed Securities Loan Facility (TALF) and its efforts related to small business securitizations—and in consideration of GAO’s statutory limitations on auditing certain functions of the Federal Reserve—we reviewed publicly available information on the Web sites of the Federal Reserve and the Federal Reserve Bank of New York that had been made available since our March 2009 report. We also interviewed officials in OFS for updates to TALF. For updates to Public Private Investment Program (PPIP) and small business efforts related to its Consumer and Business Lending Initiative, we reviewed agency documentation and interviewed Treasury and FDIC officials. For updates on the Small Business Administration (SBA) efforts related to improving credit and securitization markets for small businesses, we relied on previously issued GAO work. To determine Treasury’s progress in developing an overall communications strategy for TARP, we assessed Treasury’s activities based on GAO reports on effective communications. We also accessed www.financialstability.gov—Treasury’s new Web site for communication of TARP-related strategies—through June 4, 2009. Further, we interviewed officials from OFS and Treasury’s Office of Public Affairs to determine what steps Treasury had taken to coordinate communications with the public and Congress. To determine the status of OFS’s efforts to hire staff to administer TARP duties, we reviewed OFS’s organizational chart, documents on staff composition and workforce planning, Treasury’s most recent budget proposal submission to the Office of Management and Budget (OMB), and OFS vacancy announcements posted on www.financialstability.gov and www.USAjobs.gov from March 31, 2009, to June 8, 2009. We also reviewed our prior work on human capital flexibilities and strategic workforce planning to assess OFS’s performance in these areas. 
In addition, we met with a variety of Treasury and OFS officials to discuss the staffing levels of OFS offices including vacancies, their processes for recruiting employees with the skill sets and competencies needed to administer TARP, steps taken to find permanent replacements to fill key leadership positions, and the extent of pay comparability challenges. We also met with officials from the Office of Personnel Management to discuss their coordination with Treasury in establishing hiring flexibilities and other tools to staff OFS. To assess OFS’s process for vetting employees’ potential conflicts of interest, we reviewed information from Treasury’s databases used to track submission and reviews of Treasury employees’ confidential and public financial disclosure reports. Specifically, we reviewed information in the databases for 64 OFS employees hired as of April 23, 2009. Of these, 56 were permanent employees required to submit confidential financial disclosure reports and 8 were senior-level officials required to submit public disclosure reports. In order to determine the reliability of the information provided in the databases, we interviewed Treasury officials and performed basic tests on the data. We determined that the information provided for these 64 employees was sufficiently reliable for our purposes. We also reviewed standard operating procedures that Treasury developed to manage the submissions and reviews of its employees’ financial disclosure reports and new internal operating procedures developed specifically for reviewing OFS employees’ confidential financial disclosure reports. In coordination with GAO experts on federal ethics laws and regulations, we reviewed information provided by 15 senior-level OFS officials in public financial disclosure reports and identified any potential conflicts meriting additional discussion with Treasury ethics counsel. 
In addition, we met with Treasury and OFS officials to discuss their reviews of financial disclosure reports and the training provided to OFS staff on the laws and regulations pertaining to ethical conduct in the federal workplace, including those related to conflicts of interest. We met with officials from the Office of Government Ethics (OGE) to discuss pertinent ethics regulations that applied to Treasury and reviewed their guidance on ethical standards of conduct for employees. We also reviewed reports published by Treasury’s Office of the Inspector General describing conflict-of-interest incidents and their resolution. To assess OFS’s use of contractors and financial agents to support TARP administration and operations for the period of March 14 through June 1, 2009, we reviewed information from Treasury for (1) new financial agency agreements, contracts, blanket purchase agreements, and interagency agreements; and (2) task orders, modifications, and amendments involving ongoing contracts and agreements. We analyzed this information, in part, to identify small or minority- and women-owned prime contractors and subcontractors providing TARP services and supplies. To report OFS expenses for contracts and agreements, we obtained information from the OFS Chief Financial Officer. To identify the extent to which federal banking regulators use contractors to support their TARP activities, we obtained information from FDIC, Federal Reserve, OCC, and OTS. To assess the status of OFS progress in developing a final TARP conflicts-of-interest rule and responding to our prior recommendations to (1) complete reviews of vendor conflicts-of-interest mitigation plans to conform with the interim rule and (2) issue guidance requiring that key communications and decisions be documented, we interviewed officials from Treasury and reviewed applicable documents. 
To assess the status of internal controls related to TARP activities and the status of TARP’s consideration of accounting and reporting topics, we reviewed documents provided by OFS and conducted interviews and made inquiries with officials from OFS, including the Chief Financial Officer, Deputy Chief Financial Officer, Deputy Chief Risk Officer, Cash Management Officer, Director of Internal Controls, and their representatives. To evaluate selected internal control activities related to the CPP, AIFP, and SSFI programs, we designed tests using OFS’s process flows, narratives, risk matrices, and high-level operational procedures. As part of our ongoing work, we completed the following additional activities: For CPP, we tested certain internal control activities related to dividend payments received through June 12, 2009, from institutions included in our previous sample of 45 unique preferred stock purchase transactions for the four months ended January 31, 2009. To make that selection, we used a monetary unit sampling (probability proportionate to size) methodology. We also tested dividends received through June 12, 2009, for TIP, Asset Guarantee Program (AGP), and AIFP. For SSFI, we tested selected control activities, including approvals, reviews, and closing documentation, for the American International Group Inc. (AIG) restructuring. The documentation that we reviewed included an exchange agreement and purchase agreement executed on April 17, 2009. For AIFP, we tested controls over the (1) authorization and execution of the initial General Motors Corporation (GM) and Chrysler LLC (Chrysler) agreements (executed on December 31, 2008, and January 2, 2009, respectively), (2) funding process, (3) receipt of promissory notes and securities, (4) disbursements made by Treasury under the agreements, and (5) receipts of interest and principal. 
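The monetary unit sampling (probability proportionate to size) approach used to select the CPP transactions above gives larger-dollar transactions a proportionally larger chance of selection. The sketch below is a simplification with hypothetical transaction amounts; real monetary unit sampling selects individual dollars from the cumulative dollar population, which weighted random selection approximates.

```python
import random

# Simplified sketch of probability-proportionate-to-size selection:
# each transaction's chance of being picked is proportional to its
# dollar amount. Transaction IDs and amounts are hypothetical.
def pps_sample(transactions, n, seed=0):
    """Select n transaction IDs with probability proportional to amount.

    transactions: dict mapping transaction id -> dollar amount.
    Selection here is with replacement for simplicity; actual monetary
    unit sampling works through the cumulative dollar population.
    """
    rng = random.Random(seed)
    ids = list(transactions)
    weights = [transactions[i] for i in ids]
    return rng.choices(ids, weights=weights, k=n)
```

The design choice matters for audit work: because selection probability tracks dollar value, the highest-value items (where a misstatement would matter most) are the most likely to land in the sample.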
In addition, we verified that the loan amounts disbursed to and interest received from GM and Chrysler were consistent with the terms of the agreements. Finally, in our initial report under the mandate, we identified a preliminary set of indicators on the state of credit and financial markets that might be suggestive of the performance and effectiveness of TARP. We consulted Treasury officials and other experts and analyzed available data sources and the academic literature. We selected a set of preliminary indicators that offered perspectives on different facets of credit and financial markets, including perceptions of risk, cost of credit, and flows of credit to businesses and consumers. We assessed the reliability of the data upon which the indicators were based and found that, despite certain limitations, they were sufficiently reliable for our purposes. To update the indicators in this report, we primarily used data from Thomson Datastream—a financial statistics database. As these data are widely used, we conducted only a limited review of the data but ensured that the trends we found were consistent with other research. We also relied on data from Inside Mortgage Finance, Treasury, the Federal Reserve, the Chicago Board Options Exchange, and Global Insight. We have relied on data from these sources for past reports and determined that, considered together, these auxiliary data were sufficiently reliable for the purpose of presenting and analyzing trends in financial markets. The data from Treasury’s survey of lending to the top 21 CPP recipients (as of March 31, 2009) are based on internal reporting from participating institutions, and the definitions of loan categories may vary across banks. Because the data are unique, we are not able to benchmark the origination levels against historical lending or seasonal patterns at these institutions. 
Based on discussions with Treasury and our review of the data, we found that the data were sufficiently reliable for the purpose of documenting trends in lending. The survey data will prove valuable for more thorough analyses of lending activity in future reports. We also conducted an econometric analysis to assess the impact of CPP on the TED spread. Although we used a standard and widely used methodology, the model results should be interpreted with caution because we did not attempt to capture all potential factors that might explain movements in the TED spread. Moreover, in spite of the empirical evidence, we cannot link improvements in the TED spread exclusively to CPP (see app. III for more detail). We conducted this performance audit from April 2009 through June 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since its creation, OFS has implemented numerous programs and initiatives to carry out TARP. According to Treasury, the purpose of each program is as follows: CPP was created in October 2008 to stabilize the financial system by providing capital to viable banks through the purchase of preferred shares and subordinated debentures. In return for its investment, the Treasury will receive dividend payments and warrants. TIP was created in January 2009 to foster market stability and thereby strengthen the economy by making case-by-case investments in institutions that Treasury deems are critical to the functioning of the financial system. 
AGP was created in November 2008 to provide government assurances for assets held by financial institutions that are viewed as critical to the functioning of the nation’s financial system. SSFI was created in November 2008 to provide stability in financial markets and avoid disruptions to the markets from the failure of a systemically significant institution. Treasury determines participation in this program on a case-by-case basis. AIFP was created in December 2008 to prevent a significant disruption of the American automotive industry. Treasury has determined that such a disruption would pose a systemic risk to financial market stability and have a negative effect on the U.S. economy. The program requires participating institutions to implement plans that will achieve long-term viability. Auto Supplier Support Program was created in March 2009 to help stabilize the auto supply base, which designs and builds the components for cars and trucks. Making Home Affordable Program was created in March 2009 to offer assistance to as many as 7 to 9 million homeowners. The program aims to prevent the destructive impact of the housing crisis on families and communities. According to Treasury, it will not provide money to speculators, but will target support to the working homeowners who have made every possible effort to stay current on their mortgage payments. Consumer and Business Lending Initiative, created in March 2009, is an initiative under the Financial Stability Plan that includes the Federal Reserve-run TALF. This initiative is intended to support consumer and business credit markets by providing financing to private investors to issue new securitizations, to help unfreeze credit markets and lower interest rates for auto, student, and small business loans; credit cards; commercial mortgages; and other consumer and business credit. 
Subsequently, it subsumed the Small Business and Community Lending Initiative, which was also created in March 2009 to increase credit available to local businesses by reducing fees and increasing guarantees for SBA loans and having Treasury purchase securities backed by SBA loans. CAP was created in February 2009 to restore confidence throughout the financial system that the nation’s largest banking institutions have sufficient capital to cushion themselves against larger-than-expected future losses, and to support lending to creditworthy borrowers. PPIP was established in March 2009 to address the challenge of “legacy assets” as part of Treasury’s efforts to repair balance sheets throughout the financial system and increase the availability of credit to households and businesses. In conjunction with the FDIC, Treasury established the Legacy Loans Program component of PPIP. Since our March 2009 report, a number of major TARP-related events have occurred (see fig. 1). As of June 12, 2009, Treasury projected that it had used $643.1 billion of its almost $700 billion limit for TARP. Highlights of the transactions and activities under the various programs include the following: CPP continues to be one of OFS’s most active programs, with OFS continuing to deploy funds and other participants beginning to repay investments. While OFS has hired asset managers, it has yet to clearly identify what role the asset managers will have in monitoring compliance. The Federal Reserve announced the results of the stress test under CAP, for which Treasury extended the deadline for applications through November 9, 2009. As of June 8, 2009, no applications had been submitted. The Federal Reserve announced a number of modifications to TALF and has completed a number of fundings since March 2009. OFS and FDIC took additional steps to implement the PPIP’s Legacy Loans Program, but postponed a previously planned pilot sale of assets by open banks. 
- Treasury, in conjunction with the Federal Reserve and SBA, has also announced additional efforts to provide more accessible and affordable credit to small businesses.
- Citigroup, Inc. (Citigroup) expanded its request to convert preferred securities and trust preferred securities for common stock from $27.5 billion to $33 billion and finalized the exchange agreement on June 9, 2009, but the conversion had not been completed as of June 12, 2009.
- OFS finalized a $30 billion equity facility with AIG under SSFI and restructured AIG’s existing preferred stock from cumulative to noncumulative shares but did not require additional concessions from AIG counterparties.
- OFS provided an additional $44 billion in assistance to Chrysler and GM under AIFP.
- Finally, consistent with our recommendations, Treasury has continued to take steps to develop an integrated communication strategy for TARP, but we continue to identify areas that warrant ongoing attention and consideration.

As of June 12, 2009, Treasury had disbursed about $330 billion in TARP funds, approximately $200 billion of it for CPP (table 1). Officers and employees of Treasury may not obligate or expend appropriated funds in excess of the amount apportioned by OMB on behalf of the President. Treasury stated that as of June 12, 2009, OMB had apportioned about $479.2 billion of the funding levels announced for TARP. Given this information, it appears that Treasury has not exceeded the troubled asset purchase limit or obligated funds in excess of those OMB has apportioned. We are continuing to obtain additional information from Treasury and review the controls that Treasury has in place to help ensure compliance with the funding restrictions. In addition, beginning in April 2009, the budgetary costs of TARP asset purchases, loans, and loan guarantees since the inception of the program represent the net present value of estimated cash flows to and from the government, excluding administrative costs.
OFS is continuing to develop and enhance its methodology and documentation surrounding estimated cash flows. We will review TARP’s estimated cash flows and resulting program costs as part of our ongoing work. From TARP’s inception through June 12, 2009, Treasury had received approximately $6.2 billion in dividend payments on shares of preferred stock acquired through CPP, TIP, AIFP, and AGP (table 2). Treasury’s agreements under these programs entitled it to receive dividend payments on varying terms and at varying rates. The dividend payments to Treasury are contingent on each institution declaring dividends. From March 21, 2009, through June 12, 2009, 17 CPP participants had not declared or paid dividends of approximately $6.6 million. Specifically, 7 institutions did not declare and pay their cumulative dividends of approximately $6 million, and 10 institutions did not declare and pay their noncumulative dividends of approximately $666,000. OFS said it received notification from the 17 institutions that they did not intend to declare or pay their May 15, 2009, quarterly dividends. According to OFS officials, of the 17 institutions, 13 informed Treasury that state or federal banking regulations or policies restricted them from declaring dividends, 1 indicated concern about its profitability, and 3 did not provide an explanation as to why they did not declare dividends. According to the standard terms of CPP, after six nonpayments by a CPP institution—whether or not consecutive—Treasury and other holders of preferred securities equivalent to Treasury’s can exercise their right to appoint two members to the board of directors for that institution at the institution’s first annual meeting of stockholders subsequent to the sixth nonpayment. Five of these participants were also among the original eight participants that did not declare or pay approximately $150,000 in noncumulative dividends, as reported in our March 2009 report.
Two of the eight paid their most recent dividend payments for the May 15, 2009, quarterly dividend payment date. The other participant subsequently declared and paid the approximately $14,000 in noncumulative dividends previously not paid and its most recent May 15, 2009, quarterly dividend. Treasury has continued to use CPP as a primary vehicle under TARP as it attempts to stabilize financial markets. As of June 12, 2009, Treasury had disbursed about 92 percent of the $218 billion (revised from the original $250 billion) it had allocated for the purchase of almost $199.5 billion in preferred shares and subordinated debt from 623 qualified financial institutions (table 3). These purchases ranged from about $301,000 to $25 billion per institution. As of June 12, 2009, about $712 million in preferred stock shares and subordinated debt from 91 financial institutions had been purchased since our March 2009 report. As of June 12, 2009, a variety of types of institutions had received CPP capital investments under TARP, including 278 publicly held institutions, 307 privately held institutions, 22 S-corporations, 16 community development financial institutions (CDFIs), and no mutual institutions. These purchases represented investments in state-chartered and national banks and U.S. bank holding companies located in 48 states, the District of Columbia, and Puerto Rico. For a detailed listing of financial institutions that received CPP funds as of May 29, 2009, see GAO-09-707SP. Treasury and the federal regulators continued to review applications for CPP. According to Treasury, it had received over 1,300 CPP applications from the regulators as of June 12, 2009; of these, fewer than 100 were awaiting decision by the Investment Committee. For many applications in this category, Treasury is awaiting updated information from the regulators before taking the application to the Investment Committee for a vote.
The bank regulators also reported that they were reviewing applications from more than 220 institutions that had not yet been forwarded to Treasury. Qualified financial institutions generally have 30 calendar days after Treasury notifies them of preliminary approval for CPP funding to submit investment agreements and related documentation. OFS officials stated that about 400 financial institutions that received preliminary approval had withdrawn their CPP applications as of June 12, 2009. Many of these institutions withdrew their applications because of the uncertainty surrounding future program requirements. Some financial institutions have continued to raise concerns about the length of time it is taking the bank regulators and Treasury to process their CPP applications. Bank regulatory officials noted that many factors could affect the time it took to process a particular bank’s CPP application. For example, the necessary term sheet for a particular ownership structure might not have been available when the bank filed its application, leaving the application unable to be processed; the bank regulators’ interagency CPP Council needed to review the application; regulators needed to perform on-site visitations or conduct new bank examinations if the existing examination was dated; regulators needed to consider enforcement actions; or regulators had to request additional information (e.g., related to credit quality) from the bank before processing its application. Data provided by the bank regulators showed that, as of May 15, 2009, the average processing time for CPP applications—from the date the regulator received the institution’s application to the date it was forwarded to Treasury—varied from 28 days to 57 days, depending on the regulator (table 4).
OFS officials noted that some of the reasons for delays in the final processing of CPP applications, once they had been received, were the need to obtain shareholder approval to issue preferred stock to Treasury, to obtain executive compensation certification waivers, or to schedule board of directors meetings. According to data provided by OFS, as of May 15, 2009, the average processing time from the receipt of a CPP application package from the regulators to preliminary funding approval was about 12 days, and from preliminary funding approval to disbursement of funds was about 34 days. We are verifying this information as part of our ongoing review of the CPP process. The Treasury Secretary announced in a May 13, 2009, speech that Treasury had taken additional actions under CPP to ensure that small community banks and holding companies (qualifying financial institutions with total assets of less than $500 million) would have the capital they needed to lend to creditworthy borrowers. Small banks now have until November 21, 2009, to apply to CPP under all term sheets. All current CPP participants that qualify as small banks under these new program terms will be allowed to reapply and note on their applications that they are making a supplemental request for CPP funding. These applications will be evaluated via an expedited approval process that Treasury is currently working with the four primary federal banking regulators to establish. New CPP participants will continue to have their applications processed under the original CPP application process. Treasury also increased the maximum amount of CPP funding a small financial institution may receive from the current 3 percent of risk-weighted assets to 5 percent of risk-weighted assets. The new deadline for small banks to apply to their regulator to form holding companies and apply for CPP funding is also November 21, 2009.
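The revised small-bank limit is a simple percentage-of-assets cap. The following sketch, using an invented balance sheet (the report specifies only the percentages), illustrates how the change from 3 percent to 5 percent of risk-weighted assets expands a small bank's maximum CPP investment:

```python
def max_cpp_investment(risk_weighted_assets, cap_pct):
    """Maximum CPP investment allowed as a share of risk-weighted assets."""
    return risk_weighted_assets * cap_pct

# Hypothetical small bank with $400 million in risk-weighted assets.
rwa = 400_000_000

old_cap = max_cpp_investment(rwa, 0.03)  # original 3 percent limit
new_cap = max_cpp_investment(rwa, 0.05)  # revised 5 percent limit

print(f"Original cap: ${old_cap:,.0f}")                  # $12,000,000
print(f"Revised cap:  ${new_cap:,.0f}")                  # $20,000,000
print(f"Supplemental room: ${new_cap - old_cap:,.0f}")   # $8,000,000
```

The difference between the two caps is the amount a current participant could seek through the supplemental request process described above.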
On April 7 and 14, 2009, Treasury issued standardized term sheets for four types of mutual institutions: mutual holding companies with publicly held subsidiary holding companies, mutual holding companies with privately held subsidiary holding companies, top-tier mutual holding companies without subsidiary holding companies, and mutual banks or savings associations not controlled by holding companies. The terms for the four types of mutual institutions are generally similar to those for the corresponding publicly held institutions, privately held institutions, and S-corporations, with some exceptions. The application deadline for mutual holding companies was May 7, 2009; for mutual banks or savings associations not controlled by holding companies, the deadline was May 14, 2009. Like the terms for publicly held institutions, those for publicly held mutual subsidiary holding companies stipulate that the preferred shares pay dividends at a rate of 5 percent annually for the first 5 years and 9 percent annually thereafter; the shares are nonvoting, except with respect to protecting investors’ rights; a warrant must be issued for common stock with an aggregate value equal to 15 percent of Treasury’s CPP investment; financial institutions may repurchase their shares at their face value; preferred stock will count as tier 1 regulatory capital; and Treasury generally may transfer the preferred shares to a third party at any time. In addition, the number of shares of common stock underlying the warrant held by Treasury will be reduced by 50 percent if the institution completes a qualified equity offering for 100 percent of the amount of the preferred stock during 2009. The terms for privately held subsidiary holding companies are generally similar, except for the warrant for preferred stock. For these companies, as for privately held institutions, warrants for preferred stock may have an aggregate value equal to 5 percent of Treasury’s CPP investment.
Treasury intends to immediately exercise such warrants for warrant preferred shares with a 9 percent dividend rate. The terms for top-tier mutual holding companies without subsidiary holding companies and mutual banks or savings associations without holding companies are similar to those for S-corporations. Those terms are generally similar to those for publicly held institutions, with the exception that debt—senior notes—is issued instead of preferred stock. In addition, the senior notes count as tier 1 capital when held at the holding company level and tier 2 capital when held by a mutual bank or savings association. The senior notes pay interest at a rate of 7.7 percent annually for 5 years and 13.8 percent thereafter, and warrants for additional debt must equal 5 percent of Treasury’s initial investment. Treasury exercises the warrants at the time of the initial capital investment. Holding companies may defer interest on the senior notes for up to 20 quarters, but any unpaid interest will accumulate and compound at the then-applicable interest rate. In addition, these companies cannot pay dividends on shares of equity, mutual capital certificates, other capital instruments, or trust preferred securities as long as any interest is deferred. Treasury has indicated that, while the term sheets for privately held mutual institutions allow institutions to reduce the warrants held by Treasury if they complete a qualified equity offering during 2009, this provision was included in the term sheets in error. In each case, Treasury intends to exercise the warrants immediately, so there is no need for the reduction provision. As permitted by the act—as amended by the American Recovery and Reinvestment Act of 2009 (ARRA)—and the CPP agreements, participants may repurchase, or buy back, their preferred stock and warrants issued to Treasury under CPP at any time, subject to consultation with the primary federal banking regulator.
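The stepped rates in these term sheets are mechanical, so a short sketch may be clearer than prose. The investment amount and deferral path below are invented for illustration, and the quarterly compounding of deferred interest is one plausible reading of the "accumulate and compound" language, not a term confirmed by the report:

```python
def annual_payment(principal, year, initial_rate, later_rate, step_after=5):
    """Stepped rate: initial_rate for the first `step_after` years,
    later_rate thereafter."""
    rate = initial_rate if year <= step_after else later_rate
    return principal * rate

investment = 10_000_000  # hypothetical $10 million Treasury investment

# Preferred stock: 5 percent annually for the first 5 years, then 9 percent.
pref_year3 = annual_payment(investment, 3, 0.05, 0.09)    # about 500,000
pref_year6 = annual_payment(investment, 6, 0.05, 0.09)    # about 900,000

# Senior notes: 7.7 percent annually for 5 years, then 13.8 percent.
note_year3 = annual_payment(investment, 3, 0.077, 0.138)  # about 770,000
note_year6 = annual_payment(investment, 6, 0.077, 0.138)  # about 1,380,000

# Deferred interest on senior notes accumulates and compounds at the
# then-applicable rate (read here as quarterly compounding in years 1-5).
deferred = 0.0
for _ in range(8):  # defer eight quarters at the 7.7 percent annual rate
    deferred = deferred * (1 + 0.077 / 4) + investment * 0.077 / 4
print(f"Deferred balance after 8 quarters: ${deferred:,.0f}")
```

The compounding loop shows why deferral is costly: the deferred balance exceeds the simple sum of the eight skipped quarterly payments.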
However, the regulators have yet to disclose to Treasury or the public a generally consistent set of criteria that they are using to make decisions concerning repayment, other than that they follow existing applicable supervisory procedures. According to Treasury officials, ARRA severely limits Treasury’s authority to decide whether banks may repurchase their stock. After all the preferred shares are repurchased, the financial institution may repurchase all or part of the warrants held by Treasury. Under the original terms of CPP, financial institutions were prohibited from repurchasing within the first 3 years unless they completed a qualified equity offering. ARRA amended this requirement by allowing institutions to repurchase their shares with the approval of their primary federal regulator. See appendix IV for a description of the repurchase process. While Treasury has some information about the preferred stock repurchase process on the www.financialstability.gov Web site, the federal financial regulators have yet to disclose the specific criteria for approving repurchases for certain TARP recipients. To help ensure consistency, agencies are expected to develop adequate internal controls to ensure consistent decision making. Unless Treasury, in consultation with the primary federal regulators, takes steps to ensure that the regulators have and apply generally consistent criteria and clearly articulate the basis they have used or plan to use to approve or deny repurchase requests, Treasury will face an increased risk that TARP participants may not be treated equitably. As of June 12, 2009, 22 institutions had repurchased their preferred stock from Treasury for a total of about $1.9 billion (see table 5 for additional repurchase information). Also, as of June 12, 2009, 5 financial institutions had repurchased their warrants and 3 institutions had repurchased warrant preferred stock from Treasury at an aggregate cost of about $13.3 million.
In addition, 3 financial institutions had informed Treasury that they did not plan to repurchase their warrants. For those institutions, Treasury may attempt to sell the warrants in the financial markets. According to a Treasury official, as of June 12, 2009, Treasury had not yet sold any CPP warrants in the financial markets. On June 9, 2009, Treasury announced that 10 of the largest U.S. financial institutions participating in CPP had met the requirements for repayment established by their primary federal regulator and that, following consultation with the regulators, Treasury had notified the institutions that they were eligible to complete the repurchase process. Collectively, the Treasury-held preferred shares in these 10 institutions have a liquidation preference of approximately $68 billion. Upon completion of the preferred stock repurchase process, each institution will have the right to repurchase the warrants held by Treasury. As mentioned previously, as of June 12, 2009, 5 institutions had repurchased their warrants from Treasury. We found that Treasury followed a consistent process in these instances; however, according to Treasury, there is no readily available market for the warrants that have been repurchased to date. The value of those warrants depends on the valuation process and the underlying assumptions. In one instance, Treasury received multiple offers from the institution to repurchase its warrants but rejected the first two offers. The final offer that Treasury accepted was slightly lower than Treasury’s own determination of the market value of the institution’s warrants but more than twice the initial offer and slightly more than its second.
According to documents we reviewed, in accordance with its process for determining whether to accept an offer from the institution, Treasury considered (1) warrant price indications from certain market participants, (2) certain warrant pricing models, (3) a warrant price calculation from a third-party contractor, and (4) Treasury’s own financial analysis of the institution. According to Treasury, the final warrant price was deemed reasonable given that the institution’s stock price had declined during negotiations, reducing the warrant’s value, and that Treasury’s market value determination for the warrant was based on a number of factors that involve judgment, such as liquidity discounts. If Treasury and the issuing institution cannot agree on a price, either can invoke an appraisal procedure whereby each chooses an independent appraiser to determine the estimated fair market value (FMV); if the two appraisers cannot agree on an FMV, they will appoint a third appraiser. If an institution decides not to repurchase its warrants under the negotiation and appraisal procedure, Treasury may sell the warrants through an auction process—another mechanism that Treasury could use to sell shares—when it deems appropriate. Treasury describes the warrant repurchase process broadly on the www.financialstability.gov Web site. Additional details about the process are contained in the individual securities purchase agreements that are also posted on the Web site. Further, the final warrant prices are disclosed on the Web site. However, Treasury has provided limited information about the valuation process it has used to date. Specifically, it has not disclosed details such as the institution’s initial offer or how the final price compares to Treasury’s valuation.
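The report does not describe Treasury's actual pricing models, but the sensitivity of a warrant's modeled value to input assumptions can be illustrated with a standard Black-Scholes call valuation. All inputs below are hypothetical, and a European call is only a rough stand-in for a long-dated warrant (it ignores dilution):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, vol, years):
    """Black-Scholes value of a European call option."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * years) / (
        vol * math.sqrt(years))
    d2 = d1 - vol * math.sqrt(years)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * years) * norm_cdf(d2)

# Same warrant, two volatility assumptions (hypothetical inputs):
low_vol = bs_call(spot=20.0, strike=25.0, rate=0.03, vol=0.30, years=10)
high_vol = bs_call(spot=20.0, strike=25.0, rate=0.03, vol=0.60, years=10)

# A liquidity discount of the kind Treasury cited lowers the modeled value:
discounted = high_vol * (1 - 0.25)  # hypothetical 25 percent discount

print(round(low_vol, 2), round(high_vol, 2), round(discounted, 2))
```

Because reasonable analysts can choose different volatilities and liquidity discounts, modeled values for the same warrant can differ by a wide margin, which is one reason the negotiation and appraisal procedure matters.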
For less liquid securities, prices can vary widely depending on the assumptions underlying the valuation models, leading some market observers to question whether Treasury had received fair market value for the warrants that have been repurchased to date. By not being more transparent about the valuation process and the negotiations that were undertaken to establish the accepted warrant price, Treasury increases the likelihood that questions will remain about whether it has best served taxpayers’ interests. Given the broad-ranging risks inherent in TARP, Treasury must take steps to help ensure not only that its decisions are fair and equitable but also that they result in maximum value. Unless Treasury takes this type of broad-based approach, it may not ensure that taxpayers’ interests are fully protected. In our March 2009 report, we recommended that Treasury update guidance available to the public on determining warrant exercise prices to be consistent with actual practices applied by OFS. Treasury has since updated the frequently asked questions on its Web site to clarify the process it follows for determining the prices. However, the guidance available on the Web site for calculating the exercise prices remains inconsistent. Treasury told us that because any new CPP applicants would most likely be nonpublic institutions, the existing guidance documents would not apply. Treasury therefore does not believe the inconsistent guidance is a significant issue and does not plan to further address the inconsistency. OFS continues to take important steps toward better reporting on and monitoring of CPP. These steps are consistent with our prior recommendations that Treasury bolster its ability to determine whether all institutions’ activities are generally consistent with the act’s purposes. On May 15, 2009, Treasury published the fourth monthly bank lending and intermediation snapshot and survey.
In April 2009, Treasury started collecting basic information from the 21 largest CPP recipients on their lending to small businesses in the monthly lending surveys. According to Treasury, these data will be published in June 2009. These monthly surveys are a step toward greater transparency and accountability for institutions of all sizes. Survey results will allow Treasury’s newly created team of analysts to understand the lending practices of CPP participants and will help in measuring the program’s effectiveness in achieving its goal of stabilizing the financial system by enabling the institutions to continue lending during the financial crisis. We will continue to monitor Treasury’s oversight efforts, including implementation of its new survey of all other CPP recipients. In addition, on June 1, 2009, Treasury published the results of its first monthly survey of lending at all CPP institutions. These data include loans outstanding to consumers and commercial entities, as well as total loans outstanding. This survey will continue on a monthly basis going forward. The survey and the results can be found at www.financialstability.gov. Also, consistent with our prior recommendations, Treasury has continued to take steps to increase its oversight of compliance with the terms of the CPP agreements, including limitations on executive compensation, dividends, and stock repurchases. Participating institutions are required to comply with the terms of these agreements, and we recommended that Treasury develop a process to monitor and enforce them. According to Treasury, it relied on its custodian bank—Bank of New York Mellon—to collect relevant information from a variety of informal sources, such as Securities and Exchange Commission filings, press releases, and information provided by CPP participants.
According to Treasury, if OFS becomes aware of any instances of noncompliance with requirements, it is to refer the instances to its Chief Risk and Compliance Office, which would work with the CPP office to determine if further action is needed. On April 22, 2009, Treasury hired three asset management firms that will play a role in this process. According to Treasury officials, the asset managers’ primary role will be to provide Treasury with market advice about its portfolio of investments in financial institutions and corporations participating in various TARP programs. The managers will also help OFS monitor compliance with limitations on compensation, dividend payments, and stock repurchases. Treasury said that it is also exploring software solutions and other data resources to improve compliance monitoring. We plan to continue monitoring this area. As we have noted previously, without a more structured mechanism in place, and with a growing number of institutions participating in TARP, ensuring compliance with these important requirements will become increasingly challenging. While the institutions are obligated to comply with the terms of the agreements, Treasury has not yet developed a process to help ensure compliance and to verify that any required certifications are accurate. On June 10, 2009, Treasury adopted an interim final rule to implement the executive compensation and corporate governance provisions of the act, as amended by ARRA, as well as to adopt certain additional standards deemed necessary by the Secretary to carry out the purposes of the act. The interim final rule requires that recipients of TARP financial assistance meet standards for executive compensation and corporate governance.
The requirements generally include

- limits on compensation that exclude incentives for senior executive officers to take unnecessary and excessive risks that threaten the value of TARP recipients;
- provision for the recovery of any bonus, retention award, or incentive compensation paid to a senior executive officer or the next 20 most highly compensated employees based on materially inaccurate statements of earnings, revenues, gains, or other criteria;
- prohibition on making any golden parachute payment to a senior executive officer or any of the next 5 most highly compensated employees;
- prohibition on the payment or accrual of bonuses, retention awards, or incentive compensation to senior executive officers or certain highly compensated employees, subject to certain exceptions for payments made in the form of restricted stock; and
- prohibition on employee compensation plans that would encourage manipulation of earnings reported by TARP recipients to enhance employees’ compensation.

The new rule also requires (1) the establishment of a compensation committee of independent directors to meet semiannually to review employee compensation plans and the risks posed by these plans to TARP recipients; (2) the adoption of an excessive or luxury expenditures policy; (3) disclosure of perquisites offered to senior executive officers and certain highly compensated employees; (4) disclosure related to compensation consultant engagement; (5) prohibition on tax gross-ups (payments to cover taxes due on compensation) to senior executive officers and certain highly compensated employees; and (6) compliance with federal securities rules and regulations regarding the submission of a nonbinding resolution on senior executive officer compensation to shareholders. The new interim regulations also require the establishment of the Office of the Special Master for TARP Executive Compensation (Special Master) to address the application of the rules to TARP recipients and their employees.
The Special Master’s duties and responsibilities include, with respect to TARP recipients of exceptional assistance, reviewing and approving compensation payments and compensation structures applicable to the senior executive officers and certain highly compensated employees, and reviewing and approving compensation structures applicable to certain additional highly compensated employees. Companies receiving exceptional assistance include those receiving assistance under SSFI, TIP, and AIFP and currently include AIG, Bank of America, Citigroup, Chrysler, Chrysler Financial, GM, and GMAC. TARP recipients not receiving exceptional assistance may apply to the Special Master for an advisory opinion with respect to compensation payments and structures. The Special Master will also have responsibility for administering the review of bonuses, retention awards, and other compensation paid to employees of TARP recipients before February 17, 2009, and the negotiation of appropriate reimbursements to the federal government. Finally, the interim final rule also establishes compliance reporting and record-keeping requirements regarding the rule’s executive compensation and corporate governance standards. While no funds had been disbursed under CAP as of June 12, 2009, regulators have announced the results of stress tests that were a key component of the program. Moreover, Treasury announced that institutions interested in CAP funding are required to submit CAP applications to their primary banking regulators by November 9, 2009. According to Treasury, no CAP applications have been received. In a process similar to the one used for CPP, the regulators are to submit recommendations to Treasury regarding an applicant’s viability. A key component of the program is the Supervisory Capital Assessment Program (SCAP), or stress test, of the 19 largest U.S.
bank holding companies—those with risk-weighted assets of at least $100 billion—that together account for approximately two-thirds of the assets in the aggregate U.S. banking industry. The federal banking regulators designed the assessment as a forward-looking exercise intended to help them gauge the extent of the additional capital buffer necessary to keep the institutions strongly capitalized and lending even if economic conditions are worse than had been expected between December 2008 and December 2010. On May 7, 2009, the Federal Reserve released the stress test results. Bank regulators found that 10 of the institutions needed to raise additional capital (via the private sector or CAP) to meet capital standards that would allow them to continue lending to creditworthy borrowers and absorb potential losses. The stress tests involved two economic scenarios, one representing the baseline expectation and the other a more adverse outlook involving a deeper and more protracted downturn. According to the Federal Reserve, the more adverse outlook was not intended to be a worst-case scenario but rather a deliberately stringent test designed to account for highly uncertain financial and economic conditions by identifying the extent to which a bank holding company is vulnerable today to a weaker than expected economy in the future. The required capital buffer was sized based on the more adverse scenario. While the forecasts for the three economic indicators—GDP growth, unemployment rates, and home price changes—were considered quite severe at the time they were formulated in February, subsequent data indicated that the probability of the more adverse scenario was likely higher than previously thought, particularly with respect to the unemployment rate.
According to Federal Reserve officials, house prices are at least as important as the unemployment rate in determining estimated losses at banks over the next 2 years because many of the estimated losses are related to real estate values. The specified trend in house prices under the more adverse scenario still represents a very severe outcome. These are areas that we plan to continue to monitor. Based on data as of December 31, 2008, the Federal Reserve estimated that total losses for the 19 companies during the 2009 to 2010 period would be approximately $600 billion, in addition to any losses prior to 2009 (table 6). As a result, the total losses for the top 19 U.S. bank holding companies since the beginning of the financial crisis in the second quarter of 2007 would be nearly $950 billion. The $600 billion represents a 7.7 percent loss of total risk-weighted assets for the 19 companies. The U.S. bank holding companies were asked to list available resources that they could use to absorb losses without affecting capital. Primary among these were the allowance for loan and lease losses as of year-end 2008 and preprovision net revenue, or the expected recurring income from ongoing business lines before any credit costs. The SCAP buffer for each bank holding company is defined as the incremental capital that must be provided to ensure that the bank would be able to meet two capital ratio tests at December 31, 2010, assuming losses under the more adverse scenario: first, tier 1 common capital to risk-weighted assets must be at least 4 percent, and second, tier 1 capital to risk-weighted assets must be at least 6 percent at December 31, 2010. While some market observers have been critical of the process by which regulators shared preliminary results with the bank holding companies and made subsequent adjustments based on feedback from the bank holding companies, Federal Reserve officials noted that such discussions are a normal part of the examination process.
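The two ratio tests that define the SCAP buffer can be expressed directly. The balance-sheet figures below are invented, and the sketch simplifies by assuming the buffer is filled entirely with tier 1 common capital (which counts toward both ratios, so the binding constraint is the larger shortfall):

```python
def scap_buffer(tier1_common, tier1, risk_weighted_assets):
    """Incremental capital needed so that, at December 31, 2010, under the
    more adverse scenario, tier 1 common / RWA >= 4 percent and
    tier 1 / RWA >= 6 percent."""
    shortfall_common = max(0.0, 0.04 * risk_weighted_assets - tier1_common)
    shortfall_tier1 = max(0.0, 0.06 * risk_weighted_assets - tier1)
    # New tier 1 common counts toward both ratios, so the larger
    # shortfall determines the buffer.
    return max(shortfall_common, shortfall_tier1)

# Hypothetical bank holding company; all figures in billions of dollars.
buffer = scap_buffer(tier1_common=3.0, tier1=7.0, risk_weighted_assets=100.0)
print(f"SCAP buffer: ${buffer:.1f} billion")  # $1.0 billion

# A company already above both thresholds needs no buffer:
print(scap_buffer(tier1_common=5.0, tier1=8.0, risk_weighted_assets=100.0))
```

In the first case, the tier 1 common test binds ($4 billion required against $3 billion projected), even though the tier 1 test is already satisfied.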
Further, Federal Reserve officials explained that the adjustments to the capital shortfall, or “SCAP buffer,” largely reflected the correction of data errors, double counts, and other technical issues rather than any substantive arguments made by the U.S. bank holding companies. We will be evaluating this process and will report on our results in a future report. While the data used were as of December 31, 2008, some banks reported significant earnings and capital increases in the first quarter of 2009 from asset sales, announced common equity issuances, and in one case the announced, but not yet completed, conversion of preferred shares to common shares. The regulators incorporated these changes into their analysis. The results showed that 10 of the 19 institutions needed to raise a total of almost $75 billion in equity capital (table 7). As required, the institutions submitted capital plans to the Federal Reserve on June 8, 2009, describing how they plan to raise the needed capital, and will have a total of 6 months in which to raise the capital from private markets (common equity offerings, asset sales, and the conversion of other forms of capital into common equity) or through additional government assistance under CAP. As of June 12, 2009, eight of the 19 U.S. bank holding companies had announced or raised a total of $59.2 billion toward the required $75 billion. Both Treasury and Federal Reserve officials emphasized the unprecedented nature of the detailed bank-level disclosure of both losses and revenue forecasts in the stress tests. However, Federal Reserve officials told us that they had no plans to provide periodic updates of the actual performance of the U.S. bank holding companies relative to the loss and revenue estimates under the more adverse scenario. Federal Reserve officials said they view this information as part of the supervisory process. 
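The two SCAP capital ratio tests and the sizing of the buffer can be sketched in a few lines of arithmetic. This is an illustrative reconstruction, not the Federal Reserve's actual model: the function name and the single-institution inputs are hypothetical, while the $600 billion loss estimate and the 7.7 percent loss rate come from the figures above.

```python
def scap_buffer(tier1_common, tier1, rwa):
    """Incremental common equity needed to meet both SCAP ratio tests
    at December 31, 2010, under the more adverse scenario.

    Test 1: tier 1 common capital / risk-weighted assets >= 4 percent
    Test 2: tier 1 capital / risk-weighted assets >= 6 percent
    (All inputs are projected post-loss amounts; names are illustrative.)
    """
    shortfall_common = max(0.0, 0.04 * rwa - tier1_common)
    shortfall_tier1 = max(0.0, 0.06 * rwa - tier1)
    # New common equity counts toward both tier 1 common and total tier 1,
    # so the buffer must cure the larger of the two shortfalls.
    return max(shortfall_common, shortfall_tier1)

# Aggregate check from the report: $600 billion of estimated 2009-2010
# losses equals 7.7 percent of the 19 companies' risk-weighted assets,
# implying roughly $7.8 trillion in aggregate RWA.
rwa_total = 600e9 / 0.077
print(round(rwa_total / 1e12, 1))  # 7.8 (trillions)

# Hypothetical institution: $250 billion RWA, $8 billion projected tier 1
# common, $16 billion projected tier 1 capital.
print(scap_buffer(8e9, 16e9, 250e9) / 1e9)  # 2.0 -> a $2 billion buffer
```

Because a dollar of new common equity raises both ratios at once, the buffer is the maximum of the two shortfalls rather than their sum.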
While the Federal Reserve shared preliminary results of the stress test with senior Treasury officials, it neither shared the stress test results with CPP officials prior to the public release nor plans to provide any additional routine information going forward. However, Federal Reserve officials said that supervisory information can be provided to Treasury on a confidential basis when Treasury has a significant program need for the information. Moreover, whether and to what extent the bank holding companies will disclose additional information is unclear. These decisions raise a number of potential concerns. First, to the extent that information is disclosed by the institutions, it may be disclosed selectively, may not be consistent across institutions, and could lead to increased market uncertainty. Second, because the stress tests were conducted as part of CAP, not making the results available to OFS officials for ongoing participants could adversely affect Treasury’s ability to monitor the program. Finally, such information would be useful in measuring the effectiveness of SCAP and CAP. Without it, the public will not have reliable information that can be used to gauge the accuracy of the stress test projections on a more detailed basis than what has been disclosed in the SCAP papers. With respect to the 19 U.S. bank holding companies that participated in SCAP, on June 1, 2009, the Federal Reserve released the criteria it plans to use to evaluate applications to repurchase Treasury’s capital investments. 
The items published are similar to those already in use to evaluate repurchase requests received from smaller bank holding companies and include the following considerations: (1) the bank holding company’s ability to continue to act as an intermediary and spur lending to creditworthy households and businesses; (2) whether the bank holding company’s post-repurchase capital position is consistent with the Federal Reserve’s supervisory expectations; (3) whether the bank holding company will maintain its financial and management support for its subsidiary banks subsequent to repurchase; and (4) whether the bank holding company and its subsidiaries are in a position to meet all of their funding and counterparty obligations without government capital or utilization of the FDIC’s Temporary Liquidity Guarantee Program. Finally, the Federal Reserve stated that the U.S. bank holding companies that participated in the SCAP process and are seeking to repurchase CPP investments would be subject to two additional criteria: a demonstrated ability to raise long-term debt without an FDIC guarantee or equity in the public equity market, and progress toward a robust longer-term capital assessment and management process geared toward achieving and maintaining a prudent level and composition of capital commensurate with their business activities and firm-wide risk profile. The Federal Reserve, in consultation with the U.S. bank holding companies’ primary bank regulators and FDIC, informed Treasury on June 9, 2009, that it had no objection to the repurchase of preferred shares by 9 of the SCAP bank holding companies. Also on June 9, 2009, Treasury announced that these 9 U.S. bank holding companies, and one other large institution, met the requirements for repayment and would be eligible to repay $68 billion to Treasury. In May 2009, the Federal Reserve announced some modifications to TALF, a program administered by the Federal Reserve but part of the President’s broader strategy to restart lending. 
As we have previously reported, the Federal Reserve originally designed TALF to make nonrecourse loans to fund purchases of asset-backed securities (ABS) that are secured by eligible consumer and small business loans. The modifications to TALF include the addition of two asset classes, an extension of certain TALF loan terms, and additions to the credit rating agencies approved for rating TALF-eligible collateral. The additional asset classes accepted for collateral are commercial mortgage-backed securities (CMBS) and securities backed by insurance premium finance loans. CMBS are securities backed by mortgages for commercial real estate, such as office buildings or shopping centers. The Federal Reserve noted that it had extended the range of eligible collateral to include CMBS to help prevent defaults on viable commercial properties, encourage further lending for commercial properties, and encourage the sale of distressed properties. CMBS issued on or after January 1, 2009, and “legacy” CMBS issued prior to January 1, 2009, will be accepted. The Federal Reserve Bank of New York has specified a number of requirements that must be met before it will accept this collateral—for example, CMBS must have the highest long-term investment grade credit rating available from certain credit rating agencies. The Federal Reserve will include nonlegacy CMBS in its June subscriptions for TALF loans and legacy CMBS in its July subscriptions. The Federal Reserve also announced that it would accept securities backed by insurance premium finance loans. These securities will be included to encourage the flow of credit to small businesses, one of the goals of TALF under the Consumer and Business Lending Initiative. Furthermore, the Federal Reserve extended the available terms for certain TALF loans from 3 years to 5 years to finance purchases of CMBS and ABS backed by student loans and SBA-guaranteed loans. 
The Federal Reserve will limit financing to $100 billion for loans with 5-year maturities. The volume of loans requested for TALF collateral increased significantly in May and June 2009, compared with the previous 2 months (table 8). Additionally, loans requested in March and April 2009 were provided only on collateral for auto and credit card securitizations, whereas May 2009 subscriptions extended to student loan, small business, and equipment securitizations for the first time. June 2009 subscriptions included the first loans requested for securities based on insurance premium finance loans and servicing advances. The total amount of loans requested on TALF-eligible collateral since the program’s first activity is $28.5 billion. On May 19, 2009, the Federal Reserve expanded the number of credit rating agencies approved for rating TALF-eligible collateral from three to five. All collateral accepted under TALF, with the exception of ABS backed by SBA-guaranteed small business loans and related debt instruments, must receive the highest investment-grade rating from at least two TALF-eligible rating agencies. Fitch Ratings, Moody’s Investors Service, and Standard & Poor’s are eligible rating agencies for all ABS. DBRS, Inc. and Realpoint LLC are two additional TALF-eligible rating agencies for CMBS collateral. As we previously reported, PPIP consists of the Legacy Loans Program and the Legacy Securities Program. Treasury and FDIC have been finalizing the terms of the Legacy Loans Program. On March 26, 2009, FDIC announced that it was seeking public comments on a number of elements of the program. FDIC officials at the time stated that the implementation date for the program would depend on the nature of the comments received and the time required to consider them in the design of the program. 
FDIC officials with whom we spoke said that the implementation date of the program remained unclear because of changes to accounting rules, potential participants’ concerns about having to write down assets, and TARP-related restrictions. More recently, on June 3, 2009, FDIC announced that a previously planned pilot sale of assets by open banks would be postponed. In making that announcement, the Chairman stated that banks have been able to raise capital without selling bad assets but that FDIC will continue to work on the Legacy Loans Program and will be prepared to offer it in the future. Further, FDIC announced that it intended to test the Legacy Loans Program funding mechanism in a sale of receivership assets, with bids to begin in July. For the Legacy Securities Program, Treasury is currently reviewing fund manager applications. Treasury extended the application deadline for these fund managers from April 10, 2009, to April 24, 2009, in part to give small businesses and businesses owned by veterans, minorities, and women the ability to partner with larger fund managers in the program. Treasury initially announced that it anticipated prequalifying about five fund managers from about 100 applications; however, it later clarified that more than five fund managers may be prequalified depending on the number of applications deemed to be qualified. A public announcement of the selections will be made in June 2009. Treasury officials estimated that it could take the fund managers as long as 12 weeks to raise capital for the funds, and that it is therefore difficult to determine how soon Treasury would be contributing matching capital and financing to the funds. As we previously reported, Treasury, the Federal Reserve, and SBA have plans in place to contribute to the administration’s efforts to improve the accessibility and affordability of credit to small businesses. 
Treasury announced on March 16, 2009, that it would set aside $15 billion of TARP funds to directly purchase securities based on 7(a) and 504 small business loans guaranteed by SBA. TALF, managed by the Federal Reserve Bank of New York, is also a part of the efforts to increase access to credit for small businesses. Under TALF, securities consisting of SBA-guaranteed 7(a) and 504 small business loans are provided as collateral to the Federal Reserve, and in return TALF provides loans, with the goal of encouraging securitizations of SBA-guaranteed debt. For its part, SBA has been directed under ARRA to implement administrative provisions to help facilitate small business lending and enhance liquidity in the secondary markets. These administrative provisions include (1) temporarily requiring SBA to reduce or eliminate certain fees on 7(a) and 504 loans; (2) temporarily increasing the maximum 7(a) guarantee from 85 percent to 90 percent; and (3) implementing provisions designed specifically to facilitate secondary markets, such as extending existing guarantees in the 504 program and making loans to systemically important broker-dealers that operate in the 7(a) secondary market. These initiatives are in various stages of implementation. Treasury has not yet purchased securities under the Small Business and Community Lending Initiative, though it had stated that it expected to purchase 7(a)-related securities by the end of March 2009 and 504-related securities by the end of May 2009. A Treasury official said that Treasury has faced challenges implementing the program because of sellers’ concerns about warrants and executive compensation, as stipulated under the act, as amended by ARRA. Treasury is reaching out to these sellers and anticipates completing term sheets in June 2009. Federal Reserve efforts related to small businesses have also started. 
As shown in table 8, in May 2009, TALF received collateral for and offered loans based on 7(a) and 504-related small business securities for the first time. Loans requested since May related to these small business securities total about $169 million. SBA, as we reported to congressional committees, issued policy notices to temporarily reduce or eliminate certain fees for 7(a) and 504 loans and temporarily increase the maximum 7(a) guarantee, effective as of March 16, 2009. SBA formalized its implementation of these provisions in Federal Register notices on June 8, 2009. However, SBA has not yet implemented the provisions intended to enhance secondary markets. On May 7, 2009, Citigroup announced that it would expand its planned exchange of preferred securities and trust preferred securities for common stock from $27.5 billion to $33 billion. The stress test found that Citigroup would need an additional $5.5 billion in tier 1 common capital, for a total of $58.1 billion, to ensure adequate capital under the more adverse economic scenario. On June 9, 2009, Treasury and Citigroup finalized their exchange agreement, under which Treasury agreed to convert up to $25 billion of its CPP senior preferred shares into interim securities and warrants, and its remaining preferred securities into trust preferred securities, so that the institution could strengthen its capital structure by increasing tangible common equity. As part of the agreement, Citigroup agreed to offer to convert both privately placed and publicly issued preferred stock held by other preferred shareholders. To increase the exchange by $5.5 billion, Citigroup decided to offer to exchange more publicly held preferred stock and trust preferred securities for common stock. 
According to OFS officials, the conversion of the government’s preferred shares to common stock will not be finalized until the exchange of $33 billion of preferred securities and trust preferred securities has been completed. In addition, Citigroup has taken a number of other actions designed to improve its capital and financial position, including the sale of Nikko Cordial Securities and a joint venture with Morgan Stanley involving its brokerage subsidiary, Smith Barney. See appendix V for additional information about the condition of Citigroup. Citigroup issued its first 2009 quarterly TARP progress report on May 12, 2009. Citigroup reported that it had authorized initiatives to deploy $44.75 billion in TARP capital. According to the report, $8.25 billion of new funding initiatives were approved during the first quarter of 2009 to expand the flow of credit to consumers, businesses, and communities. For example, Citigroup lent $1 billion to qualified borrowers to help homeowners refinance their primary residences. According to Treasury officials, Citigroup issued this report voluntarily, and Treasury had not verified the information it contained. Treasury completed the previously announced restructuring of its support for AIG by exchanging $40 billion of cumulative Series D preferred shares for $41.6 billion of noncumulative Series E preferred shares. The amount of Series E preferred shares is equal to the original $40 billion plus approximately $733 million in dividends undeclared on February 1, 2009; $15 million in dividends compounded on the undeclared dividends; and an additional $855 million in dividends accrued from February 1, 2009, but not paid as of April 17, 2009. Our tests of selected control activities found that Treasury had applied adequate financial reporting controls over the restructuring transaction. 
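The components of the Series D-to-Series E exchange described above can be checked with simple addition. Figures are in millions of dollars as reported; the variable names are ours:

```python
# Components of AIG's Series E preferred exchange, in millions of dollars:
# the original $40 billion Series D investment plus three dividend amounts.
series_d_principal = 40_000
undeclared_dividends = 733   # dividends undeclared on February 1, 2009
compounded_dividends = 15    # dividends compounded on the undeclared amount
accrued_dividends = 855      # accrued Feb. 1 - Apr. 17, 2009, but not paid

series_e_total = (series_d_principal + undeclared_dividends
                  + compounded_dividends + accrued_dividends)
print(series_e_total)                    # 41603 million
print(round(series_e_total / 1000, 1))   # 41.6 -> the $41.6 billion reported
```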
AIG’s restructured agreement kept the quarterly dividend payment dates of May 1, August 1, November 1, and February 1 that were established in the original November 25, 2008, agreement. However, the restructured agreement also specified that dividends are not payable within 20 calendar days of the restructuring date and that dividends for a period of fewer than 20 days would be payable in the subsequent dividend period. Accordingly, in compliance with these dividend payment terms, the dividends for the period from April 17 through May 1, 2009, which amounted to approximately $150.2 million, are to be included in the August 1, 2009, scheduled dividend payment. Treasury also finalized its approximately $30 billion Series F preferred stock capital facility with AIG on April 17, 2009. In our March report, we recommended that Treasury require that AIG seek concessions from stakeholders—such as management, employees, and counterparties—including seeking to renegotiate existing contracts, as appropriate, as it finalized this agreement. While Treasury extended negotiations several weeks, the negotiations did not result in material changes to the final agreement. According to Treasury, AIG had been consulting with Treasury on any substantial compensation payments until interim final executive compensation rules were issued on June 10, 2009. Since we last reported on the Automotive Industry Financing Program (AIFP), Treasury has provided additional funding to the auto industry, including amounts to assist GM and Chrysler, which have filed voluntary petitions for reorganization under Chapter 11 of the U.S. Bankruptcy Code, bringing Treasury’s total commitments under this program to approximately $82.6 billion. Treasury committed to providing additional funding to support the companies both during and after their respective reorganizations, in the amounts of $8.5 billion for Chrysler and $30.1 billion for GM. 
In exchange for providing this funding, Treasury is to be repaid over a period of years for a portion of the amounts provided and will receive equity ownership in Chrysler and GM. Table 9 shows the amounts Treasury has provided or committed to providing under AIFP and its plans for being repaid for or otherwise recovering this funding. In the case of Chrysler, on April 30, 2009, the White House announced that Treasury would provide more than $8 billion in additional funding to help finance Chrysler’s operations through bankruptcy and that Chrysler would attempt to arrange an alliance with the Italian automaker Fiat as part of its restructuring. On June 1, 2009, a bankruptcy judge approved Chrysler’s restructuring proposal, including the alliance with Fiat, the sale of its assets to the new Chrysler, and the additional funding from Treasury. On June 9, 2009, the asset sale was finalized, and Treasury executed a loan agreement with the restructured Chrysler under which the company will be required to repay Treasury $7.1 billion, secured by a senior lien on all of the new Chrysler’s assets. This new loan includes $500 million of the prebankruptcy loan that was secured by a senior lien on Mopar, Chrysler’s parts business. Although Chrysler signed a loan agreement with Treasury for the entire $4.0 billion prebankruptcy loan, Treasury officials said that the U.S. government will likely recover little of this amount because other debt holders have seniority for repayment. However, as further consideration for the funding provided for Chrysler’s restructuring, Treasury is initially receiving a 10 percent equity stake in the new company. In the case of GM, on June 1, 2009, Treasury announced that it would make $30.1 billion of financing available to support an expedited bankruptcy proceeding and to transition the new GM through its restructuring plan. 
If GM’s restructuring proposal is approved by the bankruptcy court, the U.S. government would receive, in exchange for the $30.1 billion in bankruptcy funding as well as the $19.4 billion in prebankruptcy funding, about $6.7 billion of debt, $2.1 billion in preferred stock, and approximately 61 percent of the equity in the new GM. At present, Treasury said it does not plan to provide additional assistance to GM beyond this commitment. As part of the companies’ reorganizations, they have also reached agreements with other stakeholders to resolve outstanding obligations, including by offering these stakeholders equity shares in the companies. The agreements with each stakeholder group are discussed in more detail in the following paragraphs, and the companies’ equity ownership following restructuring is shown in figure 2. Auto workers and retirees: The International Union, United Automobile, Aerospace and Agricultural Implement Workers of America reached agreements separately with Chrysler and GM on modifications to the existing labor contracts, as specified by the terms of Treasury’s prebankruptcy loans to the companies. The agreements will be applicable to the reorganized companies. Chrysler and GM also developed plans to meet their obligations for funding their retiree healthcare funds, also known as voluntary employee beneficiary associations (VEBA). In the case of Chrysler, the VEBA will be funded by a note of $4.6 billion and will receive 55 percent of the new company’s fully diluted equity. In the case of GM, the company will fund its VEBA trust with a $2.5 billion note, $6.5 billion in preferred stock, 17.5 percent of the equity in the new GM, and warrants to purchase an additional 2.5 percent of the company. Both the GM and Chrysler VEBAs will have the right to select one independent director for their respective company’s board but will have no other governance rights. 
Regarding the companies’ pension plans, as we have previously reported, the termination of either company’s plans would result in a substantial liability to the federal Pension Benefit Guaranty Corporation (PBGC), which insures private-sector defined benefit pension plans. However, at this time, the companies do not intend to terminate their plans, which will be transferred to the new companies as part of the reorganization. Canadian government: The Canadian government will provide restructuring funding to and become a shareholder of both companies. In total, the Canadian government has provided $3 billion to Chrysler and will hold $1.9 billion in debt and a 2.5 percent equity stake in the reorganized company. For GM, the Canadian government will provide $9.5 billion in exchange for $1.7 billion in debt and preferred stock and approximately a 12 percent equity stake in the new GM. As a shareholder, the Canadian government will have the right to select members of Chrysler’s and GM’s boards of directors. Former shareholders and creditors: In the case of Chrysler, Daimler AG and Cerberus Capital, which together held 100 percent of Chrysler’s prebankruptcy equity and $4 billion of Chrysler’s debt, will relinquish their equity stakes and waive their share of debt holdings. Chrysler’s largest secured creditors agreed to exchange their portion of the $6.9 billion secured claim for a proportional share of $2 billion in cash. In the case of GM, bondholders representing more than half of GM’s $27.1 billion in unsecured bonds have agreed to exchange their portion of the bonds for 10 percent equity and warrants for an additional 15 percent in the restructured company. About $6 billion in debt held by GM’s secured bank lenders will be repaid from proceeds of the loan GM received from Treasury and the Canadian government after it filed for bankruptcy. 
Fiat: As part of the alliance, Fiat has contributed intellectual property and “know how” to the new Chrysler in exchange for a 20 percent equity share in the reorganized company. Fiat also has the right to select three directors for the reorganized company and the right to increase its ownership incrementally up to a total of 35 percent. As a shareholder of the reorganized companies, as well as a lender, Treasury will continue to have a monitoring and oversight role. For instance, Treasury will have the right to appoint four independent directors to Chrysler’s board and five directors to GM’s board. However, Treasury officials told us they do not plan to play a role in the management of the companies following the selection of these directors. In addition, the companies are to meet the following requirements: (1) establish internal controls to provide reasonable assurance that they are complying with the conditions of the loan agreements relating to executive compensation, expense policy reporting, asset divestiture, and compliance with the Employ American Workers Act, and report to Treasury each quarter on these controls; (2) collect and maintain records to account for their use of government funds and their compliance with the terms and conditions under the Auto Supplier Support Program and other federal support programs; and (3) provide Treasury with periodic financial reports. Treasury officials said that they plan to require Chrysler and GM to submit monthly reporting packages containing the above items and to meet with the companies quarterly. They said that Treasury’s involvement in the companies will be on a commercial basis and that their interest is in ensuring the companies are in a position to repay the loans. We have previously reported that in a market economy, the federal role in aiding industrial sectors should generally be of limited duration and have noted the importance of setting clear limits on the extent of government involvement. 
Regarding the assistance provided to the auto industry, Treasury should have a plan for ending its financial involvement with Chrysler and GM that indicates how it will both divest itself of its equity shares—and the attendant responsibilities for appointing directors to the companies’ boards—and ensure that it is adequately repaid for the financial assistance it has provided. In developing and implementing such a plan, it should weigh the objective of expeditiously ending the government’s financial involvement in the companies against the objective of recovering an acceptable amount of the funding provided to these companies. Treasury has taken steps in this direction, including establishing repayment terms for the loan provided to the new Chrysler as part of its reorganization and developing plans to sell its equity in the companies over a period of years in a manner calculated to maximize its value. We plan to monitor Treasury’s efforts to develop and implement a plan for ending the government’s financial involvement with the automakers and will report our findings in future reports as appropriate. On April 30, 2009, Chrysler filed for bankruptcy. On May 20, 2009, the bankruptcy court approved GMAC LLC (GMAC) as the preferred provider of new credit to Chrysler’s dealers and customers. Also in May 2009, the Federal Reserve, through SCAP, identified the need for GMAC to raise additional capital. The federal government indicated that it would provide additional assistance to GMAC to support its ability to originate new loans to Chrysler dealers and consumers and to help address GMAC’s capital needs as identified under SCAP. On May 21, 2009, Treasury purchased $7.5 billion of mandatorily convertible preferred membership interests from GMAC with an annual 9 percent dividend, payable quarterly. 
Treasury’s $7.5 billion investment included $4 billion to support GMAC and address its capital needs as identified through SCAP, which identified a need for $9.1 billion of new capital. After 7 years, the interests must be converted to GMAC common interests. Prior to that time, they may be converted at Treasury’s option upon specified corporate events (including public offerings). The shares may also be converted at GMAC’s option with the approval of the Federal Reserve, though any conversion at GMAC’s option must not result in Treasury owning in excess of 49 percent of GMAC’s common membership interests, except (1) with the prior written consent of Treasury, (2) pursuant to GMAC’s capital plan, as agreed upon by the Federal Reserve, or (3) pursuant to an order of the Federal Reserve compelling such a conversion. On June 8, 2009, GMAC submitted a detailed capital plan to the Federal Reserve describing specific actions it has taken and plans to take to increase capital to meet its total SCAP capital needs. Under the agreement, GMAC also issued warrants to Treasury to purchase additional mandatorily convertible preferred membership interests in an amount equal to 5 percent of the purchased preferred membership interests. The warrant preferred shares provide an annual 9 percent dividend, payable quarterly. According to Treasury, because the exercise price for the warrants was nominal and there were no downside risks to exercising the warrants immediately, Treasury exercised the warrants at closing and received an additional $375 million of mandatorily convertible preferred membership interests. Under the funding agreement, GMAC must comply with all executive compensation and corporate governance requirements of Section 111 of the act applicable to qualifying financial institutions under CPP. Treasury noted that the May 21, 2009, $7.5 billion capital investment would not immediately result in its holding any common membership interests in GMAC. 
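The warrant arithmetic follows directly from the terms above. The sketch below also illustrates a quarterly dividend under a simple annual-rate-divided-by-four convention; that convention, and the variable names, are our assumptions rather than terms quoted in the agreement:

```python
# Treasury's May 21, 2009, GMAC purchase: $7.5 billion of mandatorily
# convertible preferred, plus warrants for additional preferred equal to
# 5 percent of the purchased amount, exercised at closing.
investment = 7.5e9
warrant_preferred = 0.05 * investment
print(warrant_preferred / 1e6)  # 375.0 -> the $375 million reported

# Illustrative quarterly dividend on the combined preferred position at the
# 9 percent annual rate, assuming an annual-rate/4 convention (assumption).
quarterly_dividend = (investment + warrant_preferred) * 0.09 / 4
print(round(quarterly_dividend / 1e6, 1))  # 177.2 (millions)
```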
However, on May 29, 2009, Treasury exercised its option to exchange the $884 million loan it made to GM in December 2008 for about 35 percent of the common membership interests in GMAC. In our March 2009 report, we noted that while Treasury had taken a number of steps to address the ongoing crisis, it had been hampered by questions about TARP decision making and activities, raising questions about the effectiveness of its existing communication strategy. As a result, we recommended that Treasury continue to develop an integrated communication strategy that may include, among other things, building understanding and support for the program, integrating communications and operations, and increasing the impact of communication tools such as print and video. Moreover, we emphasized the need for the communication strategy to establish a means to engage in regular and routine communication with Congress. Since our March 2009 report, Treasury said that it established a working group to address communications both within OFS and to external stakeholders. Treasury has stated that the working group is responsible for monitoring, reporting on, and addressing all OFS communication efforts and has been developing a communications plan to build support for the various programs it has established under the act. Treasury also noted that its Financial Stability Plan provided the basis for its improved communication strategy. The current communication strategy for TARP utilizes and builds on existing resources, such as Treasury’s Office of Public Affairs and Office of Legislative Affairs. Officials from these offices told us that the Financial Stability Plan announced in February 2009 provided a base for the new administration’s launch of its current communication strategy. 
To ensure that Treasury can communicate with the public and Congress in a timely manner, officials from Treasury’s Office of Public Affairs and Office of Legislative Affairs are included in regular policy meetings with OFS officials and officials from other offices in Treasury. As major changes occur, Treasury’s Office of Public Affairs—in conjunction with OFS, the Office of the Secretary, and the Office of Legislative Affairs—has established a routine approach to more fully communicate activities to the public. Specifically, the Office of Public Affairs has a process that involves the timely issuance of press releases and white papers, holding media briefings, and conducting outreach to the academic and investor communities. According to Treasury, policy officials from OFS and Domestic Finance are involved in this process. Moreover, the Office of Public Affairs told us that Treasury has a dedicated media and public affairs employee who works on TARP in coordination with other senior members of the Office of Public Affairs. Staff from the Office of Legislative Affairs told us that they routinely communicate with congressional leadership and staff from key committees with jurisdiction over TARP activities, specifically noting the Senate Committee on Banking, Housing, and Urban Affairs and the House Committee on Financial Services. They also respond to a variety of questions and requests made to them by individual members and congressional staff on an ongoing basis. In addition, Treasury noted that on April 15, 2009, the Secretary transmitted written letters to congressional committees to provide a broad update on TARP-related activities, and on May 15, 2009, OFS staff provided background briefings to congressional staff on TARP programs and recent developments. OFS told us it plans to provide additional briefings to congressional staff on a monthly basis. 
They also said that they are in the process of hiring a communications officer to work with the Office of Public Affairs and the Office of Legislative Affairs, which have two staff members dedicated to TARP, among other duties, to implement a coordinated communications strategy. Though these efforts may improve communication with congressional stakeholders, Treasury has yet to implement an approach that ensures that all relevant stakeholders are routinely reached. For example, the act creating TARP designates several other committees of jurisdiction besides Senate Banking, Housing, and Urban Affairs and House Financial Services: the House and Senate Committees on Appropriations, the House and Senate Committees on the Budget, the Senate Committee on Finance, and the House Committee on Ways and Means. However, according to Treasury officials, while they have more recently begun outreach to other committees, their efforts have primarily been targeted to House Financial Services and Senate Banking. Treasury’s communication strategy, once finalized, should help ensure regular and proactive outreach to all of the committees of jurisdiction and to Congress in general. Until plans for regular outreach to Congress on TARP matters are implemented, Treasury risks that some congressional committees or staff may not receive consistent and timely information, increasing the likelihood of misunderstanding by Congress and, according to Treasury officials, the likelihood that Treasury will continue to be inundated with ad hoc TARP-related inquiries. Since our March 2009 report, Treasury has made operational its new Web site, www.financialstability.gov, to report TARP-related matters and has taken steps to improve the site’s effectiveness through the use of various communication tools. Treasury said that this effort is part of a refocused public communications initiative to enhance communications on how TARP strategies will stabilize the financial system and restore credit markets.
According to Treasury, there are several key differences between the new site and the older Web page, part of Treasury’s own Web site, that was previously used to communicate TARP strategies. Specifically, Treasury officials told us that the new site is less technical than the former Web page and is intended to provide details on TARP activities in a more user-friendly, simplified manner that is easier for the general public to understand. For example, the site features a “decoder” tool that translates frequently used financial language and TARP program names, such as “asset-backed security,” to reach a wider audience. In addition, the site has provided information on all of the investments Treasury has made and the contractual terms of, and participants in, those investment programs. Treasury also posts a detailed monthly lending and intermediation survey on the Web site. Moreover, Treasury has provided links to program-related content on other federal agencies’ sites, such as frequently asked questions on the TALF posted by the Federal Reserve. Treasury has also tried to provide information that better addresses constituent interests. For example, the Web site has included an interactive map illustrating state-by-state bank and financial institution funding provided under TARP. According to Treasury, the site provided some information on warrant sales and repayments of principal investments made to various institutions under CPP. Consistent with our recommendation aimed at better disclosure of monies paid to Treasury, Treasury now includes dividends and interest received in its periodic reports to Congress, which are also posted to the Web site, and according to Treasury, it is in the process of creating a mechanism to report dividends received under the various TARP programs on the Web site. Treasury also created a separate Web site—www.makinghomeaffordable.gov—to communicate about the homeownership preservation program established under TARP.
Treasury said that it has coordinated closely with the White House, HUD, FHFA, Fannie Mae, and Freddie Mac in developing a means to communicate information on the Making Home Affordable program to stakeholders across the country. The Web site includes information targeted to homeowners on refinancing and loan modifications, and according to Treasury, as of May 29, 2009, the site had received more than 19.5 million hits. In other work, we have noted that best practices useful for improving the quality of federal public Web sites include conducting usability testing and developing performance measures or other means to gauge customer satisfaction, such as conducting surveys and convening focus groups. Treasury is in the process of entering into an agreement with a vendor to conduct usability testing of the Web site. According to a Treasury official, small surveys of site visitors will be conducted, and every six months the vendor will suggest changes to improve the Web site. While Treasury said that the new Web site was designed to make information less technical and accessible to a wider audience, until Treasury gauges whether the new www.financialstability.gov Web site provides more useful and easily found information to the general public than the old Web page did, Treasury lacks a meaningful measure of the effectiveness of its communication strategy. The lack of ready access to key information on some recent TARP developments on the new www.financialstability.gov Web site underscores the need to seek input from others in making continuous improvements in TARP-related communications. For example, users from the general public who are unfamiliar with TARP terminology would have difficulty finding basic descriptive information on the stress test initiative announced in February 2009 under the administration’s Financial Stability Plan. Among other things, we found that the Web site lacked readily found information on the components of the test and the test results.
Further, while Treasury officials said that the decoder tool intends to translate more technical program information, as of June 4, 2009, we found no information in the decoder tool or elsewhere on the Web site to let users know that the stress test is now formally referred to in Treasury press releases as SCAP. Since our March 2009 report, Treasury has continued to take steps to hire permanent OFS staff and detailees to fill short- and long-term organizational needs. First, Treasury has continued to seek qualified successors for various permanent leadership positions, including the Chief Investment and Chief Homeownership Preservation officers. Until permanent successors are identified, Treasury has appointed an Acting Chief Investment Officer and appointed an interim Chief Homeownership Preservation Officer to head these areas of OFS. In addition, Treasury has created a new senior position within OFS—a senior restructuring official—to oversee major investments that have been made under TARP. The administration has also nominated an individual to become the Assistant Secretary of Financial Stability. This appointment, which is subject to Senate confirmation, would fill the vacancy created by the departure of the Interim Assistant Secretary of Financial Stability, who had served in this capacity since TARP was created in October 2008. Second, Treasury has increased the number of permanent OFS staff. As of June 8, 2009, OFS had 166 total staff, with the number of permanent staff rising from 77 to 137 since our March 2009 report and the number of detailees decreasing to 29 (see fig. 3). In its latest budget request to OMB, Treasury anticipated that OFS would need 225 full-time employees to operate at full capacity in fiscal year 2010, an increase of 29 from its March 2009 estimate of 196. Having both detailees and long-term staff helps OFS meet its short- and long-term needs. 
Treasury continues to anticipate that permanent staff will support long-term responsibilities, while detailees will continue to play an important role by supporting the flexibility of OFS operations. Currently, some offices are more fully staffed than others. OFS provided information on two types of vacancies: positions the agency is currently in the process of filling (current vacancies) and positions the agency anticipates based on the projected size of each office over time (anticipated vacancies). While the offices of the Chief Financial Officer and Chief Investment Officer have identified only a few current vacancies, the offices of the Chief Risk and Compliance Officer and Chief Homeownership Preservation Officer have identified several (see table 10). Current vacancies that Treasury has identified within OFS include senior positions for program compliance within the office of the Chief Risk and Compliance Officer and leadership positions for data analysis and for communications and marketing within the office of the Chief Homeownership Preservation Officer. In some instances, OFS has filled important personnel gaps. For example, since our March 2009 report, OFS has filled two new staff positions for program and data management analysts to support its oversight of financial agents. Treasury has made progress in developing a more routine process for hiring OFS staff. During the transition from the previous administration, with new TARP responsibilities still emerging and OFS functional areas still developing, Treasury employed an informal approach to hiring in order to bring employees on board expeditiously and meet immediate mission needs. As TARP activities have solidified and become more stable, Treasury and OFS staff have been better able to identify the skills and abilities OFS needs and to develop a more structured hiring process.
Currently, Treasury routinely updates its Web site, www.financialstability.gov, to inform potential candidates of new OFS vacancies. These vacancy announcements are linked to job announcements posted on the USAJOBS Web site. Additionally, Treasury has developed more systematic approaches to reviewing applications and interviewing candidates. For example, Treasury recently updated its standard operating procedures for hiring staff to OFS. This includes a procedure describing how to bring on board federal employees to serve as detailees in OFS. While Treasury has developed more formal processes for assessing candidates seeking employment with OFS, the department still uses flexible hiring strategies in order to ensure that it is recruiting candidates with the right skill sets and abilities to meet OFS mission needs. For example, Treasury still utilizes the flexibilities provided under direct hire authority to select candidates for employment who do not submit formal applications via www.usajobs.gov. Nonetheless, Treasury officials said that they encourage all candidates expressing interest in OFS employment to apply via announcements posted on www.usajobs.gov whenever feasible. In addition, to retain critical skills learned on the job, Treasury has established a process to ensure knowledge transfer between outgoing and incoming OFS detailees. Treasury continues to experience challenges in hiring qualified employees, however, in part due to pay disparities with federal financial regulatory agencies. In the past, Treasury told us that it had identified candidates with the right skills and abilities to fill various OFS positions, but these candidates often worked for financial regulators that could offer more competitive salaries than OFS. To mitigate the effects of pay differences, Treasury has employed some strategies that are available to all federal agencies. 
In particular, Treasury has utilized maximum payable rates and offered promotions to mid-level career employees. According to Treasury, these incentives have been helpful in hiring some employees who had previously worked at financial regulatory agencies. Nonetheless, Treasury noted that while these tools have been useful in attracting lower- and mid-level career employees, they do not always address substantial differences between the compensation OFS can offer senior executives and the rates offered by financial regulators. In addition, while the department has the ability to use recruitment bonuses, use of this incentive has been limited to employees who are not currently government employees and therefore has not been used to recruit employees from financial regulatory agencies. Moreover, while Treasury may use relocation bonuses, its use of these for recruiting employees from financial regulatory agencies has been limited because most candidates currently working for financial regulatory agencies would not have to relocate to accept a position in OFS. As mentioned in our prior work, Treasury has told us that vetting OFS candidates’ potential conflicts of interest has added time to the hiring process. In particular, there has been heightened concern about employees’ financial interests creating potential conflicts because TARP decision-making activities often involve providing funds to various financial institutions and targeting assistance to certain types of investments (such as mortgage-backed securities) that new employees might hold. Treasury officials told us they had taken a number of steps to manage potential conflicts of interest. First, Treasury officials have been obtaining information on candidates’ potential conflicts earlier in the hiring process, through preliminary reviews of information provided on financial disclosure reports. 
OFS employees are subject to the same laws and regulations covering ethical codes of conduct as employees of other executive branch agencies. Accordingly, OFS employees are prohibited from participating personally and substantially in a particular matter that will affect their financial interests or those of (1) a spouse or minor child; (2) a general partner; (3) an organization for which they serve as an officer, director, trustee, general partner or employee; or (4) a person with whom they are negotiating for employment or have an arrangement concerning prospective employment. In accordance with the Ethics in Government Act, Senate-confirmed appointees, members of the Senior Executive Service, and other senior- level executive branch employees must disclose assets and other interests that are attributable to them when beginning federal service and annually thereafter in a public financial disclosure report. Other OFS employees whose duties involve the exercise of significant discretion are required by regulation to report their financial interests on a confidential financial disclosure report (see table 11). Employees required to file a financial disclosure report must do so within 30 days of appointment, unless granted an extension. Treasury said it had obtained and retained a copy of the financial disclosure reports filed by detailees with their home agencies. Treasury has used databases to track reviews of Treasury employee financial disclosure reports. These databases provide sufficient evidence to demonstrate that, in general, OFS employees have filed financial disclosure reports within 30 days of their appointment. We found that in all but two cases, individuals required to complete these reports filed them within 30 days of their appointment to OFS. In one case, the employee was granted an extension to file and filed before the expiration of the extension period. 
In the other case, the employee appears to have submitted the report on time, but it was not officially marked as received by Treasury ethics counsel until 1 business day after the expiration of the 30-day time-to-file period. Our analysis also supports Treasury’s statement that it usually vets conflicts of interest earlier in the hiring process for OFS staff than for employees in other areas of Treasury. We found that, on average, permanent OFS employees required to submit confidential financial disclosure reports filed them about 21 days before their appointment. Moreover, we found that the majority of OFS employees coming from outside the federal government who were required to submit public financial disclosure reports filed the reports in advance of their appointment to OFS. To address the unique aspects of TARP operations in its reviews of OFS employees’ financial disclosure reports, Treasury established new internal operating procedures on February 17, 2009, concerning the submission and review of OFS employees’ confidential financial disclosure reports. To facilitate a preliminary identification and communication of obvious potential conflicts, the new procedures set out as a goal to have OFS candidates submit for initial review confidential financial disclosure reports with Treasury ethics counsel before their formal appointment to OFS. Generally, Treasury has followed this new procedure. In our review, we found that of the 31 employees filing confidential financial disclosure reports who were appointed to OFS on or after February 17, 2009, Treasury ethics counsel received copies of such reports in advance of the candidate’s appointment to OFS in all but three cases. The new procedures outlined plans for Treasury ethics counsel to better coordinate with OFS supervisors during their reviews of confidential financial disclosure reports submitted by OFS candidates. 
Treasury officials said that the new coordination effort was helpful because OFS mission staff were often more familiar with the day-to-day roles and responsibilities of employees directly under their supervision. One of the tracking databases provides some evidence to support Treasury’s assertion that it routinely coordinates reviews of employees’ financial interests with OFS mission staff. Specifically, the database includes a field that tracks the dates of supervisory OFS staff reviews of confidential disclosure reports. In reviewing the database, we identified several instances in which OFS supervisors had reviewed confidential financial disclosure reports within a few days of the Treasury ethics counsel’s initial review. We found that for 42 permanent employees, OFS supervisors reviewed confidential financial disclosure reports, on average, 5 days after Treasury’s ethics counsel first received the reports. However, the supporting information is somewhat limited because the supervisory review field was incomplete for 14 of the 56 database pages we reviewed. Treasury’s ethics counsel told us that this information was absent most often because of a lag in data entry. Specifically, Treasury said that dates might be entered into the database some time after the reviews were complete because supervisory mission staff might retain the reports for extended periods to, among other things, track potential conflicts identified in the reports and help ensure that employees recuse themselves from matters in which they had a financial interest. Treasury provides various types of training to employees to help them understand conflicts of interest and ensure compliance with ethical standards of conduct. According to Treasury, this training is more rigorous for employees whose jobs have higher potential to involve financial or other conflicts. 
Treasury officials said that all employees receive group training at orientation and that certain employees whose positions are of a more sensitive nature receive one-on-one training with an ethics officer. The databases also support Treasury’s statement that it provided both individual and group-based ethics training to OFS staff. Specifically, we found that as of April 23, 2009, all OFS staff who completed financial disclosure reports had received at least one ethics training session, and almost half had received two or more types of ethics training. While one database lacked some information on specific training dates, it did provide some information on the types of training provided to these individuals (such as one-on-one training with ethics officers, makeup training sessions, or group training conducted at orientation). OFS uses a variety of other measures to manage potential conflicts of interest. Federal law permits Treasury to authorize a waiver permitting an employee to hold certain financial interests if Treasury determines that holding such interests does not substantially interfere with the integrity of the individual’s performance. According to Treasury, to date, two waivers have been issued to OFS employees. One of these waivers gave a new OFS employee 90 days to exchange assets held in pooled investment funds that could have presented a conflict for nonconflicting assets. In the other case, after determining that a senior OFS official’s deposits in a banking institution could present a conflict of interest to the extent that the deposits exceeded the FDIC-insured limit of $250,000, Treasury, as a precautionary measure, issued a waiver permitting the individual to retain these deposit accounts. In both cases, Treasury determined that the investments involved were not likely to affect the integrity of the individual’s federal service.
In addition, when reviewing financial disclosure reports, Treasury ethics counsel consulted with OFS employees on what activities they should recuse themselves from participating in during their employment with OFS because such activities could have potentially interfered with the independent and objective performance of their jobs. According to Treasury, during reviews of financial disclosure reports, OFS employees have agreed to divest themselves of certain financial assets to mitigate potential conflicts. Although Treasury does not routinely track divestments, Treasury provided some documentation demonstrating that multiple OFS employees divested assets that might have caused a conflict with their official duties. Treasury has appropriately identified potential conflicts of interests among senior-level OFS officials and has taken appropriate steps to address such issues. We reviewed 15 public financial disclosure reports submitted by OFS officials as of April 23, 2009. Seven of the reports reviewed had already been submitted to the detailees’ federal agencies during the past fiscal year, but Treasury’s ethics counsel reviewed the reports again to assess potential conflicts in the context of the employee’s OFS duties. In our review of the reports, we identified financial interests that could have conflicted with the independent and objective performance of some duties. During our consultation with Treasury’s ethics counsel, however, we found that the same interests had already been identified, and we obtained information showing that the ethics counsel had taken the appropriate steps to address them. For example, in some cases, Treasury’s ethics counsel instructed individuals to divest themselves of certain investments. In other cases, Treasury’s ethics counsel directed individuals to recuse themselves from matters involving former employers or firms that compensated them for consulting services. 
Since our March 2009 report, Treasury has awarded 11 new contracts and entered into four new financial agency agreements, bringing to 40 the total number of TARP financial agency agreements, contracts, and blanket purchase agreements as of June 1, 2009. Of the 11 new contracts, 4 are in support of services related to the automotive industry, 2 are for legal services related to PPIP, 1 is for legal services related to small business loans and securities, 1 is to perform credit reform modeling analysis, and 3 are for OFS facilities services. Of the 4 new financial agency agreements, 1 is for asset management services in support of the small business program, and 3 are for asset management services in support of CPP. Since March 2009, Treasury has used expedited procedures to award seven contracts using other than full and open competition based on unusual and compelling urgency. Treasury also used the General Services Administration’s Federal Supply Schedule in three instances. In most cases, Treasury solicited and received offers from multiple firms. While competition requirements do not apply to Treasury’s authority to designate financial agents, Treasury issued a general solicitation for asset manager proposals in support of CPP and received more than 200 submissions, from which it made its three current selections. Treasury has yet to decide on the extent to which it will need additional asset managers. For detailed status information on new, ongoing, and completed Treasury contracts and agreements as of June 1, 2009, see GAO-09-707SP. Treasury encourages small businesses, including minority- and women-owned businesses, to pursue procurement opportunities on TARP contracts and financial agency agreements. OFS has considered potential vendors’ efforts to utilize small businesses as part of its selection criteria on most contracts and some financial agency agreements.
As of June 1, 2009, Treasury has awarded nine of its 40 prime contracts or financial agency agreements (23 percent) to small or minority- and women-owned businesses. Two of the new prime contracts awarded since our March 2009 report were awarded to small businesses for credit reform analysis and OFS facilities services, one was awarded to a small minority/women- owned business for legal support to PPIP, and two of the new financial agency agreements are with minority- and women-owned businesses for asset management services. To date, however, the majority of small or minority- and women-owned businesses participating in TARP are subcontractors with TARP prime contractors. According to OFS officials, as of June 1, 2009, 30 of 42 TARP subcontractors (71 percent) represented small or minority- and women-owned business categories, as shown in table 12. As of June 1, 2009, legal services contracts and financial agency agreements continue to account for the majority (67 percent) of services used to directly support OFS’s administration of TARP, as shown in figure 4. As of the same date, Treasury had expended $48,894,415 for actions related to contracts and agreements—a $37 million increase in contract and financial agency agreement expenses in the last 2 months alone. The largest share of the total (38 percent) was for legal services, and the second-largest share (24 percent) was for services provided by financial agents. Since our March 2009 report, Treasury has increased its fiscal year 2009 budget estimate from $175 million to $263 million to cover higher anticipated costs for OFS’s use of contractors and financial agents, interagency agreement obligations, information technology services, office rental, and other facilities costs. 
According to OFS budget officials, the estimated $88 million budget increase is due primarily to financial agency agreement costs for Fannie Mae and Freddie Mac, the addition of new TARP programs, and the realignment of some budget categories. Treasury provides a basic descriptive listing of information on its contracts and financial agency agreements through its TARP Web site and its monthly report to Congress pursuant to section 105(a) of the act. However, this reporting lacks the detail Congress and other interested stakeholders need to track the progress of individual contracts and agreements—such as a breakdown of obligations and/or expenses, in dollars, by each entity. As OFS’s capacity to manage and monitor TARP contracts and other agreements continues to grow, making this type of information public on a regular basis would be useful, in addition to the information Treasury already reports. Some of the principal federal banking regulators involved in activities related to TARP (Federal Reserve, FDIC, OCC, and OTS) currently use or plan to use contractors in support of activities related to the program. Officials reported that, as of June 1, 2009, the Federal Reserve was contracting with four firms to provide support for AGP, including financial evaluation and accounting services related to Federal Reserve loans made to Citigroup and Bank of America. In addition, FDIC plans to obtain future contractor support to assist with activities related to PPIP’s Legacy Loans Program. Though this program is still in development, FDIC anticipates that contractor services in support of the program may include financial advisory services, asset valuation, oversight and compliance monitoring, title assignment, trustee services, and master servicer responsibilities. OFS continues to implement its system of compliance to manage and monitor potential conflicts of interest that may arise with contractors and financial agents seeking or performing work under TARP. 
In response to the January 2009 TARP conflicts-of-interest interim rule, OFS received nine comments before the public comment period ended on March 23, 2009. OFS anticipates that the process of developing a final rule on conflicts of interest may take several months to complete. We continue to track the actions OFS has taken to address two prior recommendations: (1) to complete the review of, and as necessary renegotiate, the four vendor conflicts-of-interest mitigation plans that predated Treasury’s interim rule to enhance specificity and conformity with the interim rule and (2) to issue guidance requiring that key communications and decisions concerning potential or actual vendor-related conflicts of interest be documented. Since March, OFS has made progress toward completing the review, and as necessary the renegotiation, of the four preexisting vendor conflicts-of-interest mitigation plans. In addition, Treasury extended the period of performance for two existing legal services contracts in March 2009. Of these six required reviews, two were completed as of May 2009, resulting in updated contract language and revised mitigation plans. OFS anticipates completing all remaining reviews and any necessary renegotiations by the end of July 2009. The two contracts OFS revised now include specific language mirroring the interim rule and provide more details regarding required disclosures and certifications. The revised language also added provisions such as requirements for conflicts-of-interest training for staff working under the agreement, prohibitions on offers of future employment or gifts to Treasury employees, and requirements that conflicts-of-interest rules apply to subcontractors and consultants. One of the two contracts was revised to include more specificity in the conflicts-of-interest mitigation plan regarding steps to mitigate potential organizational and personal conflicts, codes of ethics, and gift policies.
Based on our review, the revised requirements in these contracts match those in new contracts that were awarded after the interim rule was issued. OFS concurred with, and has taken initial steps to implement, the second recommendation that it issue guidance requiring that key communications and decisions concerning vendor-related conflicts of interest be documented, but it has yet to complete this task. OFS has drafted the process flows for the formal inquiry process, illustrating how OFS tracks and documents decisions concerning vendor-related conflicts of interest. OFS plans to discuss implementation of this process at an internal training of its contracting officer’s technical representatives and financial agent relationship managers on June 23, 2009. While isolating and estimating the effect of TARP programs continues to present a number of challenges, indicators of perceptions of risk in credit markets generally suggest improvement since our March 2009 report, although the cost of credit has risen in some markets. As we have noted in prior reports, if TARP is having its intended effect, a number of developments might be observed in credit and other markets over time, such as reduced risk spreads, declining borrowing costs, and more lending activity than there would have been in the absence of TARP. However, a slow recovery does not necessarily mean that TARP is failing, because it is not clear what would have happened without the programs. In particular, several market factors helping to explain slow growth in lending include weaknesses in securitization markets and the balance sheets of financial intermediaries, a decline in the demand for credit, and the reduced creditworthiness among borrowers. 
Nevertheless, credit market indicators we have been monitoring suggest that, while some rates have increased since our March 2009 report, there has been broad improvement in interbank, mortgage, and corporate debt markets in terms of perceptions of risk (as measured by premiums over Treasury securities). In addition, empirical analysis of the interbank market, which showed signs of significant stress in 2008, suggests that CPP and other programs outside TARP that were announced in October 2008 have resulted in a statistically significant improvement in risk spreads even when other important factors were considered. Although foreclosures continue to highlight the challenges facing the U.S. economy, total mortgage originations rose roughly 70 percent over the fourth quarter of 2008. Similarly, while Federal Reserve data show that lending standards remain tight, our analysis of Treasury’s new loan survey indicates that the largest 21 CPP recipients extended roughly $260 billion, on average, each month in new loans to consumers and businesses in the first quarter of 2009. In our previous reports, we highlighted the rationale for CPP, CAP, TALF, and the Home Affordable Modification Program (HAMP) and the intended effects of these programs. Among other improvements, the TARP programs, if effective, should jointly result in the following: improved credit market conditions, including declining risk premiums (the difference between risky and risk-free interest rates, such as rates on U.S. Treasury securities) for interbank lending and bank debt, and lower borrowing costs for businesses and consumers; stronger bank balance sheets, enhancing lenders’ ability to borrow, raise capital, and lend to creditworthy borrowers (although, as we have discussed in previous reports, tension exists between promoting lending and improving banks’ capital position); and fewer foreclosures and delinquencies than would otherwise occur in the absence of TARP. 
The programs should also improve asset-backed securities markets, a development that would increase the availability of new credit to consumers and businesses, lowering rates on credit card, automobile, small business, student, and other types of loans traditionally facilitated by securitization. While TARP’s activities could improve market confidence in participating banks and have other beneficial effects on credit markets, we have also noted in our previous reports that several factors will complicate efforts to measure any impact. For example, any changes attributed to TARP may well be changes that (1) would have occurred anyway; (2) can be attributed to other policy interventions, such as the actions of FDIC, the Federal Reserve, or other financial regulators; or (3) have been enhanced or counteracted by other market forces, such as the correction in housing markets and revaluation of mortgage-related assets. Consideration of market forces is particularly important when using bank lending as a measure of CPP’s and CAP’s success because it is not clear what would have happened in the absence of TARP. Weaknesses in the balance sheets of financial intermediaries, a decline in the demand for credit, reduced creditworthiness among borrowers, and other market fundamentals suggest lower lending activity relative to the expansion phase of the business cycle. Similarly, nonbank financial institutions, which have accounted for a significant portion of lending activity over the past two decades, have been constrained by weak securitization markets. Because it is unlikely that any increase in loans originated by banks would completely offset the decline in nonbank activity, the weakness in securitization markets suggests that growth in aggregate lending will be slow. Success in supporting nonbank financial institutions and revitalizing the securitization market will depend in part on the success of TALF. 
Lastly, because the extension of credit to less-than-creditworthy borrowers appears to have been an important factor in the current financial crisis, it is not clear that lending should return to precrisis levels. As discussed in our March 2009 report, Treasury has introduced PPIP to facilitate the purchase of legacy loans and securities. The program aims not only to reduce uncertainty about the solvency of holders of these assets but also to encourage price discovery in markets for these assets, assuming current market prices are below what they would otherwise be in a normally functioning market. The impact of PPIP will depend in particular on the pricing of the purchased assets. Sufficiently high prices will allow financial institutions to sell assets, deleverage, and improve their capital adequacy. To the extent that markets are underpricing such assets or prices are suppressed due to illiquidity, higher prices may be more reflective of the underlying value or cash flows associated with the assets (and therefore aid in price discovery). However, all other things being equal, higher prices impose certain risks on Treasury, FDIC, and the Federal Reserve if prices paid are too high, as these agencies will absorb losses beyond the equity supplied by investors. The contribution of private-sector equity capital reduces incentives to overpay for assets, depending on the proportion of equity supplied, because greater equity contributions entail greater downside risk for buyers. In addition to providing more transparent pricing for these assets, PPIP, if it is effective, should have effects broadly similar to the intended effects of CPP and CAP: improved solvency at participating institutions, reduced uncertainty about their balance sheets, and improved investor confidence, allowing these institutions to borrow and lend at lower rates and raise additional capital from the private sector. 
We continue to consider a number of indicators that, although imperfect, may be suggestive of TARP’s impact on credit and other markets. Improvements in these measures would indicate improving conditions, even though those changes may be influenced by general market forces and cannot be exclusively linked to any one program or action being undertaken to stabilize and improve the economy. Table 13 lists the indicators we have reported on in previous reports, as well as the changes since the March 2009 report and the changes since the announcement of CPP, the first TARP program. In general, the indicators illustrate that the cost of credit and perceptions of risk have declined in corporate debt, mortgage, and interbank markets since mid-October 2008, although the cost of credit has risen in some markets since our March 2009 report. For example, the cost of interbank credit (LIBOR) has declined by 38 basis points since our March 2009 report, and the TED spread, which captures the risk perceived in interbank markets, has declined by 57 basis points. Since the announcement of CPP, the LIBOR and TED spreads have fallen by approximately 400 basis points. Since the announcement of CPP, corporate bond spreads have declined, and there have been significant decreases of 101 and 207 basis points for high-quality (Aaa) and moderate-quality (Baa) corporate spreads, respectively, since our March 2009 report, indicating reduced risk perceptions. Although the Aaa bond market rate has increased somewhat since our March 2009 report, both Aaa and Baa bond rates have declined since the announcement of CPP, indicating a decrease in the cost of credit for businesses. Similarly, the improvement in the mortgage market is consistent across rates and spreads, although rates have risen sharply in recent weeks. Mortgage rates were up 61 basis points since our March 2009 report, largely due to significant increases over the last two weeks. 
However, the mortgage spread is down 53 basis points. Since the announcement of CPP, the improvement in the mortgage market was consistent across rates and spreads—down 87 basis points and 74 basis points, respectively. (See our December and January reports for a more detailed description and motivation for the indicators.) Recent trends in these metrics are consistent with indicators monitored by GAO but not reported here, as well as with those tracked by other researchers. For example, although not reported, the credit default swap index for the banking sector has declined significantly since March 2009. As discussed above, changes in credit market conditions may not provide conclusive evidence of TARP’s effectiveness, as other important policies, interventions, and changes in underlying economic conditions can influence these markets. To examine further whether the decline in the TED spread could be attributed in part to CPP, we conducted additional analysis using a simple econometric model to address one of the most obvious threats to validity. Because the TED spread reached extreme values leading up to the CPP announcement (over 450 basis points), it is possible there would have been declines from these peaks even in the absence of CPP simply because extreme values have a tendency to return to normal levels. However, even when we accounted for this possibility and the general state of the economy using variables such as stock market performance and the spread between long- and short-term Treasuries, we found that CPP, announced on October 14, 2008, had a statistically significant negative impact on changes in the TED spread. Even so, the associated improvement in the TED spread (or LIBOR) cannot be attributed solely to TARP because the October 14 announcement was a joint announcement that introduced other Federal Reserve and FDIC programs in addition to CPP. 
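The basis-point arithmetic underlying these indicator comparisons is simple but easy to misread. The following minimal sketch uses hypothetical rates, not the actual figures from table 13 (one basis point = 0.01 percentage point):

```python
def spread_bp(risky_rate_pct, riskfree_rate_pct):
    """Spread between a risky rate and a risk-free rate, in basis points.

    Rates are quoted in percent; 1 percentage point = 100 basis points.
    """
    return (risky_rate_pct - riskfree_rate_pct) * 100.0

# Hypothetical illustrative rates, not the report's data:
libor = 0.66   # 3-month LIBOR, in percent
tbill = 0.18   # 3-month Treasury bill rate, in percent
ted = spread_bp(libor, tbill)   # TED spread
print(f"TED spread: {ted:.0f} basis points")
```

A narrowing spread (such as the 57-basis-point decline in the TED spread noted above) signals reduced perceived risk even when the two underlying rates move in the same direction.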
Moreover, the model we used is relatively simple and did not attempt to account for all of the important factors that might influence the TED spread. Omitting such variables could bias the results in unpredictable ways. (See appendix III for additional information and limitations.) We continue to monitor mortgage originations and foreclosures as potential measures of TARP’s effectiveness. As table 13 indicates, mortgage originations increased over 70 percent, from $260 billion in the fourth quarter of 2008 to $445 billion in the first quarter of 2009 (see also fig. 5). We noted in previous reports that if TARP worked as intended, we expected mortgage originations to stop declining and eventually rise. While the volume of new mortgage lending may reflect the availability of credit, it may also indicate changes in credit risk or the demand for credit. As figure 5 illustrates, mortgage applications also increased in the first quarter, principally due to refinancing. Although originations were still below the level in the first quarter of 2008, it is not clear that originations would or should return to the level seen in the period leading up to the credit market turmoil. Similarly, foreclosure data, although also influenced by general market forces like falling housing prices and job loss, should provide an indication of the effectiveness of HAMP and CPP to the extent that improved market conditions enhance the ability of creditworthy borrowers to refinance mortgages. However, it is too soon to expect material changes in this area given that HAMP was only recently implemented. As table 13 shows, the percentage of loans in foreclosure reached an unprecedented high of 3.9 percent at the end of the first quarter of 2009, up from 3.3 percent the previous quarter. The foreclosure rate on subprime loans rose to 14.3 percent from 13.7 percent (the rate for adjustable-rate subprime loans is now over 23 percent). 
We will provide additional information on foreclosures and general conditions in mortgage markets in future TARP-related and other reports to Congress. Our analysis of Treasury’s loan survey showed that the largest CPP recipients continued to extend loans to consumers and businesses, roughly $260 billion on average each month in the first quarter of 2009. Because these data are unique, we were not able to benchmark the origination levels against historical lending or seasonal patterns at these institutions. As illustrated in figure 6, new lending at the 21 largest institutions participating in CPP fell 6 percent in February and rose 27 percent in March, month over month. Although lending normally drops during a recession, and the April 2009 release of the Federal Reserve’s loan officer survey showed that lending standards for consumer and business credit remained tight, aggregate new lending by these institutions in March amounted to roughly $295 billion (see table 14), or 41 percent higher than the low recorded in November 2008. Consistent with the trends in aggregate mortgage originations discussed above, total mortgage originations for the largest CPP banks rose 15 percent to roughly $117 billion. The reporting institutions generally received CPP funds on October 28, 2008, or November 14, 2008, with a few institutions receiving funds on December 31, 2008, or January 9, 2009. As we discussed in the March report, TALF support to securitization markets should, if effective, result in lower rates and increased availability of credit for the businesses and households that receive the underlying loans. The primary consumer ABS markets include ABS backed by auto loans, credit card receivables, and student loans. Although TALF is in its early stages, we have begun monitoring lending activity at the institutions most likely to be affected by conditions in securitization markets. 
For example, because stand-alone auto finance companies are more heavily reliant on securitization than commercial banks, we noted that changes in the trends in their automobile loan rates could partially reflect the issues in securitization markets that TALF is intended to address. As figure 7 shows, the average finance company auto rate has been consistently below commercial bank auto rates. However, from August to November 2008 the average finance company rate increased significantly, rising by 132 basis points, while the average bank rate increased just slightly (13 basis points). In contrast, from November 2008 to February 2009, the finance company rate declined significantly (326 basis points) to 3.2 percent—well below the bank rate, which fell only 13 basis points. The average rate for new automobile loans at finance companies declined another 43 basis points to 2.74 percent during March. While these declines correlate with the launching of TALF, the finance rate could also reflect the attempt by auto finance companies to attract buyers in a weak market, as well as other forces. We will continue to monitor these trends as well as data on credit card debt and other consumer and business loan markets. Moreover, because TALF has been expanded to other assets, including commercial MBS, other measures of lending activity and loan rates may become more appropriate indicators as time progresses. Treasury has continued to take steps to refine some TARP programs and finalize others. In doing so, it has taken steps to address our previous recommendations. Some areas, however, require ongoing attention. For example, Treasury has hired the asset managers that will have a role in monitoring compliance with the terms of CPP and other programs, but it is continuing to develop a comprehensive oversight program for all TARP program recipients. 
Consistent with our recommendation for greater disclosure of monies paid to Treasury by TARP participants, Treasury now includes dividends and interest received in its periodic reports to Congress that are also posted to the www.financialstability.gov Web site and plans to provide dividend information by institution on the Web site. OFS has also made progress in filling key positions in most areas, but some vacancies continue to be more challenging to fill. Finally, Treasury has made additional progress in improving its communication strategy, including hiring an individual who will be responsible for managing OFS’s relationships with Congress, among other duties, but continued progress in this area would further improve the transparency of the program. Appendix II provides our assessment of Treasury’s implementation of our previous recommendations. Since our March 2009 report, Treasury has hired its first asset managers to help manage its investment portfolio and help monitor compliance with limitations on dividend payments and stock repurchases. However, Treasury has yet to clearly identify the role that asset managers will have in monitoring compliance; it has only noted that the asset managers will have a limited role in the area of executive compensation oversight. While hiring these managers is an important step, Treasury has yet to develop a structured process to oversee compliance with program requirements and the act. As noted in prior reports, we will continue to monitor developments in this area, which is critical to ensuring the accountability and integrity of the program. The Federal Reserve’s completion of the stress tests for the 19 largest bank holding companies was a significant milestone for CAP. 
While stress test results revealed that about half of the banks needed to raise additional capital to ensure their ability to continue lending to creditworthy borrowers and maintain sufficient capital against losses, it remains unclear whether any of the institutions will have to use CAP to raise additional capital. The results of the stress test provided a rare glimpse into the condition of these institutions, but questions have been raised about the stress test assumptions, given the ongoing challenges in financial markets. Moreover, the Federal Reserve does not plan to provide any additional information on the condition of the banks over the next 18 months that could show whether the banks had met their projected performance and loss levels. The extent to which the institutions will disclose additional information is unclear. As a result, the information provided could be selective and difficult to compare across institutions, raising questions about the transparency not only of SCAP but also of CAP. Moreover, the Federal Reserve did not provide OFS staff with information about SCAP prior to its public release and has no plans to share ongoing information about any of the SCAP institutions that continue to be CPP or CAP participants. Without such information, OFS lacks the information needed to adequately monitor these programs. Although several banks have repurchased or announced plans to repurchase their preferred shares and warrants, the regulators’ repurchase approval criteria have lacked adequate transparency. The Federal Reserve has provided criteria for the 19 largest bank holding companies, but the other regulators have not consistently provided details about how they have made repurchase determinations and how they will make future determinations. 
Clearly articulated and consistently applied criteria are indicative of a robust decision-making process, and without them, Treasury’s ability to help ensure consistent treatment of institutions requesting repurchase of their shares is limited. Similarly, Treasury has provided limited information about the warrant repurchase process on its www.financialstability.gov Web site. We recognize the challenges associated with valuing warrants in the absence of readily available markets for these instruments. For this reason, and because the valuation process can be assumption driven, a well-designed, fully vetted, and transparent process becomes critical to defusing questions about the warrant valuation process and whether the resulting prices paid by the institutions reflect the taxpayers’ best interests. While Treasury has provided some limited information about the valuation process, it has yet to provide the level of transparency at the transaction level that would begin to address such questions. Additional information, such as the institution’s initial offer and Treasury’s final valuation, would begin to address some of these issues. Treasury has taken steps toward implementing a communication strategy, such as developing a new Web site and establishing a media relations position dedicated to TARP. Treasury has also included its public affairs and legislative affairs staff in regular meetings with OFS to ensure that communication and operations are better integrated. However, Treasury’s current communication strategy may not be as effective as it could be. Treasury has recognized the importance of reaching out to congressional stakeholders on a regular and proactive basis and planned to do more to ensure that all committees of jurisdiction receive regular communication about TARP. However, until this strategy is fully implemented, congressional stakeholders may not receive information in a consistent or timely manner. 
In addition, although Treasury has said that the new www.financialstability.gov Web site is a key component of its efforts to improve communication on TARP, it has not yet taken steps to determine whether the site is user-friendly or whether visitors to the site are finding the information they seek. Usability testing and customer satisfaction surveys are recognized best practices for improving the usefulness of Web sites. While Treasury is in the process of exploring the use of such tools, these efforts should be implemented as quickly as possible to gauge the effectiveness of its communication efforts. Treasury has continued to make progress in establishing its management infrastructure and has responded to our two most recent contracting recommendations and continued to respond to the others. In the hiring area, Treasury has continued to establish its management infrastructure, including hiring more staff. In accordance with our prior recommendation that it expeditiously hire personnel for OFS, Treasury continued to use direct-hire and various other appointments to bring a number of career staff on board quickly. Since our March 2009 report, Treasury has continued to increase the total number of OFS staff overall, including the number of permanent staff. However, continued attention to hiring remains important because some offices within OFS, such as the offices of Homeownership and Risk and Compliance, continue to have a number of vacancies that need to be filled as TARP programs become fully implemented. In the internal controls area, consistent with our previous report recommendation that Treasury update guidance available to the public on determining warrant exercise prices to be consistent with actual practices applied by OFS, Treasury updated its frequently asked questions on its Web site to clarify the process it follows for determining the prices. 
However, there continues to be inconsistent guidance available on the Web site for calculating the exercise prices. Treasury told us that any new CPP applicants would most likely be non-public institutions for which these guidance documents would not apply. As such, Treasury does not believe the inconsistent guidance is a significant issue and therefore does not plan on further addressing the inconsistency. If this warrant exercise price guidance is no longer needed, then we believe that Treasury should remove these guidance documents from its Web site to alleviate any inconsistent descriptions of its process pertaining to warrant exercise price calculations for public institutions. If Treasury chooses to leave the documents on its Web site, then, as we previously recommended, Treasury should make these documents consistent with respect to the warrant exercise price calculations. Treasury has continued to build a network of contractors and financial agents to support TARP administration and operations and has an opportunity to enhance transparency through its existing reporting mechanisms. Treasury issues a number of reports and uses other mechanisms, such as public announcements and its Web site, to provide information to the public. Useful details are still lacking, however, on the costs of procurement contracts and financial agency agreements, such as a breakdown of obligations and expenses for each entity. These contracts and agreements are key tools OFS has used to help develop and administer its TARP programs. By not providing this information, Treasury is missing an opportunity to provide additional transparency about the cost of TARP operations. Finally, while again noting the difficulty of measuring the effect of TARP’s activities, some indicators suggest general improvements in various markets since our March 2009 report, although the cost of credit has risen in some cases. 
Specifically, the Baa corporate bond rate and LIBOR have declined but mortgage and Aaa bond rates have risen. However, perceptions of risk in credit markets (as measured by premiums over Treasury securities) have decreased in interbank, mortgage, and corporate bond markets, while total mortgage originations have increased. Empirical analysis of the interbank market, which showed signs of significant stress in 2008, suggests that CPP and other programs outside of TARP that were announced in October 2008 resulted in a statistically significant improvement in risk spreads, even when other important factors were considered. In addition, although Federal Reserve survey data suggest that lending standards remained tight, collectively the largest CPP recipients extended roughly $260 billion on average each month in new loans to consumers and businesses in the first quarter of 2009, according to Treasury’s loan survey. However, attributing any of these changes directly to TARP continues to be problematic because of the range of actions that have been and are being taken to address the current crisis. While these indicators may be suggestive of TARP’s ongoing impact, no single indicator or set of indicators can provide a definitive determination of the program’s impact. While the Department of the Treasury has taken actions to address our previous recommendations, we continue to identify areas that warrant ongoing attention and focus. Therefore, we recommend that Treasury take the following five actions as it continues to improve TARP and make it more accountable and transparent: Ensure that the warrant valuation process maximizes benefits to taxpayers and consider publicly disclosing additional details regarding the warrant repurchase process, such as the initial price offered by the issuing entity and Treasury’s independent valuations, to demonstrate Treasury’s attempts to maximize the benefit received for the warrants on behalf of the taxpayer. 
In consultation with the Chairmen of the Federal Deposit Insurance Corporation and the Federal Reserve, the Comptroller of the Currency, and the Acting Director of the Office of Thrift Supervision, ensure consideration of generally consistent criteria by the primary federal regulators when considering repurchase decisions under TARP. Fully implement a communication strategy that ensures that all key congressional stakeholders are adequately informed and kept up to date about TARP. Expedite efforts to conduct usability testing to measure the quality of users’ experiences with the financial stability Web site and measure customer satisfaction with the site, using appropriate tools such as online surveys, focus groups, and e-mail feedback forms. Explore options for providing to the public more detailed information on the costs of TARP contracts and agreements, such as a dollar breakdown of obligations and/or expenses. Finally, to help improve the transparency of CAP—in particular the stress tests results—we recommend that the Director of Supervision and Regulation of the Federal Reserve consider periodically disclosing to the public the aggregate performance of the 19 bank holding companies against the more adverse scenario forecast numbers for the duration of the 2-year forecast period and whether or not the scenario needs to be revised. At a minimum, the Federal Reserve should provide the aggregate performance data to OFS program staff for any of the 19 institutions participating in CAP or CPP. We provided a draft of this report to Treasury for review and comment. We also provided excerpts of the draft to the FDIC, Federal Reserve, OCC, and OTS. We received written comments from Treasury that are reprinted in appendix I. The Federal Reserve provided oral comments, which we discuss later. We also received technical comments from Treasury, the Federal Reserve, and FDIC that we incorporated, as appropriate. 
In its written comments, Treasury described steps it had taken in the last 60 days to address the extraordinary economic challenges, including the Treasury-financed restructurings of GM and Chrysler, among others. Treasury also noted the progress it has made in addressing our previous recommendations. It also noted that the recommendations in this report were constructive as it implements its programs and enhances OFS’s performance. Moreover, Treasury said several initiatives underway are consistent with our recommendations. According to Treasury, among other things, it is in the process of expanding its public disclosure about the warrant repurchase process, implementing a communication strategy that will provide all key congressional stakeholders more current information about TARP, and planning a usability test to measure satisfaction with its new Web site. We will continue to monitor Treasury’s progress in implementing these and other planned initiatives in future reports. On June 12 and 15, 2009, we received oral comments from the Senior Advisor to the Director of the Division of Banking Supervision and Regulation on excerpts of the draft pertaining to the Federal Reserve. The official expressed concern that our recommendation to consider periodically disclosing aggregate information to the public on the performance of the 19 U.S. bank holding companies against the more adverse scenario would be operationally difficult and potentially misleading. Specifically, the official said the SCAP loss estimates were developed as aggregate 2-year estimates, without attempting to forecast the quarter-to-quarter path of such losses over the 2009 to 2010 period. Further, the official expressed concern that the size and character of the bank holding companies’ on- and off-balance sheet exposures may change materially over the 2-year period and that the Federal Reserve never intended that the one-time SCAP estimates be used as a tool for measuring U.S. 
bank holding company performance during the 2009 to 2010 period. We understand that this analysis would pose some operational challenges for the Federal Reserve because the exercise was intended to calculate a one-time capital buffer needed to withstand a more adverse economic scenario, and that the on- and off-balance sheet exposures of the 19 institutions may change materially over time. However, given the dynamic economic environment, we see great value in periodically measuring and reporting U.S. bank holding company performance against the adverse scenario and whether the adverse scenario is more or less adverse compared with changing economic conditions. Although this would periodically require additional calculations, we believe this analysis would provide useful trend information on the aggregate health of these important institutions. As we previously stated, without such analysis, the public will not have reliable information that can be used to gauge the accuracy of the stress test projections on a more detailed basis than what has been disclosed in the SCAP papers. Further, it could counter any adverse effect of selective reporting by individual institutions. Finally, such periodic reporting would be useful in the measurement of the effectiveness of SCAP and CAP. We are sending copies of this report to the Congressional Oversight Panel, Financial Stability Oversight Board, Special Inspector General for TARP, interested congressional committees and members, Treasury, the federal banking regulators, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov, Thomas J. McCool at (202) 512-2642 or mccoolt@gao.gov, or Orice Williams Brown at (202) 512-8678 or williamso@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Review and renegotiate existing vendor conflict-of-interest mitigation plans, as necessary, to enhance specificity and conformity with the new interim conflicts-of-interest regulation, and take continued steps to manage and monitor conflicts of interest and enforce mitigation plans. Develop a communication strategy that includes building understanding of, and support for, the various components of the program. Specific actions could include hiring a communications officer, integrating communications into TARP operations, scheduling regular and ongoing contact with congressional committees and members, holding town hall meetings with the public across the country, establishing a council of advisers, and leveraging available technology. Require that AIG seek concessions from stakeholders, such as management, employees, and counterparties, including seeking to renegotiate existing contracts, as appropriate, as it finalizes the agreement for additional assistance. Update OFS documentation of certain internal control procedures and the guidance available to the public on determining warrant exercise prices, to be consistent with actual practices applied by OFS. Improve transparency pertaining to TARP program activities by reporting publicly the monies, such as dividends, paid to Treasury by TARP participants. Complete the review of, and as necessary renegotiate, the four existing vendor conflicts-of-interest mitigation plans to enhance specificity and conformity with the new interim conflicts-of-interest rule. Issue guidance requiring that key communications and decisions concerning potential or actual vendor-related conflicts of interest be documented. We conducted an econometric analysis to assess the impact of the Capital Purchase Program (CPP) on the TED spread.
Our multivariate econometric model uses a standard interrupted time series design with daily data on the TED spread. Rather than relying on graphs of trends in the data before and after the announcement, this exercise sought to determine whether the large decline in the TED spread could be associated with CPP in a statistically significant way when other important variables were also considered, including a time trend and a variable thought to control for the tendency of extreme values to revert to more normal levels. To carry out the exercise as validly as possible, we conducted tests to ensure the stationarity of the variables in the model, used heteroskedasticity- and autocorrelation-consistent (HAC) standard errors, and conducted sensitivity analyses. The primary regressions model changes in the TED spread as a function of lagged values of changes in the term structure (the spread between short- and long-term bonds), the default spread (the spread between lower-quality and higher-quality bonds), the target federal funds rate, and the S&P 500, as well as a variable that indicates whether CPP was in place (starting with the announcement date). We also include a time trend, an indicator variable for whether the TED spread was at an extreme value the day before (defined as 200 basis points or greater), and a counter variable that indicates the number of consecutive days, including the day in question, that the TED spread had taken on an extreme value. The latter variable was included to control for a potential “regression to the mean” effect. As a robustness check, we also ran a variation of the model using a two-step procedure in which we (1) extracted the predictable component from the TED spread, term structure, and default risk premium and (2) used the unpredicted spreads in the regression. We also ran the model over various time periods. In all cases, we found CPP to have a statistically significant impact on the TED spread.
However, it should be noted that we did not attempt to capture all potential factors that might explain movements in the TED spread, and, therefore, omitted variable bias remains a concern. Moreover, since other programs were put in place from October 2008 to February 2009, further analysis that attempts to control for these interventions would provide more definitive results. As participants have started to repay their assistance as permitted by the Emergency Economic Stabilization Act of 2008 (the act), as amended by the American Recovery and Reinvestment Act of 2009, the Department of the Treasury (Treasury) has developed standard processes for each type of security. The following provides an overview of the repurchase process for preferred shares, subordinated debt, and warrants. In a repurchase, the financial institution buys back from Treasury the preferred stock or subordinated debt that was issued under Treasury’s Capital Purchase Program (CPP) to stabilize the financial system. Under the original terms of CPP, financial institutions were prohibited from repurchasing such stock and debt within the first 3 years unless they completed a qualified equity offering. Under the act, as amended, Treasury must permit a financial institution to repurchase the preferred stock or subordinated debt issued to Treasury at any time, subject to Treasury’s consultation with the primary federal banking regulator. In Treasury’s public guidance (FAQs) on repurchases, it states that financial institutions should give notice of their intent to repurchase to their primary banking regulator, which will apply existing supervisory procedures to determine whether to approve the repurchase. As shown in figure 8, the process begins when Treasury and the primary federal regulator receive written notification (e-mail or letter) from the financial institution of its intent to repurchase in full or in part its preferred stock or other securities from Treasury.
The primary federal regulator performs an analysis using available supervisory information and information provided by the institution to gauge the institution’s current financial condition and prospects, such as whether its financial condition and viability have changed significantly since it received CPP funds. This analysis allows the regulator to determine whether the repurchase request should be approved or denied. In addition, the 19 largest U.S. bank holding companies that were subject to the stress test must also be able to demonstrate access to common equity through public issuance in the equity capital markets, and successfully issue senior unsecured debt for a term greater than 5 years and not backed by Federal Deposit Insurance Corporation (FDIC) guarantees, in amounts sufficient to demonstrate a capacity to meet funding needs independent of FDIC guarantees. According to Treasury, the consultation consists of the primary federal regulator informing Treasury of its decision to approve or deny the request via e-mail. If the federal regulator of the entity that issued the preferred stock or other securities to Treasury indicates it has no objection to, or approves of, the repurchase, Treasury notifies the financial institution in writing that the repurchase is in process and instructs the financial institution to contact its Treasury counsel to set up dates for closing and settlement. If the repurchase is denied, Treasury notifies the institution. All four primary federal regulators noted that their role in the repurchase process followed existing regulations and procedures for evaluating requests by any financial institution, regardless of whether it participates in CPP. The Federal Reserve has established instructions for processing capital repurchase requests for CPP and other government capital programs by bank holding companies. For the 19 U.S.
bank holding companies that participated in the Supervisory Capital Assessment Program, on June 1, 2009, the Federal Reserve released the criteria it planned to use to evaluate applications to repurchase Treasury’s capital investments. The Federal Reserve, in consultation with the bank holding companies’ primary bank regulators and FDIC, informed Treasury on June 9, 2009, that it had no objection to the repurchase of preferred shares by 9 of the SCAP bank holding companies. Also on June 9, 2009, Treasury announced that these 9 U.S. bank holding companies and one other large institution met the requirements for repayment and would be eligible to repay about $68 billion to Treasury. An Office of Financial Stability official noted that Treasury plays a limited role in this determination process. If a financial institution repurchases all of its senior preferred shares, it can repurchase some or all of its other equity securities held by Treasury. The treatment of warrants differs in the standard securities purchase agreements, depending on whether the firm that issues the warrants is privately held or publicly traded. For privately held institutions, Treasury immediately exercises the warrants at the time of the capital investment and receives additional preferred shares. The financial institution repurchases these warrant preferred shares after it repurchases the senior preferred shares from Treasury. Publicly traded institutions have the option to repurchase outstanding and unexercised warrants after the senior preferred shares are repurchased. Although Treasury can sell the warrants at any time, Treasury is required to notify the financial institution 30 days prior to a sale. Following a repurchase of the senior preferred shares held by Treasury, an institution can repurchase the warrants at fair market value (FMV), as defined in section 4.9 of the Securities Purchase Agreement.
According to the Securities Purchase Agreement, financial institutions have 15 days from the date of a repurchase of preferred stock to give notice to Treasury of their intent to repurchase the warrants that were originally issued with the stock. If the financial institution does not wish to repurchase the outstanding warrants, Treasury may proceed with liquidating the registered warrants at the current market price. If the financial institution decides to repurchase the warrants, the institution’s board of directors determines the FMV, acting in good faith and relying on an opinion of a nationally recognized independent investment banking firm retained by the financial institution for such purpose and certified in a resolution to Treasury. Through the use of market quotes from market participants, financial modeling, fundamental research, and a third-party consultation, Treasury makes an independent determination of the FMV of the warrants. If Treasury does not agree with the financial institution’s determination, it may object in writing within 10 days of receipt of the financial institution’s FMV determination, and the two parties must work together to resolve any issues and agree on an FMV. If they are unable to agree on an FMV within 10 days, either party has 20 more days to invoke the appraisal procedure by delivery of written notice. Under the appraisal procedure, Treasury and the financial institution each choose an independent appraiser to determine the estimated FMV and notify each other of their choices within 10 days. If the two appraisers are unable to agree upon an FMV for the warrants within 30 days of their appointment, the appraisers have 10 additional days to select and appoint a third independent appraiser. The third appraiser then has 30 days to render its estimated FMV.
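A minimal sketch of the agreement's three-appraiser averaging step (the function name and dollar values below are ours): the three estimated FMVs are averaged unless the larger of the two adjacent differences exceeds 200 percent of the smaller, in which case the outlying valuation is excluded and the remaining two are averaged.

```python
# Sketch of the Securities Purchase Agreement's three-appraiser averaging
# rule. Function name and example values are illustrative.
def binding_fmv(estimates):
    low, mid, high = sorted(estimates)
    upper_gap = high - mid
    lower_gap = mid - low
    larger, smaller = max(upper_gap, lower_gap), min(upper_gap, lower_gap)
    if larger > 2 * smaller:  # larger difference exceeds 200 percent of smaller
        # Exclude the outlying valuation and average the remaining two.
        survivors = (low, mid) if upper_gap > lower_gap else (mid, high)
        return sum(survivors) / 2
    return (low + mid + high) / 3

print(binding_fmv([100, 110, 120]))  # equal gaps: simple average, 110.0
print(binding_fmv([100, 105, 140]))  # 35 > 2 x 5: drop 140, average is 102.5
```

The resulting average is binding on both Treasury and the financial institution, with the institution bearing the appraisal costs.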
The three estimated FMVs are to be averaged unless the larger of the two differences (between the highest and middle valuations and between the middle and lowest valuations) is more than 200 percent of the smaller difference. If the larger difference exceeds 200 percent of the smaller, the outlying valuation that triggers the exception is to be excluded and the remaining two are to be averaged. The average becomes the binding FMV for Treasury and the financial institution, and the financial institution is responsible for paying the costs of the appraisal procedure. Citigroup, Inc. (Citigroup) is one of the few institutions that has participated in multiple Troubled Asset Relief Program (TARP) programs. As of June 12, 2009, it is participating in the Capital Purchase Program (CPP), the Targeted Investment Program (TIP), and the Asset Guarantee Program (AGP). Its participation in multiple programs has raised a number of questions about Citigroup’s financial condition. To analyze Citigroup’s financial condition, we compared Citigroup with three similar institutions that also received initial TARP funds through CPP in October 2008: Bank of America Corporation, JPMorgan Chase, and Wells Fargo & Company. As of March 31, 2009, these four institutions were the largest U.S. bank holding companies. This appendix compares selected data on Citigroup’s financial condition from 2007 through the first quarter of 2009 with that of the other three bank holding companies. Regarding net income, during all four quarters of 2008, Citigroup recorded growing losses, while the other three bank holding companies continued to record profits. By the fourth quarter of 2008, Citigroup’s quarterly loss had increased to $27 billion (see fig. 9). Since the beginning of 2007, all four of the bank holding companies experienced a decline in the market value of their equity as a percentage of their total assets (see fig. 10).
However, since the beginning of 2008, Citigroup’s ratio has been the lowest of the four. We also reviewed the four bank holding companies’ debt-to-equity ratios for the same period. We calculated the debt-to-equity ratio as the holding company’s liabilities (debt) divided by shareholders’ equity. A higher ratio generally indicates a higher amount of financing with debt. Citigroup’s debt-to-equity ratio was significantly higher than the other three holding companies’ ratios, as shown in figure 11. From the fourth quarter of 2008 through the first quarter of 2009, Citigroup’s ratio increased slightly from 9.4:1 to about 9.5:1. One indicator of capital adequacy is the tier 1 risk-based capital ratio. Using this measure, before TARP funding, Citigroup’s tier 1 capital ratio was similar to that of the three other large bank holding companies (see fig. 12). In the third quarter of 2008, the capital ratios of the four bank holding companies ranged from 7.6 percent to 8.9 percent, with Citigroup reporting a tier 1 risk-based capital ratio of 8.2 percent. A different measure of capital adequacy is the tier 1 leverage ratio. Using this measure, Citigroup had the lowest ratio for the entire period compared with the other three bank holding companies. Citigroup’s tier 1 leverage ratio ranged from a low of about 4 percent in the fourth quarter of 2007 to a high of just over 6.6 percent in the first quarter of 2009. In the third quarter of 2008 and before TARP funding, Bank of America, JPMorgan Chase, and Wells Fargo reported tier 1 leverage ratios of 5.5 percent, 7.2 percent, and 7.5 percent, respectively, while Citigroup reported a tier 1 leverage ratio of 4.7 percent, as shown in figure 13. In addition to capital, a bank holding company has a cushion against losses in its “allowance for loan and lease losses” (ALLL), which must be maintained by the bank holding company to cover expected losses in its loan and lease portfolio.
For Citigroup and the other three companies, we examined the data on assets that already reflected repayment problems (“nonaccrual loans” plus “other real estate owned”) and compared this to the companies’ tier 1 capital plus ALLL. The data for the first quarter 2007 through the first quarter 2009 are shown in figure 14. Throughout this period, Citigroup’s ratio of assets with repayment problems to this cushion was consistently higher than that of the other three bank holding companies. In addition to the contacts named above, Nikki Clowers, Gary Engel, and William Woods (Lead Directors); Cheryl Clark, Lawrence Evans Jr., Barbara Keller, Carolyn Kirby, Kay Kuhlman, Karen Tremba, and Katherine Trimble (Lead Assistant Directors); and Marianne Anderson, Noah Bleicher, Benjamin Bolitzer, Angela Burriesci, Emily Chalmers, Michael Derr, Rachel DeMarcus, M’Baye Diagne, Abe Dymond, Patrick Dynes, Nima Edwards, Nancy Eibeck, Karin Fangman, Ryan Gottschall, Brenna Guarneros, Heather Halliwell, Michael Hoffman, Joe Hunter, Tyrone Hutchins, Elizabeth Jimenez, Jamila Jones Kennedy, Jason Kirwan, Christopher Klisch, Steven Koons, Rick Krashevski, John Krump, Jim Lager, Rob Lee, John Lord, Matthew McDonald, Sarah McGrath, Susan Michal-Smith, Marc Molino, Tim Mooney, Jill Namaane, Joseph O’Neill, Ken Patton, Josephine Perez, Omyra Ramsingh, Mary Reich, Rebecca Riklin, LaSonya Roberts, Susan Sawtelle, Chris Schmitt, Raymond Sendejas, Jeremy Swartz, Maria Soriano, Cynthia Taylor, John Treanor, and Jason Wildhagen made important contributions to this report. Auto Industry: Summary of Government Efforts and Automakers’ Restructuring to Date. GAO-09-553. Washington, D.C.: April 23, 2009. Small Business Administration’s Implementation of Administrative Provisions in the American Recovery and Reinvestment Act. GAO-09-507R. Washington, D.C.: April 16, 2009. Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504.
Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for the Period October 28, 2008 through March 20, 2009 and Information on Financial Agency Agreements, Contracts, and Blanket Purchase Agreements Awarded as of March 13, 2009. GAO-09-522SP. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-539T. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-484T. Washington, D.C.: March 19, 2009. Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: March 18, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-474T. Washington, D.C.: March 11, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-417T. Washington, D.C.: February 24, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-359T. Washington, D.C.: February 5, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-266T. Washington, D.C.: December 10, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-247T. Washington, D.C.: December 5, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Status of Efforts to Address Defaults and Foreclosures on Home Mortgages. GAO-09-231T. Washington, D.C.: December 4, 2008.
Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.
GAO's fifth report on the Troubled Asset Relief Program (TARP) follows up on prior recommendations. It also reviews (1) activities that had been initiated or completed under TARP as of June 12, 2009; (2) the Department of the Treasury's Office of Financial Stability's (OFS) hiring efforts and use of contractors; and (3) TARP performance indicators. To do this, GAO reviewed signed agreements and other relevant documentation and met with officials from OFS, contractors, and financial regulators. Treasury continued to operationalize its more recent programs, including the Capital Assistance Program (CAP). As part of this program, the Federal Reserve led the stress tests of the largest 19 U.S. bank holding companies, which revealed that about half needed to raise additional capital to remain strongly capitalized and able to lend even if economic conditions worsened. Whether any of the institutions will have to participate in CAP has yet to be determined. While the Federal Reserve disclosed the stress test results, it has no plans to disclose information about the 19 institutions going forward. What information, if any, is disclosed will be left to the discretion of the affected institutions, raising a number of concerns, including the potential for inconsistent or selective disclosure. Moreover, the Federal Reserve had not developed a mechanism to share information with OFS about the ongoing condition of the 19 bank holding companies that continue to participate in TARP programs. According to Treasury, its Financial Stability Plan has provided a basis for its communication strategy. Treasury plans to communicate more regularly with congressional committees of jurisdiction about TARP. However, until this strategy is fully implemented, not all congressional stakeholders will receive information in a consistent or timely manner. A key component of the communication strategy is the new www.financialstability.gov Web site.
While a goal of the new site is to provide the public with a more user-friendly format, Treasury has not yet measured the public's satisfaction with the site. OFS has made progress in establishing its management infrastructure. Continued attention to hiring remains important because some offices within OFS, including the Office of the Chief Risk and Compliance Officer, continue to have a number of vacancies that will need to be filled as TARP programs are fully implemented. Treasury has also continued to build a network of contractors and financial agents to support TARP administration and operations. These contracts and agreements are key tools OFS has used to help develop and administer its TARP programs. Treasury has provided information to the public on procurement contracts and financial agency agreements but has not included a breakdown of cost data by entity. As a result, Treasury is missing an opportunity to provide additional transparency about TARP operations.
HHS and others have promoted electronic prescribing as one way to improve the quality of health care that beneficiaries receive and to reduce costs. Health care costs are typically paid by health care payers, such as CMS in the Medicare Program. In traditional, or paper-based, prescribing, health care providers that are licensed to issue prescriptions for drugs (e.g., physicians or, in some states, physician assistants) write a prescription, and the beneficiary takes that prescription to a dispenser (e.g., a pharmacy) to be filled. In contrast, in electronic prescribing, a licensed health care provider uses a computer or hand-held device to write and transmit a prescription directly to the dispenser. Before doing so, the health care provider can request the beneficiary’s eligibility, formulary, benefits, and medication history information. This information can be used to improve quality and reduce costs. For example, a health care provider can use this information to avoid potentially adverse drug events, such as drug-to-drug or drug-to-allergy interactions, and to prescribe less-expensive medications, such as lower-cost generic drugs. Figure 1 illustrates the flow of information during the electronic prescribing process and identifies areas in this process that may result in improvements in the quality of health care provided to beneficiaries and reductions in costs to health care payers. Appendix II provides information from studies measuring whether or to what extent electronic prescribing improves quality or reduces costs. The types of Medicare providers that are eligible to earn incentive payments or that may be subject to penalties in the EHR and Electronic Prescribing Programs were established in statute, and although they overlap, they are not identical.
Specifically, only physicians, who are the largest population among each program’s eligible providers, can earn incentive payments or be subject to penalties under both programs, although not during the same year. Other health care providers, such as nurse practitioners and physician assistants, are eligible for incentive payments or subject to penalties only under the Electronic Prescribing Program. (See fig. 2.) There is some overlap in the time frames for incentive payments and penalties for the Electronic Prescribing and EHR Programs. Incentive payments for the Electronic Prescribing Program are available from 2009 through 2013. Incentive payments for the EHR Program begin in 2011 and may be available until 2016, depending on the calendar year in which the provider initially receives an incentive payment from the program. Incentive payments for both programs are determined by multiplying the provider’s total allowed charges for provider services covered by Medicare Part B for the year by the incentive percent authorized by statute. However, in the EHR Program, the year in which the provider first adopts and meaningfully uses the EHR technology determines the maximum annual incentive payment a provider can earn and the total number of years incentive payments are available. For both programs, incentive payments are disbursed after providers demonstrate that they met the applicable program requirements. Figure 3 displays the timeline and maximum incentive payments and penalties for both programs. (App. IV provides additional detail on the annual and total incentive payments an eligible provider could receive from the EHR Program based on the initial year the provider receives an incentive payment.) By law, providers cannot receive an incentive payment from both programs during the same year.
Penalties for the Electronic Prescribing Program and the EHR Program may be automatically applied to providers that fail to meet the programs’ requirements. Penalties for the Electronic Prescribing Program begin in 2012 and end after 2014. Penalties for the EHR Program begin in 2015, and there is no statutory end point for them. Since the Electronic Prescribing Program ends after 2014 and penalties for the EHR Program do not begin until 2015, providers will not receive penalties from both programs during the same year. However, providers who are subject to penalties from the Electronic Prescribing Program in 2014 and from the EHR Program in 2015 will face a higher EHR Program penalty (2 percent instead of 1 percent). Similar to the incentive payments, penalties for not adopting a program’s technologies are calculated by multiplying the provider’s total allowed charges for provider services covered by Medicare Part B by the penalty percent authorized by statute. Penalties will be assessed by reducing the reimbursement that the provider would ordinarily receive for furnishing Part B services by the applicable penalty percentage. The amount of incentive payments or penalties eligible providers may receive depends on the year in which the provider chooses to begin participating in (that is, meeting the requirements of) either or, if eligible, both programs. In general, the earlier a provider begins participating in a program, the more incentive payments the provider will earn and the fewer penalties the provider will be assessed. Figure 4 presents three scenarios of participation in the Electronic Prescribing and EHR Programs between 2009 and 2018. In each scenario, we assume that the provider is eligible for both programs and has $24,000 in total allowed Medicare Part B charges each year.
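The arithmetic behind these scenarios is a straightforward product of charges and the statutory percentage. A minimal sketch using the illustrative $24,000 in annual Part B charges: the 2 percent incentive rate matches the 2009 Electronic Prescribing rate discussed later in this report, while the 1 percent penalty rate is an illustrative placeholder rather than the full statutory schedule.

```python
# Sketch of the incentive/penalty arithmetic: amount = total allowed
# Medicare Part B charges for the year x the statutory percentage.
ANNUAL_PART_B_CHARGES = 24_000  # the report's illustrative provider

incentive = ANNUAL_PART_B_CHARGES * 0.02  # 2 percent (2009 e-prescribing rate)
penalty = ANNUAL_PART_B_CHARGES * 0.01    # 1 percent (illustrative penalty rate)

print(incentive)  # 480.0
print(penalty)    # 240.0
```

The penalty is not billed separately; it is assessed by reducing the provider's Part B reimbursement by the applicable percentage.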
CMS will develop the reporting requirements that providers must meet for the EHR Program in three stages. To date, CMS has developed only the reporting requirements that eligible providers must meet to receive incentive payments for the first stage, which will apply to providers first obtaining incentive payments from the EHR Program from 2011 through 2014. By the end of 2011, CMS expects to develop reporting requirements for receiving incentives in the second stage and, by the end of 2013, reporting requirements for receiving incentives in the third stage. CMS has stated that it may include information on the reporting requirements that eligible providers must meet to avoid penalties at the same time it issues regulations describing the third-stage requirements. CMS intends to make the reporting requirements more stringent over time as EHR technology and providers’ use of that technology become more sophisticated. To receive an incentive payment for the EHR Program, eligible providers must meet or exceed a total of 20 reporting requirements established by CMS. Of the 20 reporting requirements, 15 are mandatory, and providers must choose an additional 5 from a menu of 10 other reporting requirements. The reporting requirements encompass a variety of activities related to the delivery of health care to encourage providers to capture the following types of information in their EHR systems: patient demographics and clinical conditions, use of clinical decision support, and the coordination of care across health care settings. See app. V for a complete list of the stage-one reporting requirements for receiving incentive payments.
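The stage-one structure, 15 mandatory requirements plus at least 5 chosen from a menu of 10, can be expressed as a simple check; the requirement names below are hypothetical placeholders, not CMS's actual measure names.

```python
# Sketch of the stage-one reporting structure: a provider must meet all 15
# mandatory requirements and at least 5 of the 10 menu requirements.
# Requirement names are hypothetical placeholders.
CORE = {f"core_{i}" for i in range(1, 16)}  # 15 mandatory requirements
MENU = {f"menu_{i}" for i in range(1, 11)}  # menu of 10 additional requirements

def meets_stage_one(met):
    met = set(met)
    return CORE <= met and len(met & MENU) >= 5

print(meets_stage_one(CORE | {"menu_1", "menu_2", "menu_3", "menu_4", "menu_5"}))  # True
print(meets_stage_one(CORE | {"menu_1", "menu_2"}))  # False: only 2 menu items
```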
The reporting requirements that CMS develops for the second and third stages of the EHR Program may be influenced by the Patient Protection and Affordable Care Act of 2010 (PPACA), which directed CMS to develop a plan to integrate the reporting requirements used in the EHR Program with the information that CMS collects from eligible providers in the Physician Quality Reporting System (PQRS). As with the EHR and Electronic Prescribing Programs, CMS, as directed by Congress, implemented PQRS to provide incentive payments to eligible providers who satisfactorily reported data on various quality measures and to impose penalties on those providers who did not. Specifically, PPACA directed CMS to develop an integration plan by January 1, 2012, that would identify reporting requirements that could be used to demonstrate meaningful use for the EHR Program and also to demonstrate the quality of care provided to individuals for PQRS. To determine which providers should receive the Electronic Prescribing Program’s incentive payments, CMS analyzes information reported by providers on their Medicare Part B claims, which are used to submit charges for covered services. To determine which providers are subject to penalties, which begin in 2012, CMS will also analyze information reported by providers on their Part B claims, but the requirements for avoiding penalties differ from those for obtaining incentive payments. In 2009, CMS paid incentive payments to about 8 percent of the more than 597,000 Medicare providers who had at least one applicable visit during 2009, and another 7 percent of those providers participated in the Electronic Prescribing Program but did not receive incentive payments.
Incentive payments for the Electronic Prescribing Program are available from 2009 through 2013, and to determine which providers meet the program’s requirements and should receive the payments, CMS analyzes information reported by providers on their Part B claims. Specifically, for 2009, CMS first examined 2009 Part B claims to determine whether, after each applicable patient visit, providers marked any one of three electronic prescribing reporting codes used to report information on the adoption and use of electronic prescribing systems. For 2009, the three electronic prescribing reporting codes indicated that (1) the provider had a qualified electronic prescribing system and used it to generate all prescriptions during the visit; (2) the provider had a qualified electronic prescribing system but did not use it to generate one or more prescriptions during the visit because the patient requested a paper prescription, the pharmacy could not receive an electronic transmission, or the prescription was for a narcotic or other controlled substance and therefore could not be electronically prescribed; or (3) the provider had a qualified electronic prescribing system but did not generate any prescriptions during the visit. By submitting any one of the three electronic prescribing reporting codes to CMS, providers attested that they met the program’s technology requirement by adopting a qualified electronic prescribing system and were eligible to earn incentive payments from the program.
Second, CMS analyzed the 2009 Part B claims to determine which of the providers who submitted the electronic prescribing reporting codes also met or exceeded both components of the following reporting requirement: the provider submitted one of the three electronic prescribing reporting codes at least 50 percent of the time that the provider had an applicable visit; and at least 10 percent of the provider’s total allowed Medicare Part B charges for the year were from the services designated as applicable patient visits. If the provider met or exceeded the reporting requirement, CMS gave the provider an incentive payment for 2009, which the agency calculated as 2 percent of the provider’s total allowed Medicare Part B charges for the year, with a small adjustment factor applied. For 2010, to increase the adoption of electronic prescribing technology, CMS made some changes to the Electronic Prescribing Program’s reporting requirement that providers had to meet in order to receive an incentive payment. CMS eliminated the three electronic prescribing reporting codes used in 2009 and replaced them with a single code for providers to submit to CMS. Providers submit the new code after an applicable visit to indicate that they generated and transmitted at least one prescription during the visit using a qualified electronic prescribing system. The agency stated that it believed that this change would simplify reporting. CMS also changed the first portion of the reporting requirement related to how frequently providers must submit the new electronic prescribing code in order to receive an incentive payment. Instead of requiring that providers submit the electronic prescribing reporting code at least 50 percent of the time that they had an applicable visit—the requirement in 2009—CMS required that an individual provider submit the new electronic prescribing reporting code for at least 25 visits. 
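The 2009 two-component test and the payment calculation can be sketched in a few lines. This is a minimal illustration of the rules as the report describes them, not CMS's actual implementation; the function names and example figures are assumptions, and the report does not give the value of the small adjustment factor, so it defaults to 1.0 here.

```python
# Hedged sketch of the 2009 reporting-requirement test and incentive
# calculation described in the report. Names and figures are illustrative.

def met_2009_requirement(visits_with_code, applicable_visits,
                         applicable_visit_charges, total_part_b_charges):
    """True only if both components of the 2009 reporting requirement hold."""
    # Component 1: code reported for at least 50 percent of applicable visits.
    reported_often_enough = visits_with_code >= 0.5 * applicable_visits
    # Component 2: applicable visits account for at least 10 percent of
    # total allowed Medicare Part B charges for the year.
    charge_share_met = applicable_visit_charges >= 0.10 * total_part_b_charges
    return reported_often_enough and charge_share_met

def incentive_payment_2009(total_part_b_charges, adjustment_factor=1.0):
    # 2 percent of total allowed Part B charges, times the small adjustment
    # factor CMS applied (its value is not given in the report).
    return 0.02 * total_part_b_charges * adjustment_factor

# Example: the code was reported for 60 of 100 applicable visits, and
# $12,000 of $100,000 in total allowed charges came from applicable visits.
if met_2009_requirement(60, 100, 12_000, 100_000):
    print(round(incentive_payment_2009(100_000), 2))  # 2000.0
```

A provider who reported the code for only 40 of 100 applicable visits would fail the first component and receive no payment, even if the 10 percent charge threshold were met.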
CMS noted that the agency believes that meeting the 2010 reporting requirement is achievable by a majority of eligible providers. If providers participated in the Electronic Prescribing Program as a group practice containing 200 or more providers—a new option in 2010—the practice had to submit the electronic prescribing reporting code for at least 2,500 applicable visits before all of the providers in the practice could receive incentive payments. When it proposed the change to at least 25 and at least 2,500 visits for individual providers and group practices, respectively, CMS noted that it assumed that once a provider has invested in an electronic prescribing system, integrated the use of that system into the practice’s work flows, and used that system to some extent, the provider is likely to continue to use the electronic prescribing system for most of the prescriptions generated. The other component of the reporting requirement remained unchanged from 2009: at least 10 percent of the provider’s or practice’s total allowed Medicare Part B charges for the year were from the services designated as applicable visits. Finally, as an individual or as part of a group practice, providers could report the electronic prescribing code on their Part B claims, as they did in 2009, or they could do so using one of two alternative reporting mechanisms CMS created. CMS has described how it will determine which providers should receive incentive payments for 2011, but the agency has not yet indicated how it will determine which providers should receive incentive payments for 2012 or 2013. CMS will determine which providers meet the program’s requirements and should receive an incentive payment in 2011 generally using the same methods the agency used in 2010. 
However, one important change CMS made for 2011—one that is consistent with changes the agency is making to PQRS—is that CMS expanded the definition of group practice to include practices containing 2 through 199 individuals and will require those group practices to report the electronic prescribing code for a minimum of between 75 and 1,875 applicable visits, depending on the size of the group practice. The requirement for group practices of 200 or more providers is unchanged; those practices must report the code for at least 2,500 applicable visits. From 2012 through 2014, the Electronic Prescribing Program will assess penalties on individual providers and group practices that do not adopt and use electronic prescribing. To avoid these penalties in 2012, individual providers and group practices will have to meet certain reporting requirements. Individual providers will have to submit the electronic prescribing reporting code on their Part B claims for at least 10 applicable visits between January 1, 2011, and June 30, 2011. However, CMS will not penalize certain individuals in 2012 if they do not prescribe or do so infrequently. In addition, both individual providers and groups that practice in rural areas or areas with a limited number of pharmacies that accept electronic transmissions will be exempt from penalties. The reporting requirement for individuals and the exemption criteria are consistent with the agency’s statement that it does not want to penalize providers with low prescribing volumes. Group practices will have to submit the electronic prescribing reporting code on their Part B claims the same number of times required to receive incentive payments in 2011, but they must do so within the 6-month period from January 1, 2011, through June 30, 2011. For example, group practices containing 200 or more providers will have to submit the electronic prescribing reporting code at least 2,500 times from January 1, 2011, through June 30, 2011. 
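The penalty-avoidance check for 2012 reduces to counting code submissions inside a fixed 6-month window. The sketch below assumes this framing; the January 1 through June 30, 2011, window and the 10-visit floor for individuals come from the report, while group-practice minimums vary by size (75 to 1,875 for groups of 2 through 199, and 2,500 for groups of 200 or more), so the minimum is supplied by the caller rather than derived here.

```python
from datetime import date

# Hedged sketch of the 2012 penalty-avoidance check described above.
WINDOW_START = date(2011, 1, 1)
WINDOW_END = date(2011, 6, 30)

def avoids_2012_penalty(code_submission_dates, required_minimum):
    """Count electronic prescribing code submissions inside the 6-month
    window and compare against the applicable minimum."""
    in_window = sum(1 for d in code_submission_dates
                    if WINDOW_START <= d <= WINDOW_END)
    return in_window >= required_minimum

# An individual provider with two submissions in each month January-June
# clears the 10-visit floor; one who reported only in the fall does not.
spring = [date(2011, m, 15) for m in range(1, 7)] * 2  # 12 in-window dates
fall = [date(2011, m, 15) for m in range(7, 12)]       # all outside window
print(avoids_2012_penalty(spring, 10), avoids_2012_penalty(fall, 10))
```

The same function covers group practices by passing the practice's size-dependent minimum (for example, 2,500 for a practice of 200 or more providers).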
CMS has noted that it did not think that group practices would be disadvantaged by having to meet the reporting requirement in a 6-month period to avoid the penalty in 2012, rather than in a 12-month period to earn an incentive in 2011, because the agency requires group practices to submit the electronic prescribing reporting code fewer times per provider, on average, to earn an incentive payment than it requires of individual providers. CMS has not yet established all the requirements for providers to avoid penalties in 2013 or 2014. However, for 2013, CMS has indicated that it will not penalize individual providers or group practices that year if they reported the electronic prescribing code the minimum number of times required to qualify for incentive payments in 2011. Additionally, CMS indicated that it may publish an alternative reporting requirement that providers could meet to avoid penalties in 2013. A CMS official whom we interviewed told us that the agency could, for example, require individual providers to submit the electronic prescribing reporting code at least 10 times between January 1, 2012, and June 30, 2012, in order to avoid penalties in 2013. CMS is exploring an alternative to using electronic prescribing code submissions to determine which providers should receive incentive payments or penalties. As a part of CMS’s Medicare Part D, which provides outpatient prescription drug benefits for Medicare beneficiaries, CMS has required that Part D plan sponsors submit additional data on the claims they send to Medicare for reimbursement. CMS officials believe that Medicare Part D data could be used at some point instead of the electronic prescribing reporting code to determine which providers should receive incentive payments. However, CMS officials have concerns about the reliability of data from Part D claims, and note that these concerns should be resolved before the data can be used. 
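CMS's reasoning about group practices rests on simple arithmetic that the report's figures make easy to verify: a 200-provider group reporting 2,500 codes averages 12.5 submissions per provider, half the 25 required of an individual. This is an illustrative calculation using the report's thresholds, not CMS's own analysis.

```python
# Per-provider average submissions for a group practice versus the
# individual requirement, using the 2011 incentive thresholds given in
# the report (illustrative arithmetic, not CMS's calculation).
group_threshold = 2_500    # floor for group practices of 200 or more
group_size = 200
individual_threshold = 25

per_provider_average = group_threshold / group_size
print(per_provider_average)                          # 12.5
print(per_provider_average < individual_threshold)   # True
```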
CMS does not have specific plans or a time frame for implementing such a change. CMS paid Electronic Prescribing Program incentive payments for 2009 to about 8 percent (about 47,500) of the over 597,000 Medicare providers who had at least one applicable visit during 2009. Each of these approximately 47,500 providers received incentive payments equal to 2 percent of their total allowable Medicare Part B charges in 2009, with payments totaling approximately $148 million. The mean payment was about $3,120, the median payment was about $1,700, and the five highest payments were between about $54,500 and $67,500. CMS disbursed these payments to providers for 2009 in September and October 2010. CMS officials expect that the number of Medicare providers reporting the electronic prescribing reporting code in 2010 will increase over 2009 and noted that lowering the reporting requirement for 2010 to submitting the applicable electronic prescribing reporting code for at least 25 visits may increase the number of providers receiving incentive payments. CMS officials also told us that the penalties, which do not begin until 2012, might have a bigger effect on participation than the incentive payments. For the 2009 Electronic Prescribing Program, the percentage of Medicare providers who received incentive payments and the average incentive payment varied by state. (See fig. 5 and fig. 6.) Although Minnesota and Wisconsin had the largest share of providers receiving incentive payments at about 17 and 15 percent, respectively, providers in those two states also received the lowest mean incentive payment at about $740 and $1,500, respectively. Alaska and North Dakota had the smallest share of providers receiving incentive payments at about 2 percent each. Providers in Florida and South Carolina had the highest mean incentive payments at about $5,800 and $4,700, respectively. 
According to a report prepared for CMS about the 2009 Electronic Prescribing Program, the physician specialties with the largest number of providers that earned incentive payments were family practice and internal medicine, and the nonphysician specialties with the largest number of providers that earned incentive payments were nurse practitioners and physician assistants. About 87,500 Medicare providers—approximately 15 percent of Medicare providers who had at least one applicable visit during 2009—participated in the program in 2009 by reporting the electronic prescribing reporting codes to CMS. However, about 40,000 of those participating providers— approximately 7 percent of Medicare providers who had at least one applicable visit during 2009—did not receive incentive payments because they did not meet or exceed both components of the reporting requirement. (See fig. 7.) Specifically, these providers (a) submitted the electronic prescribing reporting codes less than 50 percent of the time that they had an applicable visit, (b) had less than 10 percent of their total allowed Medicare Part B charges for the year from the services designated as applicable visits, or (c) both (a) and (b) occurred. The vast majority of the about 40,000 Medicare providers that participated in the program but did not receive incentive payments submitted the electronic prescribing reporting codes less than 50 percent of the time they had an applicable visit. We compared the electronic prescribing–related technology and reporting requirements in the EHR Program with the requirements in the Electronic Prescribing Program. The EHR Program provides incentives from 2011 to 2016 and introduces penalties beginning in 2015, while the Electronic Prescribing Program provides incentives from 2009 to 2013 and introduces penalties beginning in 2012. 
In comparing the programs’ requirements, we found some similarities but also areas where the requirements of the programs are not consistent. Technology requirement. Both the EHR and Electronic Prescribing Programs require eligible providers to adopt and use technology that meets certain requirements. The EHR Program requires providers to adopt certified EHR technology and the Electronic Prescribing Program requires providers to adopt qualified electronic prescribing systems. (For more details, see fig. 8.) Certified EHR systems and qualified electronic prescribing systems must be able to perform similar electronic prescribing–related activities. For example, both types of systems must be able to generate and transmit prescriptions electronically, check for potential drug and allergy interactions, and provide formulary information. The technology that providers must adopt and use for the EHR Program must pass a certification process, which is used to designate a technology as having met the program’s technology requirements. For the EHR Program, HHS’s ONC, through the work of several advisory committees, established a set of standards and specifications for EHR technology and then created a program that will certify EHR technology for use in the EHR Program based upon those standards and specifications. According to ONC’s Web site, the certification process will ensure that the EHR technology that providers adopt and use has the technological capabilities necessary for providers to obtain incentive payments or avoid penalties from the EHR Program. Further, the agency notes that certifying EHR technology to these standards enhances the interoperability of health information technology—that is, the ability of different systems or components to exchange information and to use the information that has been exchanged. 
EHRs that conform to interoperability standards allow health information to be created, managed, and consulted by authorized health care providers across more than one health care organization, thus providing patients and their caregivers the necessary information required for optimal care. The EHR Program’s certification process is designed to produce a list of certified EHR systems and certified EHR modules, which ONC has made available to the public on its Web site. Accordingly, this information should allow providers to identify and adopt systems that meet the EHR Program’s technological requirements. A module is a component of an EHR system that meets at least one of the certification criteria established by ONC. Individual EHR modules can be certified and integrated with other certified EHR modules to form a complete, certified EHR system. At the time of our review, technologies certified for use in the EHR Program—that is, complete EHR systems or combinations of modules that collectively can perform the capabilities that constitute a qualified electronic prescribing system—appeared to also meet the Electronic Prescribing Program’s technological requirements. Although according to ONC officials, certified EHR technology is not required to provide information on lower-cost alternatives—which is a component of the Electronic Prescribing Program’s technology requirement—CMS has indicated that an electronic prescribing system that does not conform to that component of the Electronic Prescribing Program’s technology requirement would still meet the definition of a qualified system in 2011 and until this function is more widely available in the marketplace. 
Although providers seeking incentive payments or trying to avoid penalties from the Electronic Prescribing Program must adopt and use qualified electronic prescribing systems, according to a CMS official the Electronic Prescribing Program does not have a process like the EHR Program’s to identify and certify which electronic prescribing systems meet the requirements of a qualified system. As a result, providers may not be certain which systems meet the program’s technological requirement. Reporting requirements. Both the EHR Program and Electronic Prescribing Program require eligible providers to report certain information about their electronic prescribing activities to CMS in order to receive incentive payments, which began in 2009 for the Electronic Prescribing Program and began in 2011 for the EHR Program. (See fig. 9 for a summary of the two programs’ electronic prescribing–related reporting requirements.) However, we also found that the electronic prescribing–related reporting requirements in the EHR Program are more rigorous. Providers seeking incentive payments from the EHR Program have at least five reporting requirements related to electronic prescribing, while providers in the Electronic Prescribing Program have only one reporting requirement. Moreover, the EHR Program requires providers to report more-detailed information—namely, information on their use of various electronic prescribing–related technological capabilities—a requirement that should increase their use of these capabilities. Additionally, while CMS has established reporting requirements providers must meet in order to avoid the penalties under the Electronic Prescribing Program that begin in 2012, CMS has not yet identified what providers must report in order to avoid penalties under the EHR Program, but plans to do so in future rulemakings. 
We also found that the two programs’ reporting requirements are not consistent because they make certain Medicare providers subject to both programs’ reporting requirements during the same year. Specifically, physicians—the largest population among each program’s eligible providers—may choose to participate in the EHR Program in 2011 because the potential incentive payment will likely be higher under that program than under the Electronic Prescribing Program in 2011. However, to avoid the penalty assessed by the Electronic Prescribing Program in 2012, CMS will require physicians to meet the Electronic Prescribing Program’s reporting requirement in 2011, even if they elect to participate in the EHR Program in 2011. Public comments on the agency’s proposed requirements for the 2011 Electronic Prescribing Program included the concern that providers are burdened by having to submit electronic prescribing data more than once. In response, CMS stated that it will study possible methods of aligning the two programs and will include this information in the integration plan it is already required to develop by January 1, 2012, to integrate the reporting requirements in the EHR Program and PQRS, CMS’s quality measures program. However, if CMS adheres to this schedule, the agency will not be able to remove the reporting burden placed on physicians subject to penalties from the Electronic Prescribing Program in 2013, given that the requirements for avoiding penalties in 2013 would likely be proposed in July 2011 and finalized in November 2011. If CMS includes possible methods of aligning the two programs in the integration plan, any action to propose and finalize requirements will take place sometime after January 1, 2012, well beyond the date for making changes to the program in 2013. 
In technical comments provided on a draft of this report, HHS noted that it plans to include possible methods of aligning the two programs for the 2012 program year (and possibly for the 2013 program year) in rulemaking during 2011. Both the EHR Program and Electronic Prescribing Program require providers seeking incentive payments to attest that they have met the programs’ reporting requirements. In the EHR Program, providers will submit the results of their performance on each of the reporting requirements once per program year, while providers in the Electronic Prescribing Program attest that they adopted and used a qualified electronic prescribing system by reporting the electronic prescribing code to CMS. At least with reference to the EHR Program, CMS has acknowledged that attestation may create a potential for fraud and abuse and noted that the agency is developing an audit strategy to address this risk. CMS officials from the Office of E-Health Standards and Services told us they plan to make guidance on this strategy available by May 2011. In the case of the Electronic Prescribing Program, an official from CMS’s Office of Clinical Standards and Quality, which administers that program, told us that the agency did not audit electronic prescribing codes submitted by providers for 2009 and does not have plans to develop an audit strategy for the program. However, this official did tell us that CMS reserves the right to audit any program participant. Health information technology, such as electronic prescribing, has the potential to improve the quality of care received by patients and also reduce costs for the health care system. 
To help encourage the adoption of such technologies among Medicare providers, Congress first established the Electronic Prescribing Program and then the EHR Program, both of which provide incentive payments to eligible providers that adopt and use the appropriate health information technologies and impose penalties on those eligible providers that fail to do so. Despite both programs having a goal to expand the adoption and use of health information technologies by health providers, and in particular, physicians—the largest and only group of providers eligible to earn incentive payments in both programs—we found inconsistencies in the requirements. We believe these inconsistencies may limit the programs’ effectiveness in encouraging the use of health information technologies. First, we found that because the Electronic Prescribing Program lacks a certification process like that established for the EHR Program, physicians and other health care providers who want to obtain incentive payments or avoid penalties from the former program have no assurance that the systems they invest in will meet that program’s technology requirements. In contrast, physicians who invest in certified EHR systems can be assured that in doing so they would meet the current requirements of both programs. In addition, physicians that invest in certified EHR modules integrated together to perform the electronic prescribing–related capabilities could also be assured that they meet the current requirements of the Electronic Prescribing Program and that the adopted technology could later be integrated with other certified modules to form a complete, certified EHR system. 
This inconsistency between the programs has the potential to create uncertainty among physicians as to what technology they should adopt, because although the Electronic Prescribing Program ends after 2014, the EHR Program continues; encouraging physicians to adopt certified electronic prescribing technology now may also help facilitate their later transition between the programs. Nonphysician health care providers who are not eligible to earn incentive payments from the EHR Program could adopt certified technology and in so doing could have assurance that the electronic prescribing technology they invest in meets the Electronic Prescribing Program’s technology requirements. Second, we also found that the two programs have established separate reporting requirements related to electronic prescribing, requiring some physicians who elect to report to the EHR Program to report to both programs in 2011 and potentially requiring physicians to report to both programs through 2014, when penalties for the Electronic Prescribing Program end. CMS recognizes that this duplication places additional burden on physicians, and we believe this duplication could affect the decision of physicians to adopt and use health information technology. However, CMS is still in the process of studying possible ways to address this duplication, and if the agency wants to eliminate the burden for providers in 2012, it would need to do so during its 2011 rulemaking. In addition, CMS has not been consistent in the steps it has taken to ensure the appropriate use of these programs’ resources. 
Namely, CMS plans to establish an audit program for the EHR Program—under which the maximum incentive payment for a provider will generally not exceed $18,000 per year—to address potential fraud and abuse that might arise from the use of self-attestations, but CMS does not have plans to develop a similar approach in the Electronic Prescribing Program, under which CMS paid providers up to approximately $67,500 for 2009. The Electronic Prescribing Program began before the EHR Program, so CMS has already had the opportunity to encounter and learn from challenges in implementation. For example, in the first year of the Electronic Prescribing Program, only about 8 percent of providers received incentive payments, and CMS changed some of the program’s requirements in the second year to encourage greater adoption and use of electronic prescribing technology. For the EHR Program, it is too soon to know how many providers will adopt EHR systems. However, given that the electronic prescribing–related reporting requirements in the EHR Program are more rigorous than the reporting requirement in the Electronic Prescribing Program, CMS may find that it needs to modify the EHR Program requirements to better encourage the adoption and use of EHR systems. Because implementation of the Electronic Prescribing Program preceded the EHR Program, CMS has an opportunity to use the experiences gained in implementing the Electronic Prescribing Program to inform its implementation of the EHR Program in order to determine how to best encourage the adoption and use of health information technology among Medicare providers. One approach could be to incorporate these experiences into the integration plan the agency is already required to develop by January 1, 2012, to integrate the reporting requirements in the EHR Program and PQRS. 
To help improve the effectiveness of the Electronic Prescribing and EHR Programs to encourage the adoption of health information technologies among Medicare providers, the Administrator of CMS should take the following three actions: Encourage physicians and other health care providers in the Electronic Prescribing Program to adopt certified electronic prescribing technology. Expedite efforts to remove the overlap in reporting requirements for physicians who may be eligible for incentive payments or subject to penalties under both the Electronic Prescribing and EHR Programs by, for example, aligning the reporting requirements so that successfully qualifying for incentive payments or for avoiding penalties under the EHR Program would likewise result in meeting the requirements for the Electronic Prescribing Program. Identify factors that helped or hindered implementation of the Electronic Prescribing Program to help support the ongoing implementation of the EHR Program. CMS could include consideration of such factors in the integration plan that the agency is required to develop by January 1, 2012. To help ensure that Electronic Prescribing Program resources are used appropriately, the Administrator of CMS should develop a risk-based strategy to audit a sample of providers who received incentive payments from the Electronic Prescribing Program to help ensure that providers who receive incentive payments meet that program’s requirements. A risk-based strategy could, for example, focus on those providers who received larger incentive payments. We obtained written comments on our draft report from HHS on behalf of CMS, which are reprinted in appendix VI. CMS agreed in full with two recommendations, agreed in principle with one recommendation, and disagreed with a fourth recommendation. CMS disagreed with our first recommendation that the agency direct providers in the Electronic Prescribing Program to use technology certified as an EHR system or module(s). 
While CMS said that it concurred with the notion that eligible providers should be able to use certified EHR systems for the Electronic Prescribing Program, it did not agree that it should direct eligible providers to use prescribing technology that has been certified as an EHR system. CMS said that doing so could result in Electronic Prescribing Program participants having to replace their qualified electronic prescribing systems with systems certified under the EHR Program. We do not recommend that CMS direct those providers who are already participating in the Electronic Prescribing Program to replace their current systems with certified systems. On the contrary, the intent of our recommendation is to have CMS encourage providers in the Electronic Prescribing Program who have not yet adopted electronic prescribing systems, or who plan on upgrading their existing systems, to choose systems that have already been certified through the EHR Program’s certification process. We continue to assert our recommendation because, as we noted in our draft report, this certification process identifies a list of available systems that meet the certification requirements and provides assurance that the technology physicians and other health care providers adopt would meet the technology requirements of the Electronic Prescribing Program. Additionally, the physicians who later participate in the EHR Program could be assured that the technology also meets the requirements in the EHR Program. In our draft report, we noted that there is no comparable process in the Electronic Prescribing Program, and as a result, providers have no assurance that the systems they invest in for the EHR Program will meet that program’s technology requirements. 
Given that the Electronic Prescribing Program ends after 2014 while the EHR Program will continue, encouraging providers to adopt certified electronic prescribing technology now may also help facilitate physicians’ transition between the programs. We have clarified the recommendation to state that CMS should encourage physicians and other health care providers in the Electronic Prescribing Program to adopt certified electronic prescribing technology. CMS agreed with our second recommendation that it expedite efforts to remove the overlap in reporting requirements for physicians eligible for both programs, and noted that it plans to address this overlap in rulemaking during 2011, where applicable. We support CMS’s efforts to expeditiously remove the overlap in the reporting requirements as we recommended. CMS agreed with our third recommendation that it would be helpful for the agency to identify factors that helped or hindered implementation of the Electronic Prescribing Program to help support the ongoing implementation of the EHR Program. While CMS identified factors that may be affecting implementation of electronic prescribing, other factors that may have broader applicability to the implementation of the EHR Program could include the effect of penalties on technology adoption, measuring compliance with program requirements, and validating self-reported attestations. CMS agreed in principle with our fourth recommendation that CMS develop a risk-based strategy to audit a sample of providers who received incentive payments from the Electronic Prescribing Program. In response, CMS said that it agrees that an audit of a sample of providers may be needed; however, it disagreed that such a strategy should necessarily focus on eligible providers who received large incentive payments, noting that such an audit process, if implemented, could select providers at random. As we recommended, we believe that an audit strategy should be implemented for this program. 
We recommended a risk-based audit strategy because although many providers received modest incentive payments in 2009, some providers received payments at least three times as high as the maximum annual incentive payment in the EHR Program. However, if implemented by CMS, a random audit would be consistent with the intent of our recommendation. CMS also noted that because it is considering using Part D data in the future to determine which providers should receive incentive payments for this program, use of these data could also alleviate the need for an audit. However, as we noted in our draft report, CMS officials raised several concerns—concerns echoed in its comments on our draft report—about the reliability of Part D data to determine which providers receive incentive payments. As we reported, CMS officials told us that these data reliability concerns should be resolved before Part D data can be used to determine which providers should receive incentive payments for this program. HHS has also provided technical comments, which we incorporated as appropriate. We also provided excerpts of our report to the VA, Blue Cross Blue Shield of Massachusetts, CVS Caremark, the Florida Agency for Health Care Administration, and organizations that participated in the Southeastern Michigan ePrescribing Initiative, which provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Secretary of HHS, the Administrator of CMS, and the National Coordinator for Health Information Technology in HHS and interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix VII.

This appendix addresses congressional interest in how others have measured whether or to what extent electronic prescribing improves quality or reduces cost. For example, the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) directed us to report on information related to reductions in avoidable medical errors and estimated savings to Medicare resulting from the use of electronic prescribing. To address these issues, we obtained information from organizations about research they conducted, funded, or participated in that measured the effects of electronic prescribing on quality, cost, or both. Specifically, we obtained information from the following organizations: Blue Cross Blue Shield of Massachusetts, CVS Caremark, the Florida Agency for Health Care Administration, and the Southeastern Michigan ePrescribing Initiative. In addition, we reviewed 29 published studies that measured the effects of electronic prescribing on quality, cost, or both. Our information collection, review of published studies, and summaries contained in this appendix focused on specific aspects of quality and cost that we believed were most similar to the policy goals underlying the development of the Electronic Prescribing Program and the Electronic Health Records (EHR) Program.

Quality. We included studies that reported findings related to beneficiary quality, such as reductions in avoidable medical errors.

Cost. We included studies that reported findings related to savings to health care payers, which are those parties generally responsible for paying claims for health care services, because we believed they would be the most applicable to determining the effects of electronic prescribing on costs for Medicare.
We did not review studies that estimated potential savings for providers, such as savings associated with reductions in time spent writing prescriptions or resolving questions about prescriptions. The studies evaluated the effects of a variety of different types of electronic prescribing technology, such as stand-alone electronic prescribing systems and EHR systems that include electronic prescribing–related functions. According to the Healthcare Information and Management Systems Society (HIMSS), EHR systems also typically include information such as patient demographics, progress notes, problems, medications, vital signs, past medical history, immunizations, laboratory data, and radiology reports. Additionally, computerized physician order entry (CPOE) systems (also referred to as computerized provider order entry systems or computerized prescriber order entry systems) allow for electronic ordering of medications and may include other functions, such as ordering laboratory procedures and referrals. Hospitals may employ CPOE systems as part of a strategy to reduce medication errors. Some organizations and published studies evaluated the effects of electronic prescribing systems that had clinical decision support (CDS) capabilities, which can include checks for allergies, drug–drug interactions, overly high doses, clinical conditions, and other patient-specific dose checking, and can provide access to information on patient medical histories, pharmacy eligibility, and formulary and benefits. It is important to note that the electronic prescribing systems evaluated by the organizations we obtained information from and published studies we reviewed may have had technical capabilities that differ from the technological requirements in the Electronic Prescribing Program or the EHR Program.
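To illustrate the kind of drug–drug interaction check that a CDS-capable system performs when a new medication is ordered, the following sketch flags coprescriptions that appear in a known-interaction table. The drug names and interaction table here are hypothetical examples for illustration only; they are not drawn from any system evaluated in this appendix.

```python
# Minimal sketch of a CDS-style drug-drug interaction check.
# The interaction table and drug names are hypothetical examples,
# not taken from any system described in this appendix.

# Each pair of drugs known to interact, stored order-independently.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased statin toxicity",
}

def interaction_alerts(active_medications, new_drug):
    """Return an alert message for each active medication that
    interacts with the newly ordered drug."""
    alerts = []
    for current in active_medications:
        pair = frozenset({current.lower(), new_drug.lower()})
        if pair in KNOWN_INTERACTIONS:
            alerts.append(
                f"ALERT: {new_drug} interacts with {current} "
                f"({KNOWN_INTERACTIONS[pair]})"
            )
    return alerts

# Ordering aspirin for a patient already taking warfarin triggers an alert.
print(interaction_alerts(["Warfarin", "lisinopril"], "aspirin"))
```

Real CDS systems layer many more checks on this pattern (allergy, dose, and formulary checks), but the core lookup against a curated interaction table is the same idea.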
The studies utilized a variety of different methodologies, including the following: (1) pre-post methodologies, which compare dimensions of quality or cost before and after the implementation of electronic prescribing systems or CPOE systems; (2) comparison methodologies, which are used to compare dimensions of quality, cost, or both between a control group (i.e., one that does not electronically prescribe) and an intervention group (i.e., one that does electronically prescribe); and (3) cost simulations and cost-benefit analyses that projected the costs and savings of implementing electronic prescribing systems. Some studies compared a population of providers that electronically prescribed to a population that did not (e.g., one that handwrote prescriptions). For example, some studies identified a population of providers who had access to electronic prescribing systems and compared them to a population of other providers who did not have access to electronic prescribing systems, while other studies identified prescriptions before CPOE implementation and compared those prescriptions to prescriptions transmitted after CPOE implementation. Other studies only looked at populations of providers known to be electronic prescribers, and some studies were designed to evaluate the effect of advanced features of the electronic prescribing system. For example, one study by Steel et al. was designed to compare medication ordering behavior when no alert was triggered by the CPOE system to ordering behavior after alerts were triggered. The organizations we interviewed and published studies we reviewed examined a variety of different outcomes in order to evaluate the effect on quality, cost, or both.
Examples of the outcomes measured to evaluate the effect of electronic prescribing on health care quality include the following: medication order changes resulting from information provided by the electronic prescribing system, such as alerts for potentially inappropriate medications or formulary information, or changes resulting from problems with the quality of the prescription, such as errors identified by the electronic prescribing system related to dosage, directions, or illegibility; changes in potential or actual adverse drug events (ADEs); and provider satisfaction that the electronic prescribing system was improving safety. Examples of the outcomes measured to evaluate the effect of electronic prescribing on cost include the following: drug costs or other outcomes that have cost implications, such as formulary compliance or generic utilization; and follow-up health care costs resulting from reductions in adverse drug events. In terms of health care quality, some studies found differences in medication error rates when electronic prescribing was used. For example, a study conducted by Weingart et al. and funded by Blue Cross Blue Shield of Massachusetts estimated that medication safety alerts prevented 402 ADEs (49 serious or life threatening, 125 significant, and 228 minor) and that alerts that resulted in physicians canceling or changing the medication order may have prevented deaths in 3 cases, permanent disability in 14 cases, and temporary disability in 31 cases. Another study by Devine et al. reported that rates of errors in prescriptions declined from 18.2 percent before to 8.2 percent after implementation of a CPOE system. However, some studies found no significant differences in medication error rates before and after the implementation of electronic prescribing systems. Some of the evaluations that focused on prescription drug costs showed savings when electronic prescribing systems were used.
For example, a cost-benefit analysis conducted by Byrne et al. estimated that the use of the Veterans Health Information Systems and Technology Architecture (VistA), the Department of Veterans Affairs (VA) health system’s electronic health record, which includes electronic prescribing and CDS capabilities, contributed to a cumulative $4.64 billion in value due to the prevention of unnecessary hospitalizations and outpatient visits resulting from prevented ADEs. In this study, the total net value of the VA’s investments in the VistA components modeled was estimated to exceed $3.09 billion. A study by McMullin et al. found that an electronic prescribing system that provided patient formulary information shifted prescriber behavior from selecting drugs from eight high-cost therapeutic groups to less-expensive alternatives. However, a study by Ross et al. found no significant difference in formulary compliance between electronic prescribers (83.2 percent) and paper prescribers (82.8 percent). Of the studies we reviewed in which the electronic prescribing systems were reported to have CDS capabilities—such as drug–drug alerts, drug–allergy alerts, or drug–formulary checks—most reported health care quality or cost effects. For example, a study by DesRoches et al. reported that providers who adopted EHRs with electronic prescribing decision support capabilities averted potentially dangerous drug–drug interactions. One study by Galanter et al. found that the likelihood of contraindicated drugs being administered to patients with inadequate kidney function decreased by 42 percent after electronic prescribing CDS alerts were implemented. Ko et al. surveyed providers and found that the majority viewed drug–drug interaction alerts as increasing their potential to more safely prescribe medications. Another study by Kaushal et al.
estimated that the implementation of a CPOE system with CDS led to $28.5 million in savings—$12.9 million from decreased adverse drug events and $6 million from decreased drug costs—although the study also estimated that the cost to develop, implement, and operate the CPOE system was $11.8 million.

Summaries of Evaluations Obtained from Organizations

Blue Cross Blue Shield of Massachusetts

Beginning in 2003, Blue Cross Blue Shield of Massachusetts contracted with software vendors to provide electronic prescribing software, which included CDS, free of charge to high-volume prescribers in their provider network. Blue Cross Blue Shield of Massachusetts continues to sponsor a limited number of electronic prescribing software licenses free of charge. As of September 2010, Blue Cross Blue Shield of Massachusetts estimated that 60 percent of its network physicians were electronically prescribing.

Study #1:
Design: A pre-post study comparing 1,932 Blue Cross Blue Shield of Massachusetts providers that were using an electronic prescribing device to the providers in the network that were not electronically prescribing (control group). The preintervention period was calendar year 2003 and the postintervention period was 2006.
Intervention: Whether the prescriber used an electronic prescribing device, as determined from data obtained from Blue Cross Blue Shield of Massachusetts’s pharmacy benefits manager.
Outcomes: (1) Prescribing patterns by drug tier. (2) Savings in drug costs as a result of different prescribing patterns.
Findings: (1) Prescribers who used an electronic prescribing device prescribed more generic and on-formulary prescriptions. (2) Prescribers saved Blue Cross Blue Shield of Massachusetts 5 percent on drug costs relative to those prescribers that did not use an electronic prescribing device.
Limitations: Blue Cross Blue Shield of Massachusetts noted that some of the individuals in the control group may have been electronically prescribing, but the study assumed that they were not because of the absence of data.

Study #2: Fischer, M.A., C. Vogeli, M. Stedman, T. Ferris, M.A. Brookhart, and J.S. Weissman. “Effect of Electronic Prescribing With Formulary Decision Support on Medication Use and Cost.” Archives of Internal Medicine, vol. 168, no. 22 (2008): 2433-39.
Design: Blue Cross Blue Shield of Massachusetts provided pharmacy claims data used by the researchers in a pre-post study of the implementation of electronic prescribing software with formulary decision support. The study consisted of an intervention group of 1,198 prescribers who wrote at least one electronic prescription, and a control group of 34,453 prescribers who did not electronically prescribe. Claims data were collected for 18 months—6 months before the intervention (October 2003 through March 2004) and 12 months postintervention (April 2004 through March 2005)—and data on electronic prescriptions were collected in the 12-month postintervention period.
Intervention: Whether the prescriber wrote at least one electronic prescription, as captured by the electronic prescribing system.
Outcomes: (1) The change in the proportion of prescriptions for three formulary tiers before and after electronic prescribing was implemented; and (2) the potential savings associated with this change.
Findings: (1) Electronic prescribing led to a 3.3 percent increase in Tier 1 prescribing—that is, those medications with the lowest copayment. (2) On the basis of average costs, the study estimated that implementation of electronic prescribing software with formulary decision support could lead to a savings of $845,000 per 100,000 patients.

Study #3: Weingart, S.N., B. Simchowitz, H. Padolsky, T. Isaac, A.C. Seger, M. Massagli, R.B. Davis, and J.S. Weissman.
“An Empirical Model to Estimate the Potential Impact of Medication Safety Alerts on Patient Safety, Health Care Utilization, and Cost in Ambulatory Care.” Archives of Internal Medicine, vol. 169, no. 16 (2009): 1465-73.
Design: Blue Cross Blue Shield of Massachusetts funded and provided some data for a study that estimated the quality improvement and savings associated with medication safety alerts. The study examined 1,833,254 prescriptions written using a commercial electronic prescribing system by 2,321 clinicians for 60,352 patients. During the study period (January through June 2006), 279,476 drug–drug interaction alerts were generated. For each drug–drug interaction, expert panelists examined whether it might result in an adverse drug event and the severity of that event. The study used published sources and payer data to estimate the costs to third-party payers associated with different types of health care services due to adverse drug events.
Intervention: All prescriptions were generated from the electronic prescribing system. The company that developed the electronic prescribing system provided researchers information on all drug–drug interactions generated and data on the prescribers’ actions on receiving the alerts.
Outcomes: (1) The likelihood and severity of the potential ADE that the alert prevented, and (2) cost savings estimated from reduced health care utilization.
Findings: (1) The study estimated that medication safety alerts prevented 402 adverse drug events (49 serious or life threatening, 125 significant, and 228 minor). Alerts that physicians “accepted,” meaning the physician either cancelled the prescription or changed to an alternative medication, may have prevented deaths in 3 cases, permanent disability in 14 cases, and temporary disability in 31 cases. (2) Due to lower utilization of health care services, the study estimated annual savings to be $402,619.
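The Weingart et al. figures above can be cross-checked with simple arithmetic: the severity breakdown sums to the reported total of 402 prevented ADEs, and dividing the reported annual savings by that total implies an average saving of roughly $1,000 per prevented event. This is our illustrative arithmetic only; the study itself estimated costs by type of health care service, not as a flat per-event average.

```python
# Cross-check of the Weingart et al. totals cited above (our arithmetic,
# not a reproduction of the study's cost methodology).
prevented_by_severity = {
    "serious or life threatening": 49,
    "significant": 125,
    "minor": 228,
}

total_prevented = sum(prevented_by_severity.values())
annual_savings = 402_619  # estimated annual savings to third-party payers

avg_saving_per_ade = annual_savings / total_prevented

print(total_prevented)            # 402, matching the reported total
print(round(avg_saving_per_ade))  # roughly $1,002 per prevented ADE
```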
CVS Caremark

Beginning in 2000, CVS Caremark made electronic prescribing available through its proprietary iScribe system to interested providers by download from a Web site. In late 2004, CVS Caremark supported electronic prescribing by providing software, hardware, installation, training, and service to providers on behalf of health care payers.

Study #1: Hutchins, D.S., M. Lewis, R. Velazquez, and J. Berger. “E-Prescribing Reduces Beers Prescribing Among the Elderly.” CVS Caremark, May 22, 2007.
Design: A control group study of 383,855 prescription claims written for 14,557 persons over 65 years of age between April 2002 and June 2005 by over 3,700 providers, 70 of whom implemented an electronic prescribing tool that alerted them to the prescribing of “Beers List” medications to patients over 65 years of age.
Intervention: Whether the prescription was dispensed before or after a provider adopted the electronic prescribing tool.
Outcomes: Whether use of the specific electronic prescribing tool had an effect on the prescribing of potentially inappropriate drugs from the Beers List to the elderly.
Findings: Use of the specific electronic prescribing tool that provided alerts specific to Beers List medications can reduce prescribing of those medications among the elderly.

Study #2: Hutchins, D.S., J.N. Liberman, J. Berger, S. Jan, and M.M. Johnson. “The Impact of an Electronic Prescribing Solution on the Selection and Prescribing of Cost-Effective Therapeutic Options.” CVS Caremark, 2009.
Design: A pre-post control group study of over 9 million claims in seven drug classes prescribed by one of over 29,000 providers (about 250 of which used the electronic prescribing tool) that were filled between July 2002 and December 2005.
Intervention: Whether the provider used an electronic prescribing tool.
Outcomes: Whether the use of an electronic prescribing system has an effect on prescribing low-cost generic and mail-delivered drugs.
Findings: Across multiple drug classes, the study reported a link between use of electronic prescribing systems and a greater likelihood that generic drugs were prescribed and that they were dispensed through mail order, both of which likely lower overall costs.

The Florida Agency for Health Care Administration

The Florida Agency for Health Care Administration provided Medicaid providers, at no charge, access to a CDS tool called EMPOWERx, which allows for electronic prescribing and includes the following capabilities: provides comprehensive medication histories, alerts providers to drug–drug and drug–allergy interactions, and provides formulary information.
Design: A comparison of the costs and savings for 1,000 Medicaid providers in the state in the EMPOWERx personal digital assistant program to the total population of Medicaid providers in the state.
Intervention: Whether or not the provider was in the EMPOWERx personal digital assistant program.
Outcomes: (1) The average cost per patient for all prescriptions. (2) The estimated savings for prescriptions written by providers in the EMPOWERx personal digital assistant program, based on the difference between costs for providers in the two groups and the number of patients associated with the EMPOWERx personal digital assistant program providers. (3) The estimated savings for the 1,000 Medicaid providers in the EMPOWERx personal digital assistant program based on information collected about alerts those providers received about drug interactions in response to a medication order, assumptions about avoidable hospitalizations, and assumptions about hospitalization costs.
Findings: In the fourth quarter of 2009, (1) average costs per patient for all prescriptions were about $28 to $30 lower for the providers in the EMPOWERx personal digital assistant program; (2) the cost difference between the two groups represents estimated savings of approximately $5.5 million; and (3) by assuming that 5 percent of the 12,480 high- or very-high-severity drug interactions would have led to hospitalizations and that hospitalizations resulting from preventable drug interactions are associated with an average increased cost of $4,685 per incident, the study estimated that the state Medicaid program saved approximately $2.9 million.

The Southeastern Michigan ePrescribing Initiative

The Southeastern Michigan ePrescribing Initiative, a collaborative effort of employers, health plans, pharmacy benefit managers, physician groups, and others, was launched in 2005 to speed the adoption of electronic prescribing. Some of the studies that resulted from this collaboration are summarized below.

Study #1: An official with Medco described a study it conducted. Medco is a pharmacy benefit manager and member of the collaborative.
Design: A comparison study of a group of 1,165 physicians who electronically prescribed to Medco’s mail-order drug program and 1,000 physicians that did not. Data were collected in the second quarter of 2008. Providers were included in the electronic prescribing group if they had sent at least 20 prescriptions electronically to Medco’s mail-order drug program during the study time period. Providers were included in the nonelectronic prescribing group if they had not met this criterion and provided services in the same zip codes as the providers in the electronic prescribing group.
Outcomes: The average cost per prescription per group for retail and mail-order prescriptions, which was calculated by dividing total costs (identified through claims data) for each category and group by the number of prescriptions for each category and group.
Findings: Providers in the electronic prescribing group saved an average of $2.11 per retail prescription and $7.44 per mail-order prescription compared to the group that did not electronically prescribe.
Limitations: The Medco official noted that the findings were not tested for significance or subjected to other more-rigorous validations. It is possible that providers in the group that did not electronically prescribe were electronically prescribing, just not to Medco’s mail-order drug program. In addition, while the providers in each group were from the same geographic service areas, Medco did not examine the types of patients served by the providers, so it is possible that the groups were serving different patient populations.

Study #2: An official described a study conducted by HaldyMcIntosh, under the direction of the Southeastern Michigan ePrescribing Initiative project manager, Point-of-Care Partners.
Design: A telephone survey of 500 providers participating in the collaborative that responded to the survey, conducted in the fourth quarter of 2007. Only providers that were electronically prescribing were surveyed.
Outcomes: Providers’ perceptions of the effect of electronic prescribing on quality.
Findings: Nearly 70 percent of respondents highly agreed that electronic prescribing improves quality of care; almost 75 percent highly agreed that electronic prescribing improves patient safety; approximately 70 percent were very satisfied with the ease of identifying drug-related interactions; and more than 60 percent reported that they changed a prescription in response to a safety alert at least once.

Study #3: An official with the Health Alliance Plan described a study conducted by Henry Ford Medical Group and the Health Alliance Plan that looked at generic utilization.
Design: A comparison study conducted in 2005 of a group of 24 physicians who electronically prescribed from eight practice sites and 26 physicians from eight comparable practice sites that did not.
Intervention: Whether the practice site had converted to electronic prescribing.
Outcomes: Rate of generic prescribing, using pharmacy claims data, and associated savings.
Findings: Facilities with access to an electronic prescribing system had a 1.25 percent larger increase in their rate of generic prescribing compared with sites that did not have access to an electronic prescribing system. The study estimated that the health plan can save $800,000 per year for each 1 percentage point improvement in the rate of generic prescribing.

Study #4: An official with the Health Alliance Plan described a study conducted by Henry Ford Medical Group and the Health Alliance Plan that looked at the savings associated with adverse drug events.
Design: A cost estimate conducted in 2006 of the savings associated with decreases in ADEs.
Intervention: Whether a prescription was changed based on an alert from the electronic prescribing system, identified from internal data sources.
Outcomes: Estimated savings in (1) avoidable hospitalizations and (2) avoidable emergency room admissions, due to the decrease in ADEs.
Findings: (1) By assuming that 2 percent of hospitalizations are attributable to ADEs, that 33 percent of those are avoidable due to use of the electronic prescribing system, and that $7,000 is saved per avoidable hospitalization, the study estimated that $441,000 was saved in 2007. (2) By assuming that 1 percent of emergency room visits are attributable to ADEs, that 33 percent of those are avoidable due to use of the electronic prescribing system, and that $500 is saved per avoidable emergency room visit, the study estimated that $99,000 was saved in 2007.

Study #5: An official with the Health Alliance Plan described a study it conducted that identified patients taking contraindicated prescription drug combinations.
Design: A file review of pharmacy and medical claims for about 200,000 patients before implementation of electronic prescribing (in 2004) and after implementation of electronic prescribing (in 2007) to identify patients that were prescribed contraindicated drug combinations.
Intervention: The study identified claims before and after implementation of electronic prescribing.
Outcomes: The rate of patients taking contraindicated drug combinations.
Findings: The study reported a 24 percent decrease in the incidence of patients with generally contraindicated medications and a 48 percent decrease in patients taking medications contraindicated for pregnancy 1 year after the implementation of electronic prescribing.

Study #6: An official with the Health Alliance Plan described a survey conducted by Henry Ford Medical Group and the Health Alliance Plan.
Design: A 2006 survey about electronic prescribing attitudes. About 100 physicians in the Henry Ford Medical Group responded to the survey. Only physicians who were electronically prescribing were included in the survey.
Outcomes: A variety of questions related to electronic prescribing attitudes, some of which focused on physician attitudes regarding the effect of electronic prescribing on safety.
Findings: Various findings were reported, including the following percentages of respondents who “strongly agreed” or “somewhat agreed”: 84.6 percent of respondents reported that electronic prescribing has improved the practice of medicine in their clinics; 77.2 percent and 74.8 percent reported that electronic prescribing improves the safety of the care and the quality of the care, respectively, provided to their patients; and 66.7 percent reported that the drug–drug warnings were helpful, 80.5 percent reported that the drug–allergy warnings were helpful, and 68.3 percent reported that the formulary warnings were helpful.

Summaries of Evaluations Obtained from Literature Review

Byrne, C.M., L.M. Mercincavage, E.C. Pan, A.G. Vincent, D.S. Johnston, and B. Middleton.
“The Value from Investments in Health Information Technology at the U.S. Department of Veterans Affairs.” Health Affairs, vol. 29, no. 4 (2010): 629-638.
Design: A comparison study of the VA health system and private-sector health systems on information technology spending, adoption, and quality of care. The study also conducted a cost-benefit analysis to estimate the financial value of key components of the VA’s VistA.
Intervention: Whether or not the health system surveyed had adopted health information technology and whether the health information technology system had certain capabilities as defined by six frameworks in relevant literature and internal VA and publicly available documents.
Outcomes: (1) The information technology–related quality of care, quantified using previously collected quality measures from the VA that could be compared to measures available for the private sector for 2004 to 2007. (2) A cost-benefit analysis that estimated the costs and effects of the core components of the VA VistA system from 2001 to 2007.
Findings: (1) The VA was found to have higher performance on preventive care process measures from 2004 to 2007 relative to the private sector. The VA averaged about 15 percentage points higher than the private sector on preventive care for patients with diabetes and 17 percentage points higher for patients with diabetes who have well-controlled cholesterol. (2) The gross value of the investment in VistA applications was projected to be $7.16 billion. Of the gross value, the researchers estimated that cumulative reductions in unnecessary care attributable to VistA in preventing ADE-related hospitalizations and outpatient visits were valued at $4.64 billion, or 65 percent of the total estimated value.
CDS capabilities: The VA system electronically captures and reports allergies and adverse reactions, inpatient and outpatient medications, and medication orders, and includes CDS such as clinical reminders and order checking.

Cunningham, T.R., E.S. Geller, and S.W. Clarke.
“Impact of Electronic Prescribing in a Hospital Setting: A Process-Focused Evaluation.” International Journal of Medical Informatics, vol. 77, no. 8 (2008): 546-554.
Design: A pre-post study reviewing the medication orders of two different hospitals, a control hospital that did not implement a CPOE system and an intervention hospital that did, at each of three phases of the study: a 4-week baseline phase, a 3-week pilot phase, and a 5-week post-CPOE implementation phase. At the control hospital, 247 handwritten orders were reviewed from the baseline phase, 279 handwritten orders from the pilot phase, and 453 handwritten orders from the post-CPOE implementation phase. At the intervention hospital, 201 handwritten orders were reviewed from the baseline phase, 283 electronically submitted orders were reviewed from the pilot phase, and 587 orders (276 handwritten and 311 submitted electronically) were reviewed from the post-CPOE implementation phase.
Intervention: Whether the physicians’ medication orders were handwritten or submitted electronically in the three phases of the study, as identified from the files of previously processed medication orders stored in the pharmacy departments of each hospital.
Outcomes: (1) Rates of compliance with hospital medication protocols (such as recording date, time, drug name, or dosage), determined by examining behavioral checklists used to collect information on each prescription written; and (2) the time it took for a patient to receive antibiotics, as recorded in the hospital medication ordering database.
Findings: (1) Medication orders submitted electronically at the intervention hospital were compliant with hospital medication protocols 79.9 percent of the time, compared to a 62.9 percent compliance rate for paper orders written at the same hospital and a 64.2 percent compliance rate for paper orders written at the control hospital.
(2) At the intervention hospital, the average amount of time from the medication order until the first dose of antibiotics was administered was shorter for orders submitted through the CPOE system (185.0 minutes) than for paper orders (326.2 minutes).
CDS capabilities: The CPOE system had CDS, but the specific features of the CDS system are not discussed.

DesRoches, C.M., E.G. Campbell, S.R. Rao, K. Donelan, T.G. Ferris, A. Jha, R. Kaushal, D.E. Levy, S. Rosenbaum, A.E. Shields, and D. Blumenthal. “Electronic Health Records in Ambulatory Care—A National Survey of Physicians.” New England Journal of Medicine, vol. 359, no. 1 (2008): 50-60.
Design: A survey of 2,758 physicians conducted between September 2007 and March 2008.
Intervention: Whether or not physicians reported on the survey that they adopted an EHR system, including whether the EHR system was a “fully functional” or “basic” EHR. The study defined a “fully functional” EHR as one that allows physicians to record patients’ clinical and demographic data, view and manage results of laboratory tests and imaging, manage order entry (including electronic prescriptions), and support clinical decisions (including warnings about drug interactions or contraindications). In the study, the principal differences between “fully functional” and “basic” EHRs were the absence of certain order-entry capabilities and CDS in a basic system.
Outcomes: The survey asked respondents a variety of questions related to EHR adoption, including questions related to quality of care.
Findings: Findings reported by the study included the following: of the respondents with fully functional EHR systems, 80 percent reported averting a potentially dangerous drug allergic reaction and 71 percent reported averting a potentially dangerous drug interaction, compared to 66 percent and 54 percent, respectively, of respondents with basic EHR systems.

DesRoches, C.M., E.G. Campbell, C. Vogeli, J. Xheng, S.R. Rao, A.E. Shields, K. Donelan, S. Rosenbaum, S.J. Bristol, and A.K. Jha.
“Electronic Health Records’ Limited Successes Suggest More Targeted Uses.” Health Affairs, vol. 29, no. 4 (2010): 639-646.
Design: The researchers created a survey and surveyed 4,840 acute-care general medical and surgical hospitals from March to September 2008 that were members of the American Hospital Association. The researchers linked the information gathered in their survey to information from three other data sources.
Intervention: Whether the hospital had a comprehensive EHR, defined as an EHR with 24 clinical functions used across all major clinical units in the hospital; a basic EHR system, defined as a system with 10 key functions in at least one major clinical unit in the hospital; or no EHR system.
Outcomes: (1) Performance on quality metrics based on data released from the Hospital Quality Alliance for three clinical conditions—acute myocardial infarction, congestive heart failure, and pneumonia—and prevention of surgical complications; and (2) efficiency, as measured by the hospitals’ risk-adjusted length of stay, risk-adjusted 30-day readmission rates, and risk-adjusted inpatient costs, which were determined using two sources of data, the Medicare Inpatient Impact File and the Area Resource File.
Findings: (1) No relationships were found between EHR adoption and quality process measures for acute myocardial infarction, congestive heart failure, pneumonia, or 30-day risk-standardized mortality for these conditions. Hospitals with EHRs had somewhat better performance on the prevention of surgical complications measures than hospitals without EHRs (93.7 percent for hospitals with a comprehensive EHR, 93.3 percent for hospitals with a basic EHR, and 92.0 percent for those without an EHR). (2) No relationships between the level of EHR adoption and overall risk-adjusted length of stay were found. Hospitals with comprehensive EHRs had similar rates of readmissions within 30 days of hospital discharge compared to hospitals with basic or no EHRs.
The researchers found that hospitals with such systems had comparable inpatient costs to hospitals without them. Pneumonia patients in hospitals with a comprehensive EHR had a length of stay that was, on average, 0.5 days shorter than that of patients in hospitals without EHR. In this article, CDS consisted of clinical reminders and clinical practice guidelines and was associated with marginally better performance on each of the Hospital Quality Alliance quality metrics. Devine, E.B., R.N. Hansen, J.L. Wilson-Norton, N.M. Lawless, A.W. Fisk, D.K. Blough, D.P. Martin, and S.D. Sullivan. “The Impact of Computerized Provider Order Entry on Medication Errors in a Multispecialty Group Practice.” Journal of the American Medical Informatics Association, vol. 17, no. 1 (2010): 78-84. A pre-post study comparing prescriptions written at a multilocation clinic before and after the implementation of a CPOE system. For the pre-CPOE implementation period, between March 1, 2002, and July 15, 2002, for one clinic and between January 2, 2004, and March 4, 2004, for the other clinics, 5,016 prescriptions were evaluated. For the post-CPOE implementation period, between January 14, 2004, and July 13, 2004, for one clinic and between July 1, 2005, and April 26, 2006, for the other clinics, 5,153 prescriptions were evaluated. Whether the prescription was written before or after the implementation of the CPOE system. (1) Rates, (2) types, and (3) severity of errors in prescriptions written before CPOE system implementation compared to prescriptions submitted electronically after the implementation of the CPOE system. (1) Rates of errors in prescriptions declined from 18.2 percent before to 8.2 percent after implementation of the CPOE system, and the adjusted odds of an error occurring after implementation of the CPOE system were 70 percent lower than before implementation.
(2) There were reductions in the adjusted odds of the following error types: illegibility (97 percent), inappropriate abbreviations (94 percent), information missing (85 percent), wrong strength (81 percent), drug–disease interaction (79 percent), and drug–drug errors (76 percent). (3) Electronic prescribing led to a 57 percent decrease in the odds of an error occurring that did not cause harm. There was a 49 percent reduction in the odds of errors occurring that caused harm. The authors note that this reduction was not significant, possibly because of the small number of errors in this category. The CPOE had limited CDS alerts that included basic dosing guidance and duplicate therapy checks. Feldstein, A.C., D.H. Smith, N. Perrin, X. Yang, S.R. Simon, M. Krall, D.F. Sittig, D. Ditmer, R. Platt, and S.B. Soumerai. “Reducing Warfarin Medication Interactions: An Interrupted Time Series Evaluation.” Archives of Internal Medicine, vol. 166, no. 9 (2006): 1009-1015. A pre-post study of 239 primary care providers with 9,910 patients taking Warfarin at 15 primary care clinics that implemented medication interaction alerts for the drug Warfarin into their electronic medical records with computerized order entry and decision support. The baseline period was from January 2000 through November 2002 and the postintervention period was from April 2003 through August 2004. The presence of electronic medical record alerts for selected coprescriptions of medications that interact with Warfarin. When Warfarin and a targeted interacting medication were coprescribed, an alert would appear, whereupon the clinician had to click “OK” to continue prescribing the interacting medication or prescribe a different drug. The interacting prescription rate, defined as the number of coprescriptions of Warfarin-interacting medications per 10,000 Warfarin users per month. At baseline, about a third of patients had an interacting prescription.
Coinciding with the implementation of the alerts, the estimated Warfarin-interacting medication prescription rate decreased from 3,294 interacting prescriptions per 10,000 Warfarin users to 2,804 interacting prescriptions per 10,000 Warfarin users, resulting in a 14.9 percent relative reduction. The electronic medical record had CDS in the form of medication alerts. Galanter, W.L., R.J. Didomenico, and A. Polikaitis. “A Trial of Automated Decision Support Alerts for Contraindicated Medications Using Computerized Physician Order Entry.” Journal of the American Medical Informatics Association, vol. 12, no. 3 (2005): 269-274. A comparison, pre-post study of a CPOE alert designed to appear when a clinician attempted to order potentially contraindicated drugs for patients with decreased kidney function through the CPOE. The study was conducted with 233 patients over an 18-month period (a 4-month pre-CPOE alert period and a 14-month post-CPOE alert period). Whether or not CPOE alerts were generated when contraindicated drugs were ordered electronically. (1) The likelihood of a contraindicated drug being administered before and after implementation of the CPOE alerts, as collected from electronic medical records. (2) Alert compliance. (1) The likelihood of a patient receiving at least one dose of the contraindicated medication decreased from 89 percent in the prealert period to 47 percent in the postalert period. (2) Patient gender was associated with alert compliance rate, with compliance in female patients lower than that in male patients. Alert compliance also decreased as kidney function increased. Housestaff with more than 1 year of residency training had a higher compliance rate than those with less than 1 year of training. Gandhi, T.K., S.N. Weingart, A.C. Seger, J. Borus, E. Burdick, E.G. Poon, L.L. Leape, and D.W. Bates. “Outpatient Prescribing Errors and the Impact of Computerized Prescribing.” Journal of General Internal Medicine, vol. 20, no. 9 (2005): 837-841.
A comparison study of 1,879 prescriptions reviewed by a pharmacist and submitted at four adult primary care practices, two of which utilized electronic prescribing and two that did not, over a period of 7 months (September 1999 to March 2000). Whether prescriptions were written at computerized or noncomputerized sites. Rates of (1) prescribing errors and (2) potential adverse drug events, as determined by the expert reviewers from prescription reviews, chart reviews, and patient surveys. (1) Sites with electronic prescribing contained errors in 4.3 percent of prescriptions, compared to 11.0 percent of prescriptions written at sites without electronic prescribing. (2) Sites with electronic prescribing contained potential ADEs in 2.6 percent of prescriptions, compared to 4.0 percent of prescriptions at sites without electronic prescribing. The authors note that the differences between the two groups in errors and prevented ADEs were not significant, but that the rates of prescribing errors and prevented ADEs could have been substantially reduced with more advanced CDS. The system provided no automatic checks for correct doses, frequencies, allergies, or drug interactions, and the authors found that decision support (such as drug-dose checking and drug-frequency checking) could have prevented 97 percent of prescribing errors and 95 percent of potential ADEs. Kaushal, R., A.K. Jha, C. Franz, J. Glaser, K.D. Shetty, T. Jaggi, B. Middleton, G.J. Kuperman, R. Khorasani, M. Tanasijevic, and D.W. Bates. “Return on Investment for a Computerized Physician Order Entry System.” Journal of the American Medical Informatics Association, vol. 13, no. 3 (2006): 261-266. A cost-benefit assessment of the implementation of CPOE with CDS at Brigham and Women’s Hospital, a 720-adult-bed tertiary care medical center in Boston, from 1993 through 2002.
Determined the capital and operational costs of implementing a CPOE with CDS and of each CDS intervention through internal documents, interviews with the CPOE developers, and a review of published literature. Whether or not the CDS intervention was active. Identified cost savings associated with specific CDS interventions. GAO grouped the savings into those resulting from: (1) decreased ADEs and (2) decreased drug costs. Of the estimated $28.5 million in savings from the CPOE, (1) $12.9 million was due to CDS interventions that reduced ADEs, and (2) $6 million was due to CDS interventions that reduced drug costs. The cost to develop, implement, and operate the CPOE was $11.8 million, resulting in cumulative savings of $16.7 million. The CPOE was equipped with CDS. Kaushal, R., L.M. Kern, Y. Barrón, J. Quaresimo, and E.L. Abramson. “Electronic Prescribing Improves Medication Safety in Community-Based Office Practices.” Journal of General Internal Medicine, vol. 25, no. 6 (2010): 530-536. A pre-post study of 30 ambulatory care providers (15 electronic prescribers and 15 paper prescribers) in 12 practices in the Hudson Valley region of New York, conducted from September 2005 to June 2007. The researchers collected 2 weeks of carbon copies and downloads of prescriptions to identify medication errors at baseline and 1-year follow-up and compared error rates among and between the electronic and paper prescriber groups. Whether or not the physicians’ medication orders were handwritten or submitted electronically through a stand-alone electronic prescribing system, as identified through the carbon copies of prescriptions or prescription downloads.
(1) Medication prescribing errors (including omitting the quantity or incorrect medication dose and duration), (2) illegibility errors, (3) near misses (i.e., potentially harmful errors that were intercepted or reached the patient but caused no harm), (4) ADEs, (5) rule violations (e.g., failing to write “po” for a medication taken orally), and (6) effects of CDS on medication errors. (1) The medication prescribing error rate among electronic prescribers decreased from 42.5/100 prescriptions at baseline to 6.6/100 prescriptions at 1-year follow-up. Electronic prescribers had a lower medication prescribing error rate than paper prescribers (6.6/100 v. 38.4/100). (2) Electronic prescribing eliminated all illegibility errors. (3) Electronic prescribers had fewer near misses (1.3/100 v. 2.7/100) than paper prescribers. (4) Rates of preventable adverse drug events trended lower among electronic prescribers (0.04 v. 0.26 per 100 prescriptions). The authors noted that this was not a significant difference between electronic and paper prescribers. (5) Electronic prescribing eliminated nearly all types of rule violation errors. (6) Electronic prescribers had fewer errors judged preventable by advanced/basic CDS at 1 year than paper prescribers. The stand-alone electronic prescribing system was equipped with CDS. Kim, G.R., A.R. Chen, R.J. Arceci, S.H. Mitchell, K.M. Kokoszka, D. Daniel, and C.U. Lehmann. “Error Reduction in Pediatric Chemotherapy: Computerized Order Entry and Failure Modes and Effects Analysis.” Archives of Pediatrics and Adolescent Medicine, vol. 160 (2009): 495-498. A pre-post study of chemotherapy orders written in a pediatric oncology unit. The study compared 1,259 paper orders written before implementation of the CPOE system (from July 31 to August 1, 2001, and from August 14, 2001, to August 22, 2002) to 1,116 electronic orders written after implementation of the CPOE system (from February 3, 2003, to February 12, 2004).
Whether the orders were submitted before or after the implementation of the CPOE. A paper-based survey was used to capture the pre-CPOE data, and the post-CPOE data were captured through the system. Data on chemotherapy steps of high morbidity/mortality potential if missed, as determined by attending oncologists. Findings reported by the study included the following: after CPOE implementation, daily chemotherapy orders (1) were less likely to have improper dosing, incorrect dosing calculations, missing cumulative dose calculations, and incomplete nursing checklists, and (2) had a higher likelihood of not matching medication orders to treatment plans. Ko, Y., J. Abarca, D.C. Malone, D.C. Dare, D. Geraets, A. Houranieh, W.N. Jones, W.P. Nichol, G.P. Schepers, and M. Wilhardt. “Practitioners’ Views on Computerized Drug-Drug Interaction Alerts in the VA System.” Journal of the American Medical Informatics Association, vol. 14, no. 1 (2007): 56-64. A survey of 258 prescribers and 84 pharmacists from seven VA Medical Centers across the United States. The time period of the survey was between 2004 and 2005. Survey participants had prescribing authority in a VA Medical Center and an active outpatient practice. In the VA’s computerized patient record system, prescribers enter prescription orders electronically for review and verification by a pharmacist before dispensing. The survey asked respondents a variety of questions, including those related to (1) respondent satisfaction with the combined inpatient and outpatient CPOE system (the computerized patient record system), (2) attitudes toward drug–drug interaction alerts, and (3) suggestions for improving drug–drug interaction alerts. Findings reported in the study included the following: (1) in general, both prescribers and pharmacists indicated that the computerized patient record system had a positive effect on their jobs. Pharmacists revealed more favorable attitudes toward the computerized patient record system than prescribers.
(2) Sixty-one percent of prescribers felt that drug–drug interaction alerts had increased their potential to prescribe safely. Thirty percent of prescribers felt that drug–drug interaction alerts provided them with exactly what they needed most of the time. (3) Both prescribers and pharmacists agreed that drug–drug interaction alerts should be accompanied by management alternatives (73 percent and 82 percent, respectively) and more detailed information (65 percent and 89 percent, respectively). Kocakulah, M.C., and J. Upson. “Cost Analysis of Computerized Physician Order Entry Using Value Stream Analysis: A Case Study.” Research in Healthcare Financial Management, vol. 10, no. 1 (2005): 13-25. A case study of a 400-bed urban hospital, using value-stream mapping to conduct a cost analysis of a CPOE system. The study determined the potential costs and adverse drug reaction reductions related to CPOE implementation in this hospital, which did not have CPOE installed. This hospital did not have an electronic prescribing or CPOE system. Using published studies or reports and data from the hospital, this study determined (1) the projected decrease in medication errors, (2) the potential net savings, (3) the net present value, and (4) the project internal rate of return for a CPOE system, based on the severity, average cost, and projected reduction of adverse drug reactions. (1) The percentage of illegible orders was projected to decrease by 78 percent, incomplete orders by 71 percent, incorrect orders by 46 percent, and drug therapy problems by 9 percent. (2) The projected net savings were $155,686 per year. (3) The project’s projected 5-year net present value was negative $1,270,112. (4) The projected 5-year internal rate of return was negative 24 percent. Because of these projections, the authors did not recommend that the hospital invest in a CPOE system at that time. McCullough, J.S., M. Casey, I. Moscovice, and S. Prasad.
“The Effect of Health Information Technology on Quality in U.S. Hospitals.” Health Affairs, vol. 29, no. 4 (2010): 647-654. A comparison study of 3,401 nonfederal acute-care U.S. hospitals from 2004 to 2007. Whether the hospital had an EHR and a CPOE system, as identified from information from the American Hospital Association’s annual survey and the HIMSS analytics database that describes hospitals’ health information technology adoption decisions. Performance on six process quality measures in the CMS Hospital Compare database. For nearly all measures, average quality was higher for hospitals with EHR and CPOE, with larger effects for academic hospitals than for hospitals overall. However, the difference was only significant for pneumococcal vaccine administration (2.1 percent increase) and use of the most appropriate antibiotic for pneumonia (1.3 percent increase). The study defined an EHR as a set of applications including a computerized patient record with a clinical data repository and some CDS capabilities, such as providing treatment recommendations. McMullin, S.T., T.P. Lonergan, and C.S. Rynearson. “Twelve-Month Drug Cost Savings Related to Use of an Electronic Prescribing System with Integrated Decision Support in Primary Care.” Journal of Managed Care Pharmacy, vol. 11, no. 4 (2005): 322-332. A comparison study of 38 primary care clinicians (19 electronic prescribing system users; 19 electronic prescribing system nonusers) conducted from June 2002 through May 2003. Whether or not the physician was using an electronic prescribing system with CDS capabilities, as identified through the study design. Using pharmacy claims, determined (1) whether the 6-month savings on new prescriptions were sustained during the 12 months of follow-up, (2) the 12-month cost savings associated with CDS on pharmacy claims, and (3) the prescribing behavior of clinicians for eight high-cost therapeutic groups targeted by electronic messages to prescribers.
(1) Savings seen in the last 6 months of the 12-month follow-up period were greater than those in the first 6 months ($748 per member per month at 6 months versus $794 per member per month at 12 months). (2) Use of the electronic prescribing system was associated with a sustained decrease in prescription costs. Over the 12-month follow-up period, the average cost per new prescription decreased by $1.00 in the intervention group and increased by $3.75 in the control group. The number of other refilled prescriptions decreased more in the intervention group than in the control group. The number of new prescriptions increased slightly more in the intervention group than in the controls. (3) Prescriptions for high-cost target medications overall decreased by 9.1 percent in the intervention group because of CDS and increased in the control group by 8.2 percent. Compared with the control group, the prescription ratio for high-cost drug classes was a relative 17.5 percent lower in the group using the CDS (35.8 percent versus 43.4 percent). The electronic prescribing system had integrated CDS, formulary, payor, and clinical guideline alert messaging capabilities. Peterson, J.F., G.J. Kuperman, C. Shek, M. Patel, J. Avorn, and D.W. Bates. “Guided Prescription of Psychotropic Medications for Geriatric Inpatients.” Archives of Internal Medicine, vol. 165, no. 7 (2005): 802-807. A comparison study at a tertiary care hospital, including 3,718 patients 65 years or older who were prescribed a psychotropic medication targeted in the intervention and admitted for medical, surgical, neurology, or gynecology services from October 8, 2001, to May 16, 2002. Whether the geriatric decision support system, which included medication dosing and selection guidelines for elderly patients, was activated.
The study measured several outcomes, including: (1) the rate at which prescriptions were written in agreement with expert recommendations regarding the recommended daily dose for the initial drug order, (2) the incidence of dosing at least 10-fold greater than the recommended daily dose, and (3) the prescription of nonrecommended drugs. Findings presented included the following: (1) The prescriptions for psychotropic medications agreed with the system recommendations for dosing more frequently during the intervention periods, when the geriatric decision support application was available. The agreement rate for both control periods was lower than the agreement rate for the intervention periods. (2) During the intervention periods, the incidence of 10-fold dosing decreased from 5.0 percent to 2.8 percent, and (3) the prescription of nonrecommended drugs decreased from 10.8 percent to 7.6 percent. Ross, S.M., D. Papshev, E.L. Murphy, D.J. Sternberg, J. Taylor, and R. Barg. “Effects of Electronic Prescribing on Formulary Compliance and Generic Drug Utilization in the Ambulatory Care Setting: A Retrospective Analysis of Administrative Claims Data.” Journal of Managed Care Pharmacy, vol. 11, no. 5 (2005): 410-415. A comparison study of 110,975 paid pharmacy claims submitted by two groups—95 providers using predominantly electronic prescribing and a matched sample of 95 providers who did not electronically prescribe—between August 1, 2001, and July 31, 2002. Whether or not a provider used electronic prescribing during the study period. (1) Formulary compliance, which was assessed using the formulary code field in pharmacy data claims, and (2) generic utilization rates, which were assessed using First DataBank National Drug Data File Plus software to determine the brand or generic status of each drug. (1) Formulary compliance for both groups was similar. The electronic prescribing group was 83.2 percent compliant, compared to 82.8 percent compliance in the group that did not electronically prescribe.
(2) Generic utilization rates were also similar: 37.3 percent for those who electronically prescribed and 36.9 percent for those who did not. The electronic prescribing system provided drug and formulary information during the prescribing process. Spencer, D.C., A. Leininger, R. Daniels, R. Granko, and R.R. Coeytaux. “Effect of a Computerized Prescriber-Order-Entry System on Reported Medication Errors.” American Journal of Health-System Pharmacy, vol. 62, no. 4 (2005): 416-419. A pre-post study of two medicine units at an academic hospital before and after implementation of a CPOE with CDS, compared with units in the hospital that did not implement a CPOE system. Data were collected over a period of 16 months. Whether the medication error was reported before or after the implementation of the CPOE system in two medicine units of the hospital, and whether or not the medication error was reported from the two medicine units of the hospital that implemented CPOE. Reported medication errors and potential medication errors, as obtained from the hospital’s center for medication safety. Implementation of the CPOE system in the two units was associated with an increase in reported errors, from 0.068 per discharge preimplementation to 0.088 per discharge after implementation. The units in the hospital that did not implement CPOE systems had a decrease in the number of reported errors, from 0.133 per discharge to 0.079 per discharge. The authors note that while the error rates increased in the units with CPOE, the error rates in the units in the hospital without CPOE decreased. Therefore, the increase in reported medication errors on units with CPOE systems may have been attributable to the direct or indirect consequences of the introduction of the CPOE system. Steele, A.W., S. Eisert, J. Witter, P. Lyons, M.A. Jones, P. Gabow, and E. Ortiz. “The Effect of Automated Alerts on Provider Ordering Behavior in an Outpatient Setting.” PLoS Medicine, vol. 2, no. 9 (2005): 864-870. A pre-post study of the implementation and effect of alerts generated during medication ordering in primary care clinics. The baseline data were collected from August 1, 2002, to November 29, 2002, and the postintervention data were collected from December 1, 2002, to April 30, 2003. All provider staff entered medication orders using CPOE. The study design compared baseline ordering behavior (when no alert was triggered) to ordering behavior after alerts were triggered. (1) The number of medication orders not completed in response to an alert, (2) the number of rule-associated laboratory test orders initiated after an alert was displayed, as captured in the electronic prescribing system, and (3) the rates of adverse drug events, assessed by completing file reviews on a random sample of medication orders. (1) Before the alerts were implemented, prescribers did not complete medication orders 5.4 percent of the time, compared to 8.3 percent of the time after the alerts were implemented. The authors noted that this was not a significant difference between the groups. When the alert was for an abnormal laboratory value, the percentage of times the medication order was not completed increased from 5.6 percent at baseline to 10.9 percent during the intervention. (2) Comparing the pre- and postintervention periods for medication orders when no alert was displayed, prescribers ordered associated laboratory tests 17 percent of the time during the preintervention period, compared to 16.2 percent of the time in the postintervention period. The authors state that this finding was not significant and indicates that there was no trend, in general, toward increased laboratory test ordering during the study period. (3) The preintervention group had a potential ADE in 10.3 percent of charts, compared to 4.3 percent of the charts in the postintervention group.
The authors state that the difference between the groups was not significant and that the study was too small to determine whether there was any true effect on adverse drug reactions. Stone, W.M., B.E. Smith, J.D. Shaft, R.D. Nelson, and S.R. Money. “Impact of a Computerized Physician Order-Entry System.” Journal of the American College of Surgeons, vol. 208, no. 5 (2009): 960-969. A pre-post study of patient-safety measures before and after CPOE implementation at the Mayo Clinic Hospital in Phoenix, Arizona. The CPOE system was implemented from May 8, 2007, to April 30, 2008. Whether or not the physicians’ orders were submitted electronically using the CPOE system. (1) Medication errors and (2) order-implementation time. (1) There were no significant differences in the rate of medication errors, which were captured through self-reporting, in any of the study time periods. (2) The time from a doctor placing an order, which was recorded or captured electronically, to a nurse receiving that order decreased from 41.2 minutes pre-CPOE to 27 seconds post-CPOE. Taylor, J.A., L.A. Loan, J. Kamara, S. Blackburn, and D. Whitney. “Medication Administration Variances Before and After Implementation of Computerized Physician Order Entry in a Neonatal Intensive Care Unit.” Pediatrics, vol. 121, no. 1 (2008): 123-128. A comparison, pre-post study of how the actual medication administration differed from the medication order before and after CPOE implementation. The study was conducted in the 30-bed Neonatal Intensive Care Unit at Madigan Army Medical Center from August 2004 to April 2006 (pre-CPOE: August 2004 to June 2005; post-CPOE: August 2005 to April 2006). Whether or not the physicians’ medication orders were handwritten or submitted electronically using a CPOE system. (1) Differences between the medication order and how the medication was actually administered. (2) Reasons for variances between the medication order and administration, as noted by the research nurses.
(1) The variation between the medication order and how the medication was actually administered was lower post-CPOE than pre-CPOE (11.6 percent and 19.8 percent, respectively). (2) Findings related to rates of variance in medication order and administration in the pre- and post-CPOE periods included the following: similar variances in both periods were found for administration mistakes, pharmacy problems, and prescribing problems; and variances related to administration of drugs by the wrong route and at the wrong time were significantly lower after CPOE implementation. The CPOE utilized CDS and display formats and defaults configured specifically for use in the Neonatal Intensive Care Unit for ordering prescriptions. Upperman, J.S., P. Staley, K. Friend, W. Neches, D. Kazimer, J. Benes, and E.S. Wiener. “The Impact of Hospitalwide Computerized Physician Order Entry on Medical Errors in a Pediatric Hospital.” Journal of Pediatric Surgery, vol. 40, no. 1 (2005): 57-59. A pre-post study comparing orders written before the implementation of a CPOE system in a children’s hospital from January 2002 to October 2002 to those written after the implementation of the CPOE system in November 2003 (the end point of the study period was not specified). Whether a prescription was written before or after the implementation of CPOE. The rate and types of ADEs, determined by analyzing data collected at the hospital. ADE rates pre-CPOE were 0.3 per 1,000 doses, compared to 0.37 per 1,000 doses post-CPOE. The authors note that the study demonstrates a substantial decrease in harmful ADEs, but no significant difference in all ADEs between the pre- and post-CPOE periods. The rate of harmful ADEs pre-CPOE was 0.05 per 1,000 doses, compared to 0.03 per 1,000 doses post-CPOE. The CPOE had CDS. Vaidya, V., A.K. Sowan, M.E. Mills, K. Soeken, M. Gaffoor, and E. Hilmas.
“Evaluating the Safety and Efficiency of a CPOE System for Continuous Medication Infusions in a Pediatric ICU.” AMIA Symposium Proceedings, 2006. A comparison study evaluating the safety of a CPOE system compared to a handwritten, hand-calculated method for prescribing continuous drug infusions for pediatric ICU patients. The time period of the study was not specified. Whether the orders for the drug infusions were generated in the CPOE system or through a handwritten, hand-calculated method. The (1) occurrence and (2) risk level of errors, as identified through a review of order sheets for errors. (1) The drug infusion orders generated using the CPOE system had fewer errors (4.3 percent) than those that were handwritten (73 percent). (2) Twenty-five percent of the errors in the handwritten group were judged to be “high-risk,” compared to 0 percent in the CPOE group. All of the errors in the CPOE group were missing signatures. The CPOE included decision support. Varkey, P., P. Aponte, C. Swanton, D. Fischer, S.F. Johnson, and M.D. Brennan. “The Effect of Computerized Physician-Order Entry on Outpatient Prescription Errors.” Managed Care Interface, vol. 20, no. 3 (2007): 53-57. A retrospective survey of 4,527 prescriptions ordered from March 1996 through March 2002 at Mayo Clinic ambulatory clinics, comparing prescriptions ordered through the clinic’s CPOE to handwritten orders. Whether the type of prescription generated was handwritten, computerized, or preprinted. The (1) prevalence and (2) types of pharmacist-intercepted prescription errors in computerized and handwritten prescriptions. (1) The frequency of intercepted prescription errors was highest in handwritten prescriptions (7.4 percent), followed by computerized prescriptions (4.9 percent) and preprinted prescriptions (1.7 percent). (2) The most commonly intercepted prescription errors involved the dosage form, dispense quantity, medication dosage, and drug allergies.
CPOE resulted in lower rates of every type of intercepted prescription error, including form, dosage, quantity, allergy, frequency, drug name, patient name, illegibility, route, and drug–drug interaction, compared to handwritten prescriptions. The CDS included required fields and duplicate order checking. Wang, C.J., M.H. Patel, A. Schueth, M. Bradley, S. Wu, J.C. Crosson, P.A. Glassman, and D.S. Bell. “Perceptions of Standards-based Electronic Prescribing Systems as Implemented in Outpatient Primary Care: A Physician Survey.” Journal of the American Medical Informatics Association, vol. 16, no. 4 (2009): 493-502. A cross-sectional survey fielded from October 2006 to December 2006 among physicians enrolled in a Blue Cross Blue Shield electronic prescribing sponsorship program. Whether or not the physician had installed an electronic prescribing system. (1) Adequacy of available drug formulary and medication history information and (2) perceptions of the electronic prescribing system’s enhancement of job performance. (1) Electronic prescribing users were more likely than nonusers to “agree” or “strongly agree” that the information available about the patient’s medication history helps them to identify clinically important drug–drug interactions and prevent callbacks from pharmacies for safety problems. Electronic prescribing users were slightly more favorable toward statements that electronic prescribing system drug coverage helps patients maintain lower drug costs. (2) Sixty-two percent of electronic prescribers “agreed” or “strongly agreed” that electronic prescribing improves the quality of care they can deliver. Weingart, S.N., B. Simchowitz, L. Shiman, D. Brouillard, A. Cyrulik, R.B. Davis, T. Isaac, M. Massagli, L. Morway, D.Z. Sands, J. Spencer, and J.S. Weissman. “Clinicians’ Assessments of Electronic Medication Safety Alerts in Ambulatory Care.” Archives of Internal Medicine, vol. 169, no. 17 (2009): 1627-1632.
A survey mailed to 300 clinicians in December 2007 about the value of electronic prescribing. Whether clinicians adopted a commercial electronic prescribing system with drug-allergy and drug-interaction alerts from a drug reference database and used the electronic prescribing system to write at least 100 prescriptions per month between January 1 and June 30, 2006. (1) Clinicians’ satisfaction with electronic prescribing and (2) perceptions of the effects of electronic prescribing and alerts on the safety, efficiency, and cost of care. (1) Forty-seven percent were satisfied or very satisfied with medication safety alerts. Clinicians said electronic prescribing would improve the quality of care delivered (78 percent); prevent medical errors (83 percent); enhance patient satisfaction (71 percent); and improve clinician efficiency (75 percent). (2) Seventy-eight percent said at least one alert had caused them to change their behavior in the past 6 months. Fifty-seven percent said an alert might have prevented at least one error or injury in the average month. Twenty-two percent said an alert had prevented a serious error or injury in their practice. Sixty-three percent of respondents said an alert caused them to take an action other than changing an alerted prescription (counseling the patient, looking up information in a drug reference, or changing how they monitor a patient). The study also reported participant ratings on potential problems associated with the drug-allergy or drug-interaction alerts. For example, 58 percent of respondents reported that alerts were triggered by discontinued medications. Yu, F.B., N. Menachemi, E.S. Berner, J.J. Allison, N.W. Weissman, and T.K. Houston. “Full Implementation of Computerized Physician Order Entry and Medication Related Quality Outcomes: A Study of 3364 Hospitals.” American Journal of Medical Quality, vol. 24, no. 4 (2009): 278-286.
A comparison study of hospitals—264 that used a CPOE system to enter all orders and 3,100 that did not—over a 1-year period (July 2003 to June 2004). Whether the hospital reported on the HIMSS Analytics survey that it entered all orders through CPOE. Performance on hospital quality-of-care measures from CMS. Of the 11 medication-related measures, the mean performance on 6 cardiovascular-related measures was higher among CPOE hospitals than non-CPOE hospitals, and the mean performance on one measure, antibiotics within 4 hours of arrival, was lower among CPOE hospitals than non-CPOE hospitals. Yu, F., M. Salas, Y. Kim, and N. Menachemi. “The Relationship Between Computerized Physician Order Entry and Pediatric Adverse Drug Events: A Nested Matched Case-Control Study.” Pharmacoepidemiology and Drug Safety, vol. 18, no. 8 (2009): 751-755. A comparison study between 54 pediatric hospitals that had CPOE systems and 68 pediatric hospitals that did not. Patient data were retrieved between October 1, 2005, and September 30, 2006. Whether a CPOE system was fully implemented for all orders and clinical domains, as identified through the HIMSS Analytics database. The odds of ADEs, using data from the National Association of Children’s Hospitals and Related Institutions’ case-mix comparative data program and HIMSS. The odds of experiencing an ADE were 42 percent higher for hospitals without CPOE compared to those with CPOE. Zhan, C., R.W. Hicks, C.M. Blanchette, M.A. Keyes, and D.D. Cousins. “Potential Benefits and Problems with Computerized Prescriber Order Entry: Analysis of a Voluntary Medication Error-Reporting Database.” American Journal of Health-System Pharmacy, vol. 63, no. 4 (2006): 353-358. A comparison study of 120 facilities that reported having CPOE in all clinical areas and 339 facilities that did not. Facilities included general community hospitals, specialty hospitals, and outpatient clinics. Data analyzed were from 2003.
Whether the facility had CPOE, as determined by Medmarx, a national voluntary medication-error reporting database. (1) The number of errors reported by CPOE versus non-CPOE facilities and (2) the characteristics of errors caused by CPOE, as captured in the Medmarx database. The authors stated that the facilities that self-reported data to the Medmarx database appeared to have different levels of underreporting of medication errors; therefore, these data cannot be used to assess the potential benefits of CPOE or to compare rates of medication errors between providers. Nevertheless, facilities with CPOE had fewer inpatient errors, more outpatient errors, and smaller numbers of outpatient and inpatient errors that reached or harmed patients compared to facilities without CPOE. The article did not evaluate the sophistication of the CDS employed by the studied CPOE systems. This appendix provides additional details regarding our scope and methodology for reporting information on the providers who participated in and received incentive payments from the 2009 Electronic Prescribing Program. To conduct our analyses, we analyzed four Centers for Medicare & Medicaid Services (CMS) files. 2009 Electronic Prescribing Program Participation. We obtained a file from CMS in October 2010 that provided summary information for each provider that participated in the Electronic Prescribing Program in 2009, which CMS also used to make payments to providers for 2009. For each combination of national provider identifier and tax identification number, this file contained the following information: the total number of times each of the three electronic prescribing codes was submitted; the total number of applicable visits; whether CMS determined that the provider would receive an incentive payment; and the amount of the incentive payment. 2009 Electronic Prescribing Program Eligible Providers.
We obtained a file from CMS in October 2010 that listed each provider that had at least one applicable visit for the Electronic Prescribing Program in 2009—which we refer to in this appendix as “applicable providers.” Over 597,000 Medicare providers had at least one applicable visit for the Electronic Prescribing Program in 2009. However, not all of these providers have prescribing authority; consequently, some individuals included in the count of 597,000 providers may not have been eligible for an electronic prescribing incentive payment. National Plan and Provider Enumeration System (NPPES) Downloadable File. We downloaded this file from CMS’s Web site (http://nppes.viva-it.com/NPI_Files.html) in October 2010. We used the variable “Provider Business Practice Location Address State Name” to obtain the state for providers. Provider Enrollment, Chain, and Ownership System (PECOS) Global Extract File. We obtained this file from CMS in October 2010. In the few cases when we were unable to obtain the state for providers using the NPPES Downloadable File, we attempted to determine the state for providers using either the “Practice Location State” variable or the “Correspondence Address State” variable from the PECOS Global Extract File. CMS determined which providers met or exceeded the reporting requirement for 2009 using each unique combination of providers’ national provider identifiers and tax identification numbers. However, we analyzed and report information at the national provider identifier level only so that we could present results for unduplicated providers. We were unable to match 1,052 applicable providers (less than 0.2 percent of applicable providers) to either the NPPES Downloadable File or the PECOS Global Extract File.
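The matching approach described above (collapsing national provider identifier and tax identification number combinations to unique identifiers, then looking up each provider's state in the NPPES file before falling back to PECOS) can be sketched as follows. The data structures and field names are illustrative simplifications, not the actual layouts of the CMS files.

```python
# Sketch of the provider-state matching logic. Inputs are hypothetical
# stand-ins: a list of claim rows (one per NPI/tax-ID combination) and
# two lookup tables mapping NPI to state, for NPPES and PECOS.

def provider_states(claims_rows, nppes_state, pecos_state):
    """Map each unique NPI to a state; None if found in neither file."""
    # Collapse NPI/tax-ID combinations so each provider counts once.
    unique_npis = {row["npi"] for row in claims_rows}
    states = {}
    for npi in sorted(unique_npis):
        # Prefer the NPPES practice-location state; fall back to PECOS.
        states[npi] = nppes_state.get(npi) or pecos_state.get(npi)
    return states

claims = [{"npi": "1", "tin": "A"}, {"npi": "1", "tin": "B"}, {"npi": "2", "tin": "C"}]
nppes = {"1": "WA"}
pecos = {"2": "OR"}
print(provider_states(claims, nppes, pecos))  # {'1': 'WA', '2': 'OR'}
```

Providers mapped to None by this sketch correspond to the small share of applicable providers that could not be matched to either file.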
To determine the percentage of Medicare providers who received incentive payments by state and the average incentive payment by state, we obtained state information for over 99 percent of applicable providers using data from the NPPES Downloadable File and for the remaining applicable providers using data from the PECOS Global Extract File. We excluded the approximately 0.2 percent of applicable providers mentioned above that we could not match to either the NPPES Downloadable File or the PECOS Global Extract File. In addition, we excluded approximately another 0.2 percent of applicable providers for whom we were unable to obtain state information, the 0.9 percent of applicable providers who were from U.S. insular areas, and six providers whose state information we deemed unreliable.

Appendix IV: Maximum Electronic Health Record (EHR) Program Incentive Payments, Based on First Year of Payment

[Table omitted: maximum EHR incentive payments by year, based on the first year of payment. Payments are equal to 75 percent of the provider’s total allowed charges for services covered by Medicare Part B for the year, but are subject to the annual limits displayed in the table.]

Core requirements (providers must meet all 15):
1. Generate and transmit permissible prescriptions electronically for more than 40 percent of all permissible prescriptions.
2. Enter medication orders into a computerized physician order entry (CPOE) system for more than 30 percent of patients with at least one medication in their medication lists.
3. Enter medication lists or indicate no current prescriptions for more than 80 percent of patients.
4. Enter medication allergies or indicate no known allergies for more than 80 percent of patients.
5. Enable the EHR system’s ability to check a prescription for potential drug–drug and drug–allergy interactions.
6. Record as structured data demographics for more than 50 percent of patients.
7. Record as structured data a list of current and active diagnoses or indicate no known problems for more than 80 percent of patients.
8. Record as structured data height, weight, and blood pressure for more than 50 percent of patients aged 2 and over.
9. Record as structured data smoking status for more than 50 percent of patients aged 13 and over.
10. Implement one clinical decision support rule relevant to specialty or high clinical priority.
11. Report clinical quality measures to the Centers for Medicare & Medicaid Services (CMS) or the states.
12. Provide an electronic copy of health information within 3 business days to more than 50 percent of all patients who requested that information.
13. Provide clinical summaries to patients within 3 business days for more than 50 percent of all office visits.
14. Perform at least one test of certified EHR technology’s capacity to electronically exchange key clinical information (i.e., problem list, medication list, medication allergies, or diagnostic test results).
15. Protect electronic health information created or maintained by the certified EHR technology by conducting or reviewing a security risk analysis, implementing security updates as necessary, and correcting identified security deficiencies.

Menu requirements (providers must meet 5 of the 10):
1. Perform medication reconciliation for more than 50 percent of all transitions of care.
2. Enable the EHR system’s ability to check a prescription against a formulary and maintain access to at least one internal or external drug formulary for the entire EHR reporting period.
3. Incorporate as structured data more than 40 percent of all clinical lab test results ordered.
4. Generate at least one list of patients by a specific condition.
5. Send reminders during the EHR reporting period for preventative or follow-up care to more than 20 percent of patients aged 65 and over or aged 5 and younger.
6. Provide electronic access to health information within 4 business days of being updated in the EHR system to more than 10 percent of patients.
7. Provide patient-specific education resources to more than 10 percent of all patients.
8. Provide a summary of care record for more than 50 percent of transitions of care and referrals.
9. Perform at least one test of certified EHR technology’s capacity to submit electronic data to immunization registries, with follow-up submission if the test is successful.
10. Perform at least one test of certified EHR technology’s capacity to provide electronic syndromic surveillance data to public health agencies, with follow-up submission if the test is successful.

Notes: Some of these requirements are electronic prescribing-related, and some are public health-related. Quality measures help quantify health care processes, outcomes, patient perceptions, and organizational structure; to meet the quality measure reporting requirement, providers must report on 6 of 44 clinical quality measures identified by CMS.

In addition to the contact name above, Robert Copeland, Assistant Director; Nick Bartine; George Bogart; Julianne Flowers; Krister Friday; Toni Harrison; Daniel Lee; Shannon Legeer; and Sarah Marshall made key contributions to this report.
Congress established two CMS-administered programs--the Electronic Prescribing Program and the Electronic Health Records (EHR) Program--that provide incentive payments to eligible Medicare providers who adopt and use health information technology, and penalties for those who do not. The Medicare Improvements for Patients and Providers Act of 2008 required GAO to report on the Electronic Prescribing Program. To do so, GAO examined how CMS determines which providers receive incentive payments and avoid penalties from that program and how many providers received incentive payments in 2009. Also, GAO was asked to examine how the requirements of the two programs compare. GAO reviewed relevant laws and regulations, interviewed CMS officials, and analyzed CMS data on incentive payments made for 2009, which were the most recent data available for a full year. CMS analyzes information reported by eligible providers on their Medicare Part B claims--which are used to submit charges for covered services--to determine which Medicare providers should receive Electronic Prescribing Program incentive payments or be subject to penalties. In 2009--the first year the program provided incentive payments--CMS paid approximately $148 million in incentive payments to about 8 percent of the approximately 600,000 Medicare providers who had an applicable patient visit--that is, supplied 1 of 33 CMS-designated services typically provided in the office or outpatient setting. For 2009, CMS examined Part B claims to determine whether, after each applicable patient visit, providers marked any one of three electronic prescribing reporting codes used to report information on the adoption and use of electronic prescribing systems. To receive an incentive payment that year, the provider had to report the codes for at least 50 percent of applicable patient visits, and at least 10 percent of the provider's total allowed Medicare Part B charges for the year had to be from the applicable patient visits. 
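The two-part 2009 payment rule described above can be sketched as a simple threshold check. The function and its inputs are illustrative simplifications; the actual determination was made by CMS from the Part B claims data.

```python
# Sketch of CMS's 2009 Electronic Prescribing Program incentive test.
# The field names and simplified inputs are illustrative, not CMS's
# actual claims-processing logic.

def qualifies_for_2009_incentive(visits_with_erx_code: int,
                                 applicable_visits: int,
                                 applicable_visit_charges: float,
                                 total_part_b_allowed: float) -> bool:
    """Return True if a provider meets both 2009 reporting thresholds."""
    if applicable_visits == 0 or total_part_b_allowed == 0:
        return False
    # Threshold 1: an eRx reporting code on at least 50% of applicable visits.
    reporting_rate = visits_with_erx_code / applicable_visits
    # Threshold 2: applicable visits account for at least 10% of the
    # provider's total allowed Medicare Part B charges for the year.
    charge_share = applicable_visit_charges / total_part_b_allowed
    return reporting_rate >= 0.50 and charge_share >= 0.10

# Example: 60 of 100 applicable visits carried a reporting code, and those
# visits accounted for $12,000 of $100,000 in allowed Part B charges.
print(qualifies_for_2009_incentive(60, 100, 12_000.0, 100_000.0))  # True
```

A provider failing either threshold (for example, reporting codes on only 40 of 100 applicable visits) would not receive a payment under this sketch.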
CMS made changes in the reporting requirements for 2010. For example, the agency reduced the number of reporting codes to one and required that individual providers report the code after at least 25 applicable visits, instead of for 50 percent of applicable visits. From 2012 through 2014, the Electronic Prescribing Program will assess penalties on providers that do not adopt and use electronic prescribing. Individual providers will have to submit the electronic prescribing reporting code at least 10 times in the first 6 months of 2011 to avoid penalties in 2012. Although GAO found similarities in the technology and reporting requirements for both programs, GAO also found that the requirements of the two programs are inconsistent in several areas. The EHR Program provides incentives from 2011 to 2016 and introduces penalties beginning in 2015, while the Electronic Prescribing Program provides incentives from 2009 to 2013 and provides for penalties from 2012 to 2014, when the program ends. Both the EHR and Electronic Prescribing Programs require providers to adopt and use technology that can perform similar electronic prescribing-related activities. However, the EHR Program requires providers to adopt and use certified EHR systems that meet criteria established by HHS, which include electronic prescribing-related capabilities, while the Electronic Prescribing Program does not have a certification requirement. As a result, providers have no assurance that the systems they invest in will meet the Electronic Prescribing Program's requirements. Additionally, the two programs have established separate reporting requirements related to electronic prescribing, potentially requiring physicians--the largest and only group of providers eligible to earn incentive payments in both programs--to report to both programs from 2011 through 2014. 
CMS recognizes that this duplication places additional burden on physicians; however, CMS is still in the process of developing a strategy to address this duplication. GAO is recommending that the CMS Administrator take four actions, including (1) encourage physicians and other providers in the Electronic Prescribing Program to adopt certified technology and (2) expedite efforts to remove the overlap in reporting requirements for physicians who may be eligible for incentive payments or subject to penalties under both programs. CMS generally agreed with three recommendations and disagreed with a fourth recommendation, which GAO clarified based on CMS's comments.
The DATA Act became law in May 2014 and holds considerable promise for shedding more light on how federal funds are spent. To improve the transparency and quality of the federal spending data made available to the public, the DATA Act directed OMB and Treasury to establish government-wide data standards that include common data elements for reporting financial and payment information by May 2015. Under the act, federal agencies must begin reporting financial spending data using these standards by May 2017 and publicly post spending data in a machine- readable format by May 2018. The DATA Act also requires that OMB, or an agency it designates, establish a pilot program to facilitate the development of recommendations to (1) standardize reporting elements across the federal government, (2) eliminate unnecessary duplication in financial reporting, and (3) reduce compliance costs for recipients of federal awards. The act established reporting requirements and timeframes for implementation of the pilot. See figure 1 for a timeline of these deadlines. The DATA Act also sets specific requirements related to the pilot’s design. First, the pilot must collect data during a 12-month reporting cycle. The pilot must also include a diverse group of recipients such as awardees receiving a range of awards as long as the total value of the awards falls within the statutory range. To the extent practicable, the pilot is to include recipients who receive federal awards from multiple programs across multiple agencies. Finally, the pilot must include a combination of federal contracts, grants, and subawards with an aggregate value between $1 billion and $2 billion. In addition, OMB must review the information recipients are required to report to identify common reporting elements across the federal government, unnecessary duplication in financial reporting, and unnecessarily burdensome reporting requirements for recipients of federal awards. 
This review is to be done in consultation with relevant federal agencies and recipients of federal awards, including state and local governments and institutions of higher education. A well-developed and documented pilot program can help ensure that agency assessments produce information needed to make effective program and policy decisions. Such a process enhances the quality, credibility, and usefulness of evaluations in addition to helping to ensure that time and resources are used effectively. We have identified five leading practices that, taken together, form a framework for effective pilot design. To identify these practices, we reviewed our prior work as well as academic literature related to the design of pilot and evaluation programs. By following these leading practices, agencies can promote a consistent and effective pilot design process. We shared these practices with OMB, HHS, and GSA staff, who found them to be reasonable and appropriate, and applicable to the Section 5 Pilot. 1. Establish well-defined, appropriate, clear, and measurable objectives. Such objectives should have specific statements of the accomplishments necessary to meet the objectives. Clear and measurable objectives can help ensure that appropriate evaluation data are collected from the outset of pilot implementation so that data will subsequently be available to measure performance against the objectives. Broad study objectives should be translated into specific, researchable questions that articulate what will be assessed. 2. Clearly articulate assessment methodology and data gathering strategy that addresses all components of the pilot program and includes key features of a sound plan. Key features of a clearly articulated methodology include a strategy for comparing the pilot implementation and results with other efforts, a clear plan that details the type and source of the data necessary to evaluate the pilot, and methods for data collection including the timing and frequency. 
3. Identify criteria or standards for identifying lessons about the pilot to inform decisions about scalability and whether, how, and when to integrate pilot activities into overall efforts. The purpose of a pilot is generally to inform a decision on whether and how to implement a new approach in a broader context. Therefore, it is critically important to consider how well the lessons learned from the pilot can be applied in other, broader settings. To assess scalability, criteria should relate to the similarity or comparability of the pilot to the range of circumstances and population expected in full implementation. The criteria or standards can be based on lessons from past experiences or other related efforts known to influence implementation and performance, as well as on literature reviews and stakeholder input, among other sources. The criteria and standards should be observable and measurable events, actions, or characteristics that provide evidence that the pilot objectives have been met. Choosing well-regarded criteria against which to make comparisons can lead to strong, defensible conclusions. 4. Develop a detailed data-analysis plan to track the pilot program’s implementation and performance, evaluate the final results of the project, and draw conclusions on whether, how, and when to integrate pilot activities into overall efforts. A detailed data-analysis plan identifies who will do the analysis as well as when and how data will be analyzed to measure the pilot program’s implementation and performance. The results will show the successes and challenges of the pilot and, in turn, how the pilot can be incorporated into broader efforts. Some elements of a detailed data-analysis plan include talking to users, managers, and developers; evaluating the lessons learned to improve procedures moving forward; and other appropriate measures. 5.
Ensure appropriate two-way stakeholder communication and input at all stages of the pilot project, including design, implementation, data gathering, and assessment. Failure to effectively engage with stakeholders, and to understand and address their views, can undermine or derail an initiative. To that end, it is critical that agencies identify who the relevant stakeholders are and communicate early and often to address their concerns and convey the initiative’s overarching benefits. OMB has established a Section 5 Pilot with two primary focus areas—one on federal grants and another on federal contracts (procurement). OMB’s Office of Federal Financial Management is responsible for the grants portion of the pilot and has designated the Department of Health and Human Services (HHS) to serve as its executing agent. On the contracting side, OMB’s OFPP is responsible for leading the procurement portion and is working with various entities, including 18F and the Chief Acquisition Officers Council (CAOC). Specifically, 18F is designing the system to be tested as part of the pilot. GSA’s Office of Government-wide Policy is responsible for providing Federal Register notices, and its Integrated Award Environment provides guidance and technical considerations. OMB launched a number of pilot-related initiatives in May 2015 and expects to continue activities until at least May 2017. As the executing agent for the grants portion of the pilot, HHS has developed six “test models” that will evaluate different approaches to potentially reducing grantee reporting burden.
These six models are the specific grants tools, forms, or processes that will be tested and analyzed under the pilot to determine if adopting these changes will actually contribute to the program’s objectives of reducing reporting burden, duplication, and compliance costs. Taken as a whole, the six test models examine a variety of grant reporting issues that HHS has identified as presenting challenges. HHS officials told us that they have received comments through the National Dialogue, a website for grant recipients and contractors to discuss issues including compliance costs, reporting burden, eliminating duplication, and standardizing processes. In addition, the officials obtained feedback on areas of concern from grantees involved in earlier HHS efforts to streamline grants reporting. They used that information to inform the development of the six test models. Officials from advocacy groups representing grant recipients and federal contractors told us that they initially expected the grants portion of the pilot to be an extension of the Grants Reporting Information Project (GRIP) proof of concept that was launched following the enactment of the American Recovery and Reinvestment Act of 2009 rather than the six test models. HHS officials told us they would have liked to more fully replicate the GRIP, however, that would have required broader participation from agencies than was available for the Section 5 Pilot. The following provides high-level summaries of each of the six test models. For additional details, see appendix II. HHS intends to assess whether an online and searchable repository for data standards will facilitate grant reporting. To do this, HHS developed the Common Data Element Repository (CDER) Library, which is intended to be an authorized source for data elements and definitions for use by the federal government and recipients reporting grant information. 
The CDER Library is also intended to encourage the use of common definitions for grants-related terms by nonfederal stakeholders and federal agencies. As of March 2016, the publicly-available version of the CDER Library contained 112 data elements from a variety of sources, including the Federal Acquisition Regulation (FAR), OMB Circular A-11, and the Uniform Grant Guidance. It also included several data elements standardized in accordance with DATA Act requirements. HHS has developed a version of the CDER Library, accessible only to federal agencies, that contains a much more detailed database of more than 9,000 elements. This federal-agency-only version of the CDER Library also identifies which grant reporting forms these data elements come from so that users can see how many forms require the same data element and which agencies request that information from grantees. HHS officials told us that they believe the CDER Library has the potential to be a powerful tool for streamlining definitions and forms. HHS intends to test whether it will be possible to use a consolidated Federal Financial Report (FFR) to allow grantees to submit multiple reporting forms into one system. The FFR, reported on the Standard Form 425, is used for reporting grants expenditures for the recipients of federal assistance. HHS believes that a consolidated FFR will allow participants to submit complete information once instead of through multiple entry points. A consolidated FFR could provide a single point of data entry, earlier validation of FFR data, and potential future streamlining of the grants close-out process. According to HHS officials, this test model is intended to be a continuation of the GRIP launched during the American Recovery and Reinvestment Act of 2009. The aim of that effort was to determine the feasibility of developing a centralized government-wide collection system for federal agencies and recipients of federal awards. 
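The form-mapping feature of the federal-agency-only CDER Library described above, which lets users see how many reporting forms request the same data element, can be illustrated with a short sketch. The form names and data elements below are hypothetical examples, not entries from the actual library.

```python
# Illustrative sketch of duplicate-element identification across grant
# reporting forms: index each data element by the forms that collect it,
# then list elements requested on more than one form.
from collections import defaultdict

def elements_on_multiple_forms(form_elements):
    """form_elements: {form_name: [data_element, ...]} -> {element: [forms]}."""
    element_to_forms = defaultdict(list)
    for form, elements in form_elements.items():
        for element in elements:
            element_to_forms[element].append(form)
    # Keep only elements that appear on two or more forms.
    return {e: sorted(f) for e, f in element_to_forms.items() if len(f) > 1}

forms = {
    "SF-425": ["recipient_name", "federal_cash_receipts"],
    "SF-424": ["recipient_name", "project_title"],
}
print(elements_on_multiple_forms(forms))  # {'recipient_name': ['SF-424', 'SF-425']}
```

An index like this is one way to surface where multiple agencies or forms request the same information from grantees, which is the kind of duplication the pilot aims to reduce.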
HHS is examining ways to reduce duplicate and redundant information contained in Single Audit forms. The Single Audit Act requires states, local governments, and nonprofit organizations expending $750,000 or more in federal awards in a year to obtain an audit in accordance with the requirements set forth in the act. HHS intends to test whether some grant forms related to the single audit could be combined. HHS plans to examine whether a consolidated Notice of Award coversheet might reduce reporting burden by allowing grant recipients to locate required reporting data in one place, rather than attempting to find information on coversheets that differ by agency. HHS added a new section to the Grants.gov website, called Learn Grants, intended to make it easier for stakeholders to find, learn about, and apply for federal grants. The Learn Grants website provides links to grant policies, processes, funding, and other grant lifecycle information. HHS officials said they want to use this test model to determine whether the Learn Grants site could effectively engage stakeholders and provide training early in the grants lifecycle process that, in turn, would have a positive effect on recipient compliance during post-award activities. The procurement portion of the pilot will be focused on examining the feasibility of centralizing the reporting of certain required information. Depending on the contract, there may be many types of information contractors must report. OFPP staff told us the pilot will initially focus on the reporting of certified payroll. This is one specific FAR requirement only applicable to contracts for construction within the United States. Specifically, OFPP has identified opportunities to improve upon the current unstandardized reporting format under which some employers report data electronically while others use manual paper processes. 
Further, OFPP intends to identify which data elements would be included in reporting, the method of data transmission, and other related details. This narrow approach stands in contrast to the grants portion of the pilot where HHS has a broader, more comprehensive plan to explore several areas where grantee reporting burden might be reduced. OFPP staff explained that its decision to focus on certified payroll reporting arose out of feedback from the procurement community. They also noted that the Section 5 Pilot is one of a number of government-wide initiatives to reduce contractor burden and streamline procurement processes, such as GSA’s Integrated Award Environment initiative to integrate acquisition systems into one streamlined environment. To better understand the issue of certified payroll reporting and its potential suitability as a subject for the procurement portion of the Section 5 Pilot, the CAOC engaged GSA’s 18F through an interagency agreement to interview contractors, contracting officers, business owners, government employees, and subject-matter experts (SME). As a result of that effort, 18F identified major categories of burdens and constraints related to certified payroll reporting and potential recommendations on how to address them. OFPP staff said they once again worked with 18F in winter 2016 to gather requirements for building a prototype system to centralize the reporting of certified payroll data. The 18F staff we spoke with noted that they will build a prototype to explore potential solutions for reducing contractor burden through user research and testing. OFPP staff will develop and evaluate metrics for the pilot. OFPP intends to test the system in summer 2016. In May 2015, OMB, CAOC, GSA, and HHS launched the National Dialogue, a website for grant recipients and federal contractors to discuss issues including compliance costs, reporting burden, eliminating duplication, and standardizing processes. 
OMB staff told us that they used the National Dialogue as a feedback mechanism for the grants and procurement portions of the pilot. This was one of the first publicly announced pilot-related activities. The website will accept comments through May 2017. OMB and GSA staff told us that they plan to actively review and address the input they receive. The website is intended to be a useful tool for obtaining information about issues of concern to the grants and procurement communities. Discussions related to grantee reporting have been significantly more active than those focused on procurement. Although the comments vary widely in topic, there are a number of substantive suggestions for how grantee reporting burdens can be reduced. While HHS officials told us that the dialogue was intentionally designed so that feedback would be submitted anonymously, some commenters have self-identified the institution they represent, including the Council on Governmental Relations, Association of American Universities, Association of Public and Land-grant Universities, and Coalition for Government Procurement. If HHS effectively implements its stated plans for the grants portion of the Section 5 Pilot, it is likely that the grants portion of the pilot will comply with the act. The act’s requirements call for the grants portion’s design to include the following: DATA Act Requirement 1: Collect data during a 12-month reporting cycle. HHS’s November 2015 design documentation shows that it will begin collecting data for these six test models by May 2016. This would allow for data to be collected on these test models during a 12-month reporting cycle before May 2017, when the pilot is required to terminate. We believe these timeframes should provide sufficient time for HHS to incorporate public comments by May 2016 and allow for a full 12-month data collection cycle.
DATA Act Requirement 2: Include a diverse group of federal award recipients and, to the extent practicable, recipients who receive federal awards from multiple programs across multiple agencies. HHS officials told us that they have developed a detailed plan to select participants, which will include state and local governments, universities, and other types of grant recipients. HHS officials explained that the grants portion of the pilot will include recipients who received a range of federal funding amounts and will not be limited to one agency or grant program. HHS officials initially told us that they could not provide us with the revised plan because it was still under review by OMB. We did receive a copy of the revised plan at the end of March 2016, but because of the timing we were unable to fully review it in time for the release of this report. We will provide our assessment of the plan as part of future work as we continue to monitor the design and implementation of the Section 5 Pilot. DATA Act Requirement 3: Include a combination of federal contracts, grants, and subawards, with an aggregate value of not less than $1 billion but not more than $2 billion. HHS officials told us that they are still determining how to meet the requirement for total award value because they want to ensure the pool of pilot participants is as diverse and large as possible while still being legally compliant. Specifically, one of their selection considerations is the award value of grants received by awardees. Further, HHS officials have explored strategies to ensure that they do not exceed the maximum dollar amount threshold. HHS officials told us that they expect to make decisions related to how to meet this requirement in early 2016. We have concerns about the extent to which the design of the procurement portion of the pilot reflects the requirements specified in the DATA Act.
OFPP’s plans to address the statutory design requirements discussed below reflect the status of the procurement portion of the pilot as described by OFPP staff and in related documents we reviewed. DATA Act Requirement 1: Collect data during a 12-month reporting cycle. The design of the procurement portion of the pilot is at risk of not including data collected during a 12-month reporting cycle in a meaningful way. To meet this requirement, OFPP and GSA would need to begin collecting data no later than May 9, 2016. When we spoke with OFPP staff, they stated that by launching the National Dialogue in May 2015, they believe they will have met the act’s requirement that data collection take place during a 12-month reporting cycle. Further, staff considered comments received from other efforts, including the Open Dialogue on Improving How to Do Business with the Federal Government conducted in 2014, to meet this requirement. However, neither of these dialogues included comments that specifically mentioned the issue of certified payroll. As a result, we do not believe those comments provide meaningful and relevant data on the effectiveness of a centralized portal for certified payroll reporting. As a result of design and development delays, OFPP will not be able to collect meaningful and useful data for the procurement portion of the pilot until summer 2016, when it expects to complete the development of a centralized portal through which participants will submit certified payroll data. OFPP started exploring ways to streamline certified payroll reporting in spring 2015. OFPP said that due to staffing challenges, work on designing a prototype for a system to be tested under the pilot did not begin until late February 2016. At that time, the CAOC signed an agreement with GSA’s 18F to begin what it expected to be a 10-week design period. Cognizant staff expect this design work to take place between March and May 2016.
However, a contractor cannot begin building an actual “production” version of the system to be tested under the pilot until 18F designs the prototype, which is expected to be completed by the beginning of May 2016. Therefore, this leaves at most a few weeks to develop the centralized reporting portal before May 9, 2016, the date by which the pilot must begin for meaningful and useful data to be collected over a full 12-month period. OFPP staff told us that they do not intend to begin testing a centralized reporting portal until late summer 2016. According to OFPP and GSA staff, they were faced with delays due to bid protests related to the contracting mechanism GSA intends to use to select a contractor to build the portal to be tested under the pilot. However, as of March 2016, these bid protests have been resolved and no longer present a barrier to awarding the contract. While we agree that these protests could pose a barrier to awarding the contract to develop the testing portal, we do not believe that OFPP needed to wait until they were resolved before moving forward with 18F’s development of a prototype for the portal. Given the resolution of these bid protests, OFPP staff said that they are working with 18F to assess the feasibility of expediting project timelines to launch the prototype sooner than expected so that they could potentially collect 10 months of data through the certified payroll reporting portal. Given the weekly or bi-weekly reporting of certified payroll, this approach may result in a sufficient amount of meaningful and useful data on which OFPP can base conclusions related to its hypothesis. However, it is important that OFPP clearly convey and document its rationale for how its approach will contribute to the collection of meaningful and useful data consistent with the timeframes established under the act.
DATA Act Requirement 2: Include a diverse group of federal award recipients and, to the extent practicable, recipients who receive federal awards from multiple programs across multiple agencies. OFPP and GSA do not yet have a detailed plan for selecting participants that will result in a diverse group of recipients with awards from multiple programs and agencies. However, there is some documentation related to OFPP’s approach for selecting participants in the project plan and in a Federal Register notice issued on November 24, 2015. For example, the draft plan identifies the Federal Procurement Data System-Next Generation as the mechanism that will be used for identifying which contracts and contractors to include in the pilot. OFPP staff also told us that they intend to cover both large and small industries. While these documents provide valuable information, they do not clearly convey how the procurement portion of the pilot would specifically contribute to meeting the act’s requirement regarding diversity of participants. OFPP staff told us that for the purposes of meeting the pilot requirements they consider any individual or group that provided information to the National Dialogue to be a participant in the pilot. However, as previously mentioned, individuals and groups that have commented on the National Dialogue did not provide any comments related to certified payroll. Therefore, it is unclear how they could be considered pilot participants. Additionally, OFPP staff were unable to tell us how they plan to count commenters that are not contract awardees, but instead are organizations representing groups of federal contractors. It is unclear how OFPP can ensure the universe of commenters is diverse because it does not control who comments on the dialogue. OFPP staff stated that they also intend to select participants for testing their prototype system using a nongeneralizable sample of contractor data reported through the Federal Procurement Data System-Next Generation.
However, they did not provide us with specific information on how they would ensure that the sample met all requirements under the act, nor did they provide a detailed, documented sampling plan comparable to the one developed for the grants portion of the pilot. As a result, it will be important for OFPP to clearly document its rationale for how its approach will allow for the inclusion of a diverse group of federal contractors, as required by the act. DATA Act Requirement 3: Include a combination of federal contracts, grants, and subawards, with an aggregate value of not less than $1 billion but not more than $2 billion. OFPP staff told us OMB could meet this dollar range requirement through the grants and procurement portions of the pilot collectively. Under such an approach, it would be important for each portion of the pilot to know how much it is contributing to meet the required award range. Our understanding of the grants portion of the pilot suggests that it has a plan for doing this. Less apparent are the specifics of how the procurement portion of the pilot would do so. We assessed the designs of the grants and procurement portions of the pilot against leading practices that we identified from our prior work and other sources. Consistent with our constructive engagement approach for working with agencies implementing the DATA Act, we shared the results of our analysis with HHS and OFPP staff, who told us that they will consider our input as they continue to update and revise their plans. HHS’s November 2015 design for the grants portion of the pilot generally applied leading practices. As noted above, while we have received a revised plan for the design of the grants portion, we were unable to fully review it in time for the release of this report. We will provide our assessment of that plan in a forthcoming review that will focus on the pilot’s implementation.
DATA Act Grants Test Models
Under the Office of Management and Budget’s (OMB) direction, the Department of Health and Human Services (HHS) intends to develop recommendations for reducing grantee reporting burden by testing different areas. HHS will develop and test: (1) an online repository for data elements and definitions, called the Common Data Element Repository (CDER) Library, that is intended to be an authoritative source for those elements and definitions; and (2) a federal agency-only version of the CDER Library containing more than 9,000 grants data elements that identifies which specific grant forms these data elements come from, so that users can see how many forms require the same data element and which agencies request that information. Leading Practice 1: Establish Well-Defined, Appropriate, Clear, and Measurable Objectives. Each of the six grants test models at least partially met the leading practice that pilots have well-defined, appropriate, clear, and measurable objectives. For example, one of the Single Audit test models has the clearly defined objective of testing whether two forms containing duplicative information can be combined to reduce recipient reporting burden. This objective is measurable and appropriately linked to the purposes of the Section 5 Pilot overall, which include eliminating unnecessary duplication in financial reporting and reducing compliance costs for recipients of federal awards. In another example, one of the CDER Library test models has a clearly established objective of determining whether access to an authoritative source for common data element definitions would help grant recipients complete necessary forms accurately and in a timely manner. The CDER Library test model also identifies specific metrics that would allow HHS to measure whether it is able to achieve the stated objectives.
In our initial review of these test models, we provided feedback to HHS that the other CDER Library test model did not have a clear, fully established objective. In response, HHS officials explained that the objective of that test model is to compare data elements and forms used across the federal government with the goal of consolidating these forms and ultimately passing on reporting efficiencies to grant recipients. Leading Practice 2: Clearly Articulate an Assessment Methodology. Five of the six test models did not clearly articulate an assessment methodology. In contrast, for the Learn Grants test model, HHS described how it planned to use webinars, conference presentations, and other events to increase awareness inside and outside of government about the grants-related resources available on Grants.gov. The plan also includes a detailed timeline for executing the test model, as well as HHS’s methodology for conducting pre- and post-tests of pilot participants. HHS officials told us that they worked with a federal SME who had previously worked on Grants.gov to help develop and refine the assessment methodology. The remaining five test models have less clearly articulated assessment methodologies. For example, for the consolidated FFR test model, HHS said it will survey grant recipients on their experiences when submitting their reports through one system rather than through multiple entry points, but we found that the plans lacked detail about how the surveys will be designed and administered. In addition, the plan did not provide specific information about the participants HHS intends to survey, nor did it provide details regarding how HHS will compare survey results for recipients in the pilot versus those not participating in the pilot. In meetings with senior HHS officials, we raised these and similar concerns about the Notice of Award test model and one of the CDER Library models.
For the other CDER Library test model, we found that HHS’s plans did not identify the data sources or metrics that would be used in the assessment methodology. In those feedback meetings, HHS officials said many of the concerns have been addressed in the revised plan. Leading Practice 3: Ensure Scalability of Pilot Design. HHS documented an overall structure for how each test model is integrated into the overall grants portion of the pilot. However, the documented design lacks specific details about how HHS intends to evaluate the performance of each test model to inform decisions about scalability. Specifically, five of the six test models include few or no specifics about how any observed reduction in burden could be generalizable beyond the context of the pilot. For example, HHS’s plan for the consolidated FFR test model indicates that it will be tested using grantees who receive awards from the Administration for Children and Families (ACF), a subunit of HHS. However, the plan does not specify how ACF will select participants or how results from ACF grant recipients can be applied government-wide. HHS officials told us that ACF has a list of potential participants. Given the size and complexity of ACF’s grant recipients, the officials believed that these participants would provide a good basis for scalability should the FFR test model prove to be successful. According to HHS officials, they have developed a comprehensive sampling plan for selecting participants for each of the six test models. They will reach out to selected participants to begin data collection in May 2016. We recently received the draft sampling plan and will provide our assessment of it in our forthcoming review on the implementation of the Section 5 Pilot. Leading Practice 4: Develop a Plan to Evaluate Pilot Results. The design for five of HHS’s six test models provides some level of detail on how HHS plans to evaluate pilot results.
For instance, HHS’s Learn Grants test model provides a description of a methodology to measure knowledge about the grants lifecycle. It will compare a group of recipients that has access to certain grant resources contained in a public online portal with another group of recipients that does not. HHS’s plans indicate that the results from both tests will be analyzed to evaluate knowledge gained by participants and to draw conclusions about the effectiveness of the Learn Grants tab on the Grants.gov website. However, the documented pilot design lacks specific detail on how HHS plans to analyze the data it gathers and how it will draw conclusions about integrating the pilot activities into overall grant reporting efforts. For example, both CDER Library test models reference an analysis plan for evaluating whether burden has been reduced. However, the plans do not indicate how HHS would determine whether a particular time threshold represents a true reduction in burden, or whether that burden is measured in minutes, hours, or some other unit of analysis. Similarly, the Single Audit and Notice of Award test models indicate that HHS will use results from surveys and focus groups, including documenting benefits and challenges raised by participants; yet HHS’s plans for these two test models do not specify how HHS will compile these results and distill them into actionable recommendations. HHS officials told us that their revised planning documents are to include this additional level of detail to address our concerns. Leading Practice 5: Ensure Appropriate Two-Way Stakeholder Communication. HHS has engaged in two-way stakeholder communications for all six of its test models. It also has taken a number of actions to obtain input from grant recipients, including posting questions on the National Dialogue to solicit feedback on how to ease grantee reporting burden.
Further, HHS has been involved in a number of outreach activities, including presentations at conferences, town hall events, and webinars, to identify areas of reporting burden and duplication and to collect ideas to streamline reporting. HHS also used these forums to provide updates on the progress of the design and specific information on the six test models. HHS supplemented input received through the National Dialogue with feedback from SMEs to help design the test models. An HHS official told us they identified SMEs based on their experience working with federal grants, grant recipients, and the systems being tested. HHS officials provided several examples of how they engaged in two-way communication with stakeholders when developing their test models. For example, HHS consulted with a federal official who used to work for Grants.gov to help develop the Learn Grants test model and the pre- and post-test evaluations associated with it. For the FFR test model, HHS consulted with officials who work in ACF and the Payment Management System. HHS also worked with other SMEs from across the federal government to develop other test models. According to an HHS official, SMEs were asked to critically assess the methodology for each of the models with the intent of making each model more effective. More recently, in January 2016, HHS pre-tested proposed Section 5 Pilot test models with advocacy groups representing the grants recipient community, including state and local governments as well as research universities, and obtained feedback on ways to improve the models. Also included were representatives from the auditing and software development industries. HHS officials told us that they have made significant revisions to their documented design in response to the pre-tests and feedback. However, HHS has additional opportunities to foster two-way dialogue with recipients of federal funds.
Officials from advocacy groups representing federal funding recipients told us that they are still waiting for information about how their membership can be more engaged in the pilot process. For example, an official from the National Association of State Auditors, Comptrollers, and Treasurers told us that following a November 2015 webinar on the Section 5 Pilot hosted for their membership by the Association of Government Accountants, they collected the names of more than 20 state and local government representatives who were interested in participating in the grants portion of the pilot. This official said the names were given to HHS, but the association has not received any information on how these volunteers can participate in the pilot. HHS officials said that once they receive OMB approval of their sampling methodology for selecting participants, they will be able to reach out to those who expressed interest in being a part of the pilot. We provided our assessment of the design of the grants portion of the pilot to HHS officials, who told us that they generally concurred with our analysis and had updated their plan to address many of these concerns. As noted above, we received the updated plan too late to review it for this report. For details of our assessment of the design of the six grants test models, see appendix II. Based on our review of the working draft plan for the procurement portion of the pilot dated November 2015, related documents, and interviews with cognizant staff, we found that the design did not reflect leading practices for pilot design. Further, while the plan included some information regarding responsibilities of stakeholders involved in the procurement portion of the pilot, specific roles and deliverables were not clearly described for all phases of the pilot.
For example, the written draft plan listed broad areas of responsibility, such as “manage funding” or “Federal Register Notice,” but did not detail what stakeholders would be working on related to those activities. OFPP staff described additional actions to supplement the information contained in the draft plan. This information included their decision to initially focus the design of the procurement pilot on testing the feasibility of centralizing certified payroll reporting by contractors subject to the Davis-Bacon and related acts because of public feedback on the need to reduce duplicative reporting. However, even after taking this additional information into account, we found that the design was neither well-developed nor documented in accordance with leading practices to allow for the development of effective recommendations to simplify reporting for contractors, as described below. Leading Practice 1: Establish Well-Defined, Appropriate, Clear, and Measurable Objectives. The working draft plan provided by OFPP does not include specifics pertaining to the proposed focus on certified payroll reporting. OFPP staff told us that they believe submitting certified payroll information through a centralized portal would reduce contractor reporting burden. They explained that this topic was selected because they learned that it was a particular pain point for contractors as a result of various outreach efforts, including 18F’s discovery process. The draft plan also does not provide specifics regarding the particular objectives and hypothesis that will be tested by the pilot. OFPP staff stated that, consistent with their view of agile practices, they intend to further refine their approach as 18F develops its prototype and additional work proceeds with the pilot. Leading Practice 2: Clearly Articulate an Assessment Methodology. The draft plan we reviewed did not include detailed information on the methodology, strategy, or types of data planned to be collected.
The draft plan referenced an information-gathering effort conducted by GSA’s 18F to discover challenges and develop recommendations for burden reduction. However, OFPP staff could not provide any evidence that this effort resulted in specific methodologies or data-collection strategies related to centralizing certified payroll reporting. According to 18F staff, a second phase of the procurement portion of the pilot will begin in March 2016. OFPP staff said that during this phase, 18F will research, design, and test a prototype that will become a basis for the centralized portal that will be tested under the pilot. This prototype will be vetted in workshops with stakeholders, who will test, among other things, the metrics, functionality, and accessibility of the prototype and identify any needed changes. 18F expects the second phase to be completed by May 2016, after which OFPP will begin the third phase of the pilot later in the summer. In that phase, a contractor will develop a centralized portal based on 18F’s design that could be used to test the submission and review of certified payroll data. Additionally, OFPP staff told us that they intended to collect data in accordance with FAR requirements and would compare the information collected in the portal with that being submitted through other methods. However, OFPP was not able to provide specific details on its pilot methodology, such as how it intends to compare results of contractors that use the prototype and those that do not, identify the type and source of data necessary to evaluate the pilot, and establish the timing and frequency of the data to be collected. Without these details, the procurement methodology design does not address all components of a pilot program, nor does it include key design features that would meet leading practices. Leading Practice 3: Ensure Scalability of Pilot Design.
The draft design of the procurement portion of the pilot that we reviewed did not address the issue of scalability or efforts to ensure that conclusions and recommendations resulting from the pilot could be applied government-wide. However, OFPP staff indicated that they plan to develop a sampling approach that will allow them to collect data from a population that is representative of federal contractors. Specifically, they said that they will select a diverse group of participants by potentially pulling data from the Federal Procurement Data System-Next Generation. Using that database, they expect to be able to select a range of small and large contractors that are required to report certified payroll under the Davis-Bacon and related acts. However, without documentation providing details of a sampling methodology, measures, and a data analysis plan, the design cannot ensure the scalability of the results or findings from the pilot. Leading Practice 4: Develop a Plan to Evaluate Pilot Results. The draft procurement plan does not indicate how data will be evaluated to track program performance, how final results will be evaluated, or how conclusions will be drawn. OFPP staff told us that although they believe it is early in the process to have finalized evaluation plans, they are considering a number of options for evaluating whether a centralized certified payroll portal would cost more or less than current reporting approaches. Specifically, they said that they expect to have some quantifiable data to allow for straightforward analysis and will evaluate the qualitative data from the certified payroll portal as well as the National Dialogue. However, the absence of a detailed data analysis plan suggests that OFPP lacks a sound approach to evaluate pilot results. Leading Practice 5: Ensure Appropriate Two-Way Stakeholder Communication.
OFPP has not yet developed plans for obtaining stakeholder input and fostering two-way dialogue to engage the public and gather feedback on its approach for designing and implementing the procurement portion of the pilot. Similar to the approach taken by HHS, OFPP staff told us that they used comments posted on the National Dialogue to inform the design of the procurement portion. However, as previously mentioned, we have concerns about the usefulness of that approach because none of the three comments they received on the dialogue were related to certified payroll. OFPP staff said they also used comments posted on the 2014 open dialogue on improving procurement processes to inform their pilot design. From commentary posted on both sites, OFPP identified certified payroll reporting as a pain point that could be further explored through the pilot project. OFPP staff told us that they engaged GSA’s 18F to conduct the discovery phase of the pilot design to better understand areas of significant reporting burden related to certified payroll with a select group of stakeholders that included contractors, federal agency officials, and contracting officers. A Federal Register notice was also issued on November 24, 2015, to solicit public comments on the reporting burden of the procurement portion of the pilot under the Paperwork Reduction Act. Although OFPP obtained stakeholder input to identify areas of focus for the design of the procurement portion of the pilot, it has not engaged stakeholders to solicit input on other stages of the pilot, including design, implementation, data gathering, and assessment. Further, OFPP has not released specific information about the design of the pilot, nor has it made information about pilot participation available to stakeholders, despite repeated requests for information from those participating in monthly calls hosted by the Association of Government Accountants and Treasury.
Stakeholder communication is a leading practice for pilot design; moreover, our previous work examining grants management streamlining initiatives found that such communication is not just “pushing the message out,” but should also facilitate a two-way, honest exchange and allow for feedback from relevant stakeholders. We found that a lack of opportunities to provide timely feedback resulted in poor implementation and prioritization of streamlining initiatives and limited recipients’ use and understanding of new systems. As such, it will be important for OFPP to engage with the procurement community on its pilot design so that it can be improved based on public input. In addition, more effective two-way communication could also be a strategy for recruiting participants for the procurement portion of the pilot. In crafting the DATA Act, Congress sought to reduce the burden and cost of reporting for recipients of federal funds. Toward that end, OMB, partnering with other federal agencies, has taken steps to design the Section 5 Pilot, which will explore potential ways to reduce the burden and cost of reporting on federal funds for both the federal grantee and procurement communities. However, we found uneven progress in the grants and procurement portions of the pilot. OMB and HHS have made considerable progress designing an overall approach that will examine a variety of potential ways to simplify reporting for grant recipients. In addition to generally being on track to meet the specific requirements set out in the act, we found that the proposed design of the grants portion of the pilot partially adheres to leading practices. In contrast, our review of the design of the procurement portion of the pilot raises several concerns. In the absence of a detailed design and risk management plans for executing the pilot moving forward, it is unclear how the design of the procurement portion will reflect the requirements set forth by section 5 of the act.
Because of project delays to date, it will be especially important for OMB to communicate to Congress and interested stakeholders how it plans to address key aspects of these requirements, such as the collection of meaningful and useful data over a 12-month reporting cycle and including a diverse group of participants with federal contracts totaling from $1 billion to $2 billion. Moreover, the design we reviewed for the procurement portion of the pilot did not reflect leading practices to allow for the development of effective recommendations to simplify reporting for contractors. Moving forward, given the tight timelines set out in the act, it will be important for OMB to redouble its focus on the design and implementation of the procurement portion. Without a sound design that applies leading practices, the recommendations to Congress for reducing reporting burden for contractors coming out of this effort may be late, of limited use, or incomplete. 1. To help ensure and more clearly convey how the procurement portion of the pilot will contribute to meeting the Section 5 Pilot design requirements, we recommend that the Director of OMB determine and clearly document (1) how it will collect certified payroll data over a 12-month reporting cycle, (2) how it will ensure the diversity of pilot participants, and (3) how the inclusion of federal contracts will contribute to an aggregate amount of $1 billion to $2 billion. 2. To enable the development of effective recommendations for reducing reporting burden for contractors, the Director of OMB should ensure that the procurement portion of the pilot reflects leading practices for pilot design. We provided a draft of this report to OMB, HHS, and GSA for review and comment. OMB and HHS provided technical comments that we have incorporated throughout the report, as appropriate. OMB and HHS did not offer a view on our recommendations. GSA did not have any comments. 
We are sending copies of this report to the Director of OMB, Secretary of HHS, Administrator of GSA, and appropriate congressional addressees. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-6806 or by email at sagerm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This review (1) describes the administration’s approach to the Section 5 Pilot; (2) assesses whether current activities and plans will likely allow the Office of Management and Budget (OMB) and its partners to meet requirements and time frames established under the Section 5 Pilot; and (3) evaluates the extent to which the design for the pilot is consistent with leading practices. To describe the administration’s approach to the pilot, we assessed documents related to pilot activities and interviewed OMB, Department of Health and Human Services (HHS), and General Services Administration (GSA) officials and staff responsible for implementing the Section 5 Pilot. Specifically, we reviewed documentation from HHS and OMB’s Office of Federal Procurement Policy (OFPP). Our reviews were based on the latest design plans available at the time. We also interviewed officials from organizations representing key non-federal stakeholders including state and local governments, private-sector contractors, and other federal fund recipients. To assess whether the Section 5 Pilot design would be likely to meet the statutory design requirements, we reviewed section 5 of the Federal Funding Accountability and Transparency Act of 2006, as added by the Digital Accountability and Transparency Act of 2014 (DATA Act) to understand the deadlines and design requirements. 
We reviewed the draft design documents to assess OMB and its partners’ plans for meeting these requirements. To supplement our review of those plans, we also spoke with cognizant staff implementing these pilots at OMB, HHS, and GSA. To identify and analyze leading practices for pilot design, we reviewed our past work evaluating and assessing pilots. Additionally, we also relied on our technical guidance on designing evaluations. Further, we reviewed relevant studies from academia as well as other entities, such as the Brookings Institution and the Federal Demonstration Partnership. We reviewed reports from organizations that have expertise on conducting pilot programs and experience in scaling pilot results that could be applied government-wide. We also shared these leading practices with the agencies in this review during our audit work. To assess the extent to which the Section 5 Pilot design adhered to these leading practices, we reviewed documented designs and plans for both the grants and procurement portions of the pilot. To evaluate the grants portion of the pilot, we focused on a draft design document from November 2015. HHS officials told us that they have updated that plan. Because we did not receive this update until the end of March 2016, we did not have time to include its content for this report. As such, our assessment is based on the November 2015 plan. We intend to review the updated plan as we continue our work on DATA Act implementation. We have supplemented our assessment with information HHS officials provided to us during subsequent interviews, as appropriate. For the procurement portion, we reviewed a working draft plan from November 2015. While it is unclear whether there has been an updated version, we have also provided additional details from discussions with OFPP officials, as appropriate. To evaluate the grants and procurement portions of the pilot, we applied the five leading practices we identified to OMB and HHS’s design documents. 
Each of those assessments was subsequently verified by another individual. We determined that the design met the criteria when we saw evidence that all aspects of a leading practice were met. When we were unable to assess whether all aspects of a leading practice were met without additional information, we determined that the design partially met the criteria. Finally, when we saw no evidence of a leading practice, we determined that the criteria were not met. In continuation of our constructive engagement approach on the DATA Act for working with agencies implementing the act, we provided HHS and OMB with feedback on the design of the grants and procurement portions of the pilot during our review. These officials generally accepted our feedback and, in some instances, noted that they had made or would make changes to their design as a result of our input. We conducted this performance audit from May 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides detailed information regarding our assessment of the pilot design for the grants portion of the Section 5 Pilot. We assessed each of the Department of Health and Human Services’ (HHS) six test models against the five leading practices for pilot design described in the report. Using HHS’s November 2015 design plans and relevant supporting information available during the preparation of this report, we determined whether each test model met, partially met, or did not meet those leading practices. In addition to the contact named above, J. 
Christopher Mihm (Managing Director), Peter Del Toro (Assistant Director), Shirley Hwang (analyst-in-charge), Aaron Colsher, Kathleen Drennan, Jason Lyuke, Kiran Sreepada, and David Watsula made major contributions to this report. Other key contributors include Lisette Baylor, Brandon Booth, Jenny Chanley, Robert Gebhart, Donna Miller, Carl Ramirez, Andrew J. Stephens, and Tatiana Winger. Additional members of GAO’s DATA Act Internal Working Group also contributed to the development of this report.
The DATA Act directs OMB or a designated federal agency to establish a pilot program to develop recommendations for simplifying federal award reporting for grants and contracts. The grants portion will test six ways to reduce recipient reporting burden while the procurement portion will initially focus on centralizing contractor reporting of certified payroll. The act requires GAO to review DATA Act implementation as it proceeds. This report (1) describes OMB's approach to the DATA Act pilot requirements, (2) assesses whether current plans and activities will likely allow OMB and its partners to meet the requirements under the act, and (3) evaluates the extent to which designs for the grants and procurement portions of the pilot are consistent with leading practices. GAO reviewed available pilot documentation; assessed them against leading practices for pilot design; and interviewed staff at OMB, HHS, and GSA, as well as groups representing recipients of federal grants and contracts. GAO will conduct a follow-on review focused on OMB's implementation of its pilot designs. As required by the Digital Accountability and Transparency Act of 2014 (DATA Act), the Office of Management and Budget (OMB) is conducting a pilot program, known as the Section 5 Pilot, aimed at developing recommendations for reducing recipient reporting burden for grantees and contractors. OMB partnered with the Department of Health and Human Services (HHS) to design and implement the grants portion of the pilot, and with the General Services Administration (GSA) to implement the procurement portion. OMB launched the Section 5 Pilot in May 2015 and expects to continue pilot-related activities until at least May 2017. If implemented according to HHS's proposed plan, the grants portion of the pilot will likely meet the requirements established under the act. In contrast, GAO has concerns with how the procurement portion of the pilot will contribute to the Section 5 Pilot's design requirements. 
For example, OMB has not fully described how it will select pilot participants that will result in a diverse group of contractors as required by the act. OMB staff stated that they intend to select participants for testing the procurement pilot by using a nongeneralizable sample of contractor data, but they have not provided a detailed, documented sampling plan. The design of the grants portion of the pilot partially adhered to leading practices. Although five out of the six grants test models had clear and measurable objectives, only one had specific details about how potential findings could be scaled to be generalizable beyond the context of the pilot. HHS officials said they have updated their plan to address these concerns but that plan was not provided in time to allow GAO to analyze it for this review. The design of the procurement portion of the pilot did not reflect leading practices. For example, the plan did not include specific information on the methodology, strategy, or types of data to be collected. Further, the plan we reviewed did not address the extent to which the proposed pilot approach would be scalable to produce recommendations that could be applied government-wide. The design also did not indicate how data will be evaluated to draw conclusions. Finally, while OMB has solicited general comments related to contractor reporting pain points, it has not released specific details on the design to stakeholders despite their repeated requests for that information. GAO recommends that OMB (1) clearly document how the procurement portion of the pilot will contribute to the design requirements under the DATA Act and (2) ensure that the design of the procurement portion of the pilot reflects leading practices. OMB, HHS, and GSA did not comment on our recommendations. GAO incorporated technical comments from OMB and HHS where appropriate.
Section 6041A of the Internal Revenue Code requires any service recipient, including federal agencies, to file an annual information return with IRS for payments made to any person for services totaling $600 or more during a calendar year. Payments to corporations for certain services provided must also be reported, such as attorneys’ fees and medical and health care payments. In addition, federal executive agencies must report all payments for services provided by vendors, including payments made to corporations. Specific information required on the annual information return—an IRS Form 1099 MISC—includes the name, address, and TIN of both the payer and payee, as well as the total amounts paid during the year for the various types of services provided. The purpose of the Form 1099 MISC filing requirement is to enable IRS to identify taxpayers who fail to file an income tax return as well as those who fail to report all of their income on their tax return for the related year. IRS enters Form 1099 MISC information in both a Payer Master File (PMF) and an Information Returns Master File (IRMF). The PMF is a database that includes all entities that make payments subject to information return reporting. The PMF includes general information on the total number and dollar value of information returns, including Forms 1099 MISC, filed by each payer for each year. The IRMF is a database that includes specific information on the type and amount of payments made to each payee, including whether the payee TIN was valid upon receipt of the information return and if the TIN was invalid, whether it was subsequently corrected by IRS. Both the PMF and IRMF include the payer’s TIN. Upon receipt of a Form 1099 MISC, basic information is entered into a temporary IRS database. IRS compares the payee TIN/name combination with TIN/name combinations in its records to determine if there is a match. 
If there is a match, the information is entered in the IRMF without the need for additional action. If there is not a match, IRS will try to validate the TIN/name combination via a TIN “validation” process, which entails matching the TIN and name control—the first four characters of an individual’s last name or the first four characters of a business name—on the Form 1099 MISC with (1) a file which contains all social security numbers ever issued and all name controls ever associated with them and (2) a file that contains all employer identification numbers ever issued and all name controls ever associated with them. If IRS is able to match the TIN and name control through this process, the information is entered in the IRMF with a code indicating that the TIN was corrected and is valid. If IRS is unable to match the TIN and name control, the information is entered in the IRMF with a code indicating that the TIN is invalid. If the vendor TIN included on the Form 1099 MISC is initially valid or subsequently corrected by IRS, and the vendor files a tax return for the corresponding year, IRS can electronically match the TIN, name control, and amount entered in the IRMF with the amount reported on the vendor’s tax return via the Document Matching Program. This enables IRS to determine whether the vendor has reported all of the income on the tax return. Alternatively, if there is no corresponding return with the same TIN and name control as that entered in the IRMF, IRS can determine that the vendor is a potential nonfiler. However, if the TIN entered in the IRMF is invalid, IRS is unable to use the information to detect either underreporting or nonfiling on the part of a vendor. Since 1997, IRS has had a TIN-matching program that federal agencies can use to verify the accuracy of TIN/name combinations furnished by federal payees. 
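The TIN validation process described above can be sketched in Python. This is an illustrative simplification only: the data structures standing in for the SSN and EIN history files, the function names, and the name-control rule (first four characters of an individual's last name or of a business name) are assumptions drawn from this report's description, not IRS's actual systems.

```python
# Illustrative sketch of the TIN "validation" step described above.
# The file structures and helper names are assumptions, not IRS code.

def name_control(name: str, is_business: bool) -> str:
    """First four characters of a business name, or of an individual's
    last name, uppercased (simplified version of the rule in the report)."""
    base = name if is_business else name.split()[-1]
    return base.replace(" ", "").upper()[:4]

# Stand-ins for (1) the file of all SSNs ever issued and (2) the file of
# all EINs ever issued, each mapping a TIN to its associated name controls.
ssn_file = {"123456789": {"SMIT", "JONE"}}
ein_file = {"987654321": {"ACME"}}

def validate_tin(tin: str, name: str, is_business: bool) -> str:
    """Return 'valid' if the TIN/name-control combination matches either
    file (the return is entered in the IRMF as corrected/valid);
    otherwise return 'invalid' (entered in the IRMF with an invalid code)."""
    nc = name_control(name, is_business)
    for history in (ssn_file, ein_file):
        if nc in history.get(tin, set()):
            return "valid"
    return "invalid"
```

For example, `validate_tin("123456789", "John Smith", False)` would match the SSN file, while a TIN absent from both files would come back `"invalid"` and could not be used for document matching.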
This program was intended to reduce the number of notices of incorrect TIN/name combinations issued for backup withholding by allowing agencies the opportunity to identify TIN and name discrepancies and to contact payees for corrected information before issuing an annual information return, such as a Form 1099 MISC. Monthly, federal agencies can submit a batch of TIN/name combinations to IRS for verification. IRS then matches each record submitted and informs the agency whether the TIN and name combination submitted matches its records. In order to encourage vendors to provide a valid TIN and to ensure that taxes are paid when they do not, Internal Revenue Code Section 3406 requires payers, including federal agencies, to initiate backup withholding of a federal payment if a payee, including a vendor, fails to provide a TIN or provides an invalid TIN, and upon notice fails to provide a correct TIN. IRS considers a TIN to be missing if it is not provided, has more or less than nine numbers, or has an alpha character in one of the nine positions. IRS considers the TIN to be invalid if it is in the proper format, but the TIN/name combination doesn’t match or cannot be found in IRS or Social Security Administration files. Payments subject to backup withholding include various types of income reportable on a Form 1099 MISC, including compensation paid to individuals that are not employees. The current rate for backup withholding is 30 percent of the payment. Federal agencies are not always adhering to Form 1099 MISC filing requirements. For 2000 and 2001, about 152,000 information returns for federal payments totaling about $5 billion were not filed with IRS, while about 170,000 information returns, including $20 billion in federal payments that were filed, included invalid TINs. Few agencies are taking advantage of IRS’s TIN-matching program to validate vendor TINs prior to submitting information returns to IRS. 
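The backup withholding rules just described lend themselves to a short sketch: a TIN is treated as missing if it is absent, not exactly nine characters, or contains a non-digit, and a well-formed TIN is still invalid if the TIN/name combination cannot be matched to IRS or Social Security Administration files. The function names below are illustrative assumptions; only the format criteria and the 30 percent rate come from the report.

```python
# Sketch of the backup withholding rules described above. Helper names
# are hypothetical; the criteria and rate are as stated in the report.

BACKUP_WITHHOLDING_RATE = 0.30  # current rate cited in the report

def tin_is_missing(tin: str) -> bool:
    """A TIN is missing if not provided, if it has more or fewer than
    nine characters, or if any of the nine positions is not a digit."""
    return not tin or len(tin) != 9 or not tin.isdigit()

def backup_withholding(payment: float, tin: str, tin_matches_irs: bool) -> float:
    """Amount a federal payer must withhold from a vendor payment:
    30 percent when the TIN is missing, or well formed but invalid
    (no match in IRS/SSA records); zero otherwise."""
    if tin_is_missing(tin) or not tin_matches_irs:
        return round(payment * BACKUP_WITHHOLDING_RATE, 2)
    return 0.0
```

On a $1,000 service payment with a malformed TIN such as "12345678A", the payer would withhold $300; with a valid, matching TIN, nothing is withheld.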
Similarly, few agencies are initiating backup withholding on payments made to vendors that have provided invalid TINs. While most federal agencies filed information returns for vendors, some did not. For both 2000 and 2001, the 14 federal departments collectively filed over 600,000 Forms 1099 MISC in which they reported over $100 billion in payments each year. (See app. II for the number and dollar value of Forms 1099 MISC filed individually by the 14 federal departments.) Although the 14 federal departments collectively filed a substantial number of Forms 1099 MISC over this 2-year period, we found some significant exceptions, as the following examples illustrate. About $5 billion in payments to about 152,000 payees made collectively by the Departments of Agriculture, Commerce, and Justice for 2000 and 2001 combined were not reported to IRS on Forms 1099 MISC. About 8,800 of these payees, who collectively received payments totaling about $421 million—an average of about $48,000 each—failed to file an income tax return for these 2 years, according to IRS’s records. If information returns had been filed and IRS had this information, it would have provided a basis for IRS to assess the appropriate taxes against these payees. Almost $3.0 billion in payments made via purchase cards by DOD between 2000 and 2001 had not been reported to IRS due to incorrect or missing vendor TINs. DOD officials indicated that obtaining vendor information needed for Forms 1099 MISC from payment card companies has been a long-standing problem. They estimated that they could have filed as many as 40,000 additional Forms 1099 MISC for 2000 and 2001 if they had received the necessary vendor information from payment card companies. 
According to the Department of Transportation, Forms 1099 MISC were not filed for services if the vendor was a corporation that provided both goods and services, as their vendor payment system cannot distinguish between the two for the purpose of issuing Forms 1099 MISC. As a result, only about $8 million of $92 million in service payments for tax years 2000 and 2001 were reported to IRS on Forms 1099 MISC. One Department of Housing and Urban Development agency that made payments to vendors for services totaling over $73 million for 2000 and 2001 failed to file any Forms 1099 MISC for these 2 years. According to a Department of Housing and Urban Development official, because the agency is a wholly owned corporation within HUD and is therefore quasi-federal, agency officials were not aware that they were required to file Forms 1099 MISC. They further indicated that the agency had subsequently issued Forms 1099 MISC to its vendors for payments made for 2002. In response to our survey of departmental policies and practices for filing Forms 1099 MISC, department officials cited various reasons for not filing a Form 1099 MISC for vendor payments. Not having a valid vendor TIN was the foremost reason cited. Other reasons included the inability to distinguish between goods and services provided by a vendor, as cited above, and problems obtaining necessary vendor information, namely TINs, from payment card companies for vendors that are paid via government purchase cards. Even when federal agencies do file Forms 1099 MISC, they often include an invalid vendor TIN. As a result, IRS has to expend resources in an attempt to identify a correct TIN via its TIN validation process and, in most cases, IRS is unable to use the information returns to determine whether vendors had either underreported their income or failed to file a tax return. 
As shown in figure 1, the 14 federal departments filed almost 170,000 Forms 1099 MISC with invalid vendor TINs for tax years 2000 and 2001 combined. Almost $20 billion in vendor payments were included on these information returns. Overall, for the 2 years combined, about 13 percent of all Forms 1099 MISC filed by the 14 federal departments included an invalid TIN when they were submitted to IRS. (See app. III for the number and percentage of Forms 1099 MISC filed individually by the 14 federal departments with invalid TINs.) As also shown in figure 1, IRS was subsequently able to correct about 32 percent of the invalid vendor TINs through its TIN validation process. However, IRS was unable to correct the invalid TINs included on about 116,000 of the Forms 1099 MISC filed by the 14 departments, which were valued at almost $9 billion, an average of about $77,000 per return. As a result, IRS would be unable to match this income with income reported on income tax returns for the same period to determine whether these vendors had either underreported the income or failed to file a tax return. One reason cited by department officials for filing Forms 1099 MISC with invalid TINs was the lack of a means for validating vendor TINs. This was cited, in particular, by those departments whose agencies were not using IRS’s existing TIN-matching program. In addition to negatively affecting IRS’s ability to ensure that vendors report all required income on their tax returns, invalid vendor TINs also impede the Department of the Treasury’s ability to offset federal tax debts through the Federal Payment Levy Program, as well as its ability to offset other debts through the Treasury Offset Program. Each program requires a match of the payee’s TIN and name control on both the payment record submitted to the Financial Management Service (FMS) and the debt information included in the FMS database, in order for the payment to be offset against the debt. 
Although the TIN-matching program is available, most federal agencies do not consistently use this program to ensure that the TINs included on information returns are valid. From our survey of federal department policies and practices for obtaining vendor TINs and filing required Forms 1099 MISC we found the following. Officials from only 2 of the 14 federal departments—Labor and Housing and Urban Development—said their agencies were currently using IRS’s TIN-matching program departmentwide. Even so, we noted that according to IRS’s records, agencies within both departments had filed some Forms 1099 MISC for tax years 2000 and 2001 with invalid vendor TINs. Three other federal departments—Health and Human Services, Interior, and Justice—indicated that IRS’s TIN-matching program is used, but only by some of the agencies or bureaus within the respective departments. While officials from some federal departments said they were unaware of the TIN-matching program, others thought the program was currently unavailable. DOD officials stated that they rely on the CCR for validating vendor TINs and thus do not use the IRS TIN-matching program. A Department of the Interior official indicated that it is in the process of implementing use of the CCR by its bureaus and agencies as of October 2003 at the direction of OMB. Although backup withholding is required if vendors fail to provide a valid TIN to a federal payer, most federal agencies do not initiate backup withholding. From our survey of federal department policies and practices for obtaining vendor TINs and filing required Forms 1099 MISC we found the following. Officials from only 2 of the 14 federal departments—Energy and Transportation—said that their agencies initiate backup withholding departmentwide. Three other federal departments—Health and Human Services, Interior, and Justice—indicated that backup withholding is initiated only by some of the agencies or bureaus within the respective departments. 
The main reason cited by officials from several of the federal departments for not initiating backup withholding was the lack of a process in place within their respective financial management systems for accomplishing backup withholding of vendor payments. Some department officials also indicated that they had no way of knowing when a vendor’s TIN is invalid and therefore subject to backup withholding. An official with one of the agencies within the Department of Health and Human Services indicated that they deny payment to vendors who fail to provide a valid TIN in lieu of backup withholding. IRS has taken some recent actions and has other actions planned to assist federal agencies in complying with Forms 1099 MISC filing requirements, as the following examples illustrate. In August 2003, for the first time, IRS sent a specific notice (Notice 1313) to federal agencies identifying Forms 1099 MISC filed for 2001 in which the vendor’s TIN was invalid and reminding the agencies of their responsibility to ensure that TINs are valid and to initiate backup withholding for any vendors who subsequently fail to provide the agency with a correct TIN upon notification by the agency. Sending these notices annually may address agency concerns about not having a way to determine that a vendor’s TIN is invalid and that backup withholding should be initiated. By the end of 2003, IRS plans to expand its TIN-matching program to enable federal agencies to submit online up to 100,000 TIN/name combinations at a time and to receive a response from IRS within 24 hours concerning whether the TIN/name combinations submitted match the TIN/name combinations in IRS’s records. As an interim step, IRS plans to have an interactive computer application available that will allow federal agencies to submit up to 25 TIN/name combinations and receive feedback within 5 seconds on whether these match the TIN/name combinations in IRS’s records. 
As with the existing TIN-matching program, IRS will not be able to provide an agency with the correct TIN or name if they do not match IRS’s records due to the disclosure laws. Instead, the agencies will continue to be responsible for contacting a vendor for the correct TIN/name combination. However, the online TIN-matching program should make it easier for federal agencies to identify vendors that are to be contacted to obtain a valid TIN and thus prevent the agencies from filing Forms 1099 MISC that include invalid TINs. In February 2003 IRS issued a proposed revenue procedure that would enable payment card companies to act on behalf of cardholders/payers, such as federal agencies, in soliciting, collecting, and validating vendor information, including TINs. This procedure would enable payment card companies to use IRS’s TIN-matching program to validate the TIN/name combinations provided by vendors for which a Form 1099 MISC is to be filed. Once adopted, this procedure may help to eliminate some of the problems agencies have experienced in getting necessary vendor information related to purchase card payments. In addition, IRS has initiated meetings with various federal agencies, including the Departments of Defense and Agriculture, to identify specific problems associated with obtaining valid vendor TINs and filing accurate Forms 1099 MISC, particularly problems related to purchase card payments. In November 2003, IRS plans to present a federal agency seminar covering various topics related to filing Forms 1099 MISC, including use of the TIN-matching program, information reporting requirements, and the previously mentioned proposed revenue procedure. Although IRS can identify whether Forms 1099 MISC filed by federal agencies include a valid TIN, IRS does not have a program to identify and follow up with agencies that fail to file Forms 1099 MISC. 
In addition, the CCR does not, as OMB intends, serve as a central source of valid TIN data that federal agencies can use. IRS does not have a program to periodically identify and follow up with federal agencies that fail to file Forms 1099 MISC for vendor payments. IRS officials indicated that their emphasis has been on identifying Forms 1099 MISC filed with invalid TINs by nonfederal payers. This is because Internal Revenue Code section 6721 authorizes IRS to assess a penalty of $50 against a nonfederal payer for each information return filed with an invalid TIN, up to a maximum penalty of $250,000 per calendar year. IRS proposed just over $204 million in penalties against nonfederal payers for information returns with invalid TINs for tax years 2000 and 2001 combined. IRS estimated that an additional $6.9 million in penalties could have been proposed against federal agencies for filing information returns with invalid TINs, if IRS had the authority to do so. A complete and accurate Payer Master File, which includes general payer information, such as the payer name and TIN, as well as the total number and dollar value of various types of Forms 1099 filed by each payer, would enable IRS to identify federal agencies that fail to file Forms 1099 MISC. IRS could then contact these agencies to ascertain why these returns were not filed. IRS initially indicated to us that federal payers are specifically coded as such in the Payer Master File to distinguish them from nonfederal payers. However, we found that 96 of 147 federal agencies and bureaus for which we needed information concerning Forms 1099 MISC they filed with IRS for 2000 and 2001 were not coded as federal payers in the Payer Master File. IRS officials agreed that there is a need to update the Payer Master File to ensure that all federal payers are properly coded as federal. 
Conducting a survey of all payers included in this file would be a way for IRS to update this information, thus ensuring that all federal payers are correctly coded as federal in the Payer Master File. OMB has instructed federal agencies to begin using the CCR as of October 2003, as the single validated source of information about vendors doing business with the federal government, but CCR vendor TINs are not validated with IRS’s TIN-matching program. The CCR, which is maintained by DOD, includes information on over 234,000 vendors that have registered to do business with DOD, including the vendors’ TIN and name. The accuracy and completeness of information listed in this database is the responsibility of the individual vendors and must be updated annually. According to CCR officials, vendor TINs are not validated via IRS’s TIN-matching program. Instead, CCR does an edit check to ensure that a vendor’s TIN is in the correct format, namely that it contains nine numbers. At the time of our review and resulting July 2001 report mentioned earlier, we found that there were a substantial number of invalid vendor TINs in the CCR. In addition, during our current review, we found that the CCR included about 7,000 vendor employer identification numbers that were not included in IRS’s Business Master File. Due to the lack of validated TINs in the CCR, agencies’ use of this centralized database as a source of TINs for vendors in and of itself would not ensure that the agencies include valid TINs on Forms 1099 MISC submitted to IRS. As noted earlier, in line with OMB’s expectations, DOD relies on the CCR as a source of valid TINs and therefore does not use IRS’s TIN-matching program; Interior officials say they also plan to use the CCR. 
If the name and TIN of vendors recorded in the CCR were validated by DOD initially and then periodically thereafter through IRS's TIN-matching program, the CCR could become a central source of valid vendor TINs for all agencies to use for their Forms 1099 MISC submitted to IRS. However, because agencies are restricted to using the TIN-matching program only for validating TINs for which an information return is required, DOD would not be able to validate all vendor TINs included in the CCR because not all vendors in the CCR actually receive DOD contracts to provide services. This restriction could be addressed through a change to the disclosure laws, thus authorizing DOD to use the TIN-matching program for all vendors that have registered with the CCR. Alternatively, individual vendors could be asked to agree to have their TIN and name matched to IRS data when they apply to do business with the government. Section 6103 of the Internal Revenue Code protects taxpayer information, including TINs, from disclosure. However, taxpayers can waive this protection. This would enable IRS to provide more information than can currently be provided under the TIN-matching program, such as the correct TIN/name combination. Given that the CCR is not currently a valid source of vendor TINs, agencies cannot rely on the CCR as OMB intends. Therefore, each agency would need to use IRS's online TIN-matching program as the only way to independently verify the vendor TINs it must include on its Forms 1099 MISC. However, at the present time, agency use of the TIN-matching program is optional. Some federal agencies' failure to file required annual Forms 1099 MISC and other agencies' failure to file returns with valid vendor TINs adversely affect IRS's efforts to detect unreported vendor income and vendors that fail to file income tax returns. 
In addition, invalid TINs in federal agency payment records negatively affect Department of the Treasury efforts to offset federal tax debts through the Federal Payment Levy Program and other federal debts through the Treasury Offset Program. Although IRS has taken some positive actions to improve federal agency compliance with Form 1099 MISC filing requirements, additional steps could be taken. IRS could identify and follow up with federal agencies that fail to file required Forms 1099 MISC if it had a complete and accurate Payer Master File. In addition, the CCR could become, as OMB intends, a central source for valid vendor information, including TINs. Currently, CCR TIN data are not always accurate. Were it not for current statutory restrictions on the use of IRS's TIN-matching program, the CCR's administrator, DOD, could use IRS's new online TIN-matching program to routinely verify the TINs of all vendors as they are added to the CCR and then periodically thereafter. This would carry out OMB's desire for the CCR to be a central source of valid vendor information and would thereby eliminate the need for each agency to independently verify TINs for some of the same vendors. Asking vendors to permit DOD to routinely verify their TINs when they register to do business with the federal government would be one option to enable DOD to verify TINs in the CCR against IRS's records. Alternatively, OMB and IRS could determine whether an exception to section 6103 of the Internal Revenue Code should be requested. In the absence of the CCR as a valid source of TINs, agencies must individually and voluntarily use IRS's TIN-matching program to validate vendor TINs. Agencies have not consistently used the TIN-matching program in large part because they say they were unaware of it. IRS's new online TIN-matching program, and the publicity IRS plans as it launches the new system later this year, may make officials more aware of the program and increase their use of it. 
However, until OMB realizes its intent of making the CCR a valid source of information on federal vendors, requiring agencies to use the matching program would provide additional assurance that they do so. Because IRS has made the system available online, fulfilling such a requirement should now be easier than in the past. To ensure that federal agencies file Forms 1099 MISC for payments to vendors for services provided, we recommend that the Commissioner of Internal Revenue ensure the accuracy of identification information concerning federal payers in IRS's Payer Master File and develop a program to periodically identify federal agencies that fail to file Forms 1099 MISC and follow up to determine why the forms were not filed. To minimize duplicate agency effort in validating vendors' TINs and to reinforce the anticipated role of the Central Contractor Registration as the single validated source of vendors doing business with the federal government, we also recommend that the Commissioner of Internal Revenue and the Director of the Office of Management and Budget consider options to routinely validate all vendor TINs in the CCR and to then require all agencies to use vendor and TIN information from the CCR for their Forms 1099 MISC. If this proves to be infeasible, OMB should require each agency to use IRS's TIN-matching program to validate TINs for vendors who provide services. We received written comments on a draft of this report from the Commissioner of Internal Revenue (see app. IV) and the Under Secretary of Defense (Comptroller) (see app. V). The Commissioner agreed with our recommendations. However, he also emphasized that agencies may not wish to spend the resources to effectively use IRS's TIN-matching program and that IRS cannot compel agencies to meet their Form 1099 MISC reporting responsibilities. 
To ensure the accuracy of identification information concerning federal payers included in IRS's Payer Master File, the Commissioner agreed to perform periodic reviews of the database to ensure its accuracy. In addition, as part of an education-compliance program geared to federal agencies that IRS is in the process of developing, the Commissioner stated that IRS plans to contact federal agencies to identify and verify that all TINs used by each agency have been properly identified, thus compiling an accurate list of all federal agency payer TINs. To identify federal agencies that fail to file Forms 1099 MISC and the reasons why the forms were not filed, the Commissioner agreed to compare the above-mentioned list of federal agency payer TINs to the Payer Master File to identify agencies that did not file Forms 1099 MISC and to then contact the agencies to determine if Forms 1099 MISC were required. To minimize duplicate agency effort in validating vendors' TINs and to reinforce the role of the CCR as the single validated source of vendors doing business with the federal government, the Commissioner agreed that IRS will work with DOD to ensure that vendor TINs on the CCR are accurate, including exploring the expanded use of the TIN-matching program to validate all TINs included in the CCR. In addition to commenting on the report recommendations, the Commissioner pointed out that IRS Policy Statement P-2-4, which provides that federal agencies are not subject to penalties and interest for failure to comply with Form 1099 MISC filing requirements, is based on a 1978 GAO Comptroller General Decision (B-161457). This decision states that agency appropriations are not available for payment of interest and penalties. The Commissioner noted that if an agency is unwilling or unable to comply with its Form 1099 MISC reporting responsibilities, there is nothing that IRS can do but rely on voluntary compliance on the part of the agency. 
Although we agree that IRS cannot compel agencies to meet their Form 1099 MISC reporting responsibilities, we believe implementing our recommendations will better ensure that agencies do so. For instance, by bringing to agencies' attention that they are not filing the required information returns, IRS can help educate agencies about their reporting responsibilities. Further, by improving the validity of TINs in the CCR, IRS, working with OMB, can make it easier for agencies to comply. The Commissioner also stated that a number of federal agencies indicated that they have been unable to use IRS's TIN-matching program because their financial reporting systems were incompatible with the TIN-matching program and that the cost to make agency systems compatible would be prohibitive. If IRS and DOD are able to arrange validation of TINs included in the CCR via the TIN-matching program as we recommend, this would eliminate the need for individual agencies to use the TIN-matching program. In the event that IRS and DOD are unable to work out such an arrangement, IRS's online TIN-matching program, which can be accessed via the Internet using a desktop workstation, may be an effective alternative to agencies making substantial changes to their financial accounting systems. The Under Secretary of Defense (Comptroller) did not directly say he agreed with our recommendation, but indicated that efforts currently underway to improve the accuracy of TINs in the CCR for both DOD and all other federal agencies mirror our recommendation for IRS and OMB to consider options to routinely validate all vendor TINs in the CCR and to require all agencies to use vendor and TIN information from the CCR for their Forms 1099. 
The Under Secretary pointed out that the mandated use of the CCR throughout the federal government, coupled with IRS's online TIN-matching program, should enable DOD to establish a basic level of validation in the near term, perhaps as soon as the second quarter of fiscal year 2004. The Under Secretary also pointed out that the Defense Finance and Accounting Service (DFAS) has been working with payment card companies, such as VISA and MasterCard, to improve the process for reporting payments made via payment cards. As a result, DFAS expects a significant increase in the number and dollars reported for the card programs on Forms 1099 for calendar year 2003. We commend these efforts to address the long-standing problem of obtaining necessary vendor information from payment card companies. Coupled with IRS's proposal to enable payment card companies to act on behalf of cardholder/payers in soliciting, collecting, and validating vendor information, including TINs, these efforts should go a long way in addressing this problem. On November 26, 2003, we also received oral comments from representatives of OMB's Offices of Federal Procurement Policy and Federal Financial Management. OMB generally agreed with our recommendation. Accordingly, OMB agreed to develop and issue a memorandum to federal agencies directing them to validate TINs by using the TIN-matching program or the CCR. In addition, OMB agreed to work with IRS and the CCR to ensure agencies are provided the necessary information to use either of the methods recommended. Although we believe that either using IRS's TIN-matching program or validating TINs in the CCR can be an effective means for ensuring that agencies include valid TINs on their Forms 1099 MISC, using the CCR as the primary source of valid TINs would reinforce OMB's intention that the CCR become the government's central source of contractor data and would minimize duplicate effort among agencies in validating TINs. 
Therefore, we encourage OMB to pursue use of the CCR as the primary option for agencies to obtain valid TINs. We are sending copies of this report to the Ranking Minority Member, House Committee on Ways and Means; Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means; and the Chairman and Ranking Minority Member, Senate Committee on Finance. We will also send copies to the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, the Secretary of Defense, and other interested parties. Copies of this report will also be made available to others upon request. The report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-9110 or Ralph Block at (415) 904-2150. Key contributors to this report are listed in appendix VI. Our objectives were to determine (1) the extent to which federal agencies file required Forms 1099 MISC, take steps to ensure that information on the returns, particularly Taxpayer Identification Numbers (TINs), is valid, and initiate backup withholding if vendors provide invalid TINs; (2) recent actions the Internal Revenue Service (IRS) has taken to help improve federal agency Form 1099 MISC (Miscellaneous Income) filing compliance; and (3) whether any additional measures could further improve federal agency compliance with Form 1099 MISC filing requirements. To determine whether federal agencies annually file required Forms 1099 MISC with IRS, we requested and obtained vendor service payment information from the 14 federal departments for calendar years 2000 and 2001. We specifically asked for the vendor name, TIN, and total dollar value of all payments made by the various agencies and bureaus within these departments for services provided during calendar years 2000 and 2001. 
However, we were unable to verify whether they fully complied with our request, such as only providing payments for services and not for goods. We compared the information we obtained from the 14 federal departments with vendor payment information included in IRS's Payer Master File (PMF) and Information Returns Master File (IRMF) for the same 2-year period. On the basis of our review of IRS's procedures for processing information returns and our testing of database extracts obtained from IRS's Payer Master File and Information Returns Master File, we determined that the data were sufficiently reliable to enable us to determine whether Forms 1099 MISC had been filed by the agencies and bureaus within the 14 federal departments and, if so, whether they included valid vendor TINs. We obtained vendor payment information from the following federal departments: Agriculture; Commerce; Defense; Education; Energy; Health and Human Services; Housing and Urban Development; Interior; Justice; Labor; State; Transportation; Treasury; and Veterans Affairs. In an effort to gauge the potential result of not filing Forms 1099 MISC, we selected payment information provided to us by the agencies within 3 of the 14 federal departments for 2000 and 2001 and identified the amounts paid to individual payees that were not included on IRS's IRMF, thus indicating that a Form 1099 MISC had not been filed. We then compared the payee information to an IRS file of nonfilers to determine whether the individual payees had filed federal income tax returns for the comparable years. To determine whether Forms 1099 MISC filed with IRS by federal agencies include valid vendor TINs, we analyzed IRS's IRMF for calendar years 2000 and 2001. We identified the number and dollar value of Forms 1099 MISC filed by the 14 federal departments that, according to the IRMF, contained invalid TINs. 
Of these, we further identified the number and dollar value associated with invalid TINs that IRS was able to correct via its TIN validation program, as well as those that remained invalid because IRS was unsuccessful in correcting them. To determine whether federal agencies take steps to ensure that information on the returns, particularly TINs, is valid, by using IRS's TIN-matching program to validate vendor TINs or initiating backup withholding on future payments to vendors that have submitted invalid TINs, we sent a survey to the 14 federal departments about their policies and practices for obtaining vendor TINs and filing Forms 1099 MISC. We asked whether they validate vendor TINs through IRS's TIN-matching program and, if not, why not. We also asked whether they initiate backup withholding if it is determined that a vendor has provided an invalid TIN. We then summarized the overall department responses. To identify recent actions IRS has taken to help improve federal agency Form 1099 MISC filing compliance, we discussed with IRS officials any actions that were either recently implemented or pending. We also tracked the progress of IRS's pending online TIN-matching program, which is expected to be available to federal agencies in the latter part of 2003. To identify any additional measures that could further improve federal agency compliance with Form 1099 MISC reporting requirements, we discussed this issue with IRS and Office of Management and Budget (OMB) officials and analyzed both recent and pending actions that would affect such compliance. We did our work at IRS and OMB headquarters in Washington, D.C., from June 2002 through September 2003 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from the Commissioner of Internal Revenue (see app. IV) and the Secretary of Defense (see app. V). We also obtained oral comments from representatives of the Office of Management and Budget. 
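The file matching described in this methodology amounts to a set comparison between agency payment records and the information returns IRS has on file. A minimal sketch, using invented data and field names for illustration only:

```python
# Hypothetical data: agency payment records (payee TIN -> total paid
# for services) and the set of payee TINs for which a Form 1099 MISC
# appears in IRS's Information Returns Master File (IRMF).
agency_payments = {
    "111111111": 50_000,
    "222222222": 12_500,
    "333333333": 8_000,
}
irmf_tins = {"111111111", "333333333"}

# Payments with no corresponding information return on file suggest
# that a required Form 1099 MISC was not filed for that payee.
unreported = {tin: amount for tin, amount in agency_payments.items()
              if tin not in irmf_tins}
print(unreported)  # {'222222222': 12500}
```

The actual comparison also had to contend with invalid TINs and payee name mismatches, which this sketch omits.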
This appendix provides details concerning the specific number and dollar value of Forms 1099 MISC filed by each of the 14 federal departments for tax years 2000 and 2001. As table 1 shows, the Department of Defense filed the greatest number and dollar value of Forms 1099 MISC each year, while the Department of Transportation filed the least. With a few exceptions, most departments filed more Forms 1099 MISC in 2001 than in 2000. This appendix provides details concerning the specific number and percentage of Forms 1099 MISC filed by each of the 14 federal departments for tax years 2000 and 2001 that included invalid TINs when received by IRS. As table 2 shows, the Departments of Defense and Veterans Affairs filed the greatest number of Forms 1099 MISC with invalid TINs each year, while the Departments of Transportation and Education filed the least. The Departments of Transportation and Agriculture filed the greatest percentage of Forms 1099 MISC with invalid TINs each year, while the Department of Health and Human Services filed the least. In addition to those named above, Tom Bloom, Janet Eackloff, Evan Gilman, Shirley Jones, Bob McKay, and James Ungvarsky made key contributions to this report.
The Internal Revenue Service (IRS) matches information returns filed by third parties, including federal agencies, with taxpayers' income tax returns to determine whether taxpayers have filed a return and/or reported all of their income. A correct taxpayer identification number (TIN) is necessary to enable IRS to match these returns. Prior GAO reviews have shown that federal agency payment records often include invalid TINs, particularly for vendors. GAO was asked to study federal agencies' compliance with filing information returns for service payments made to vendors, IRS's efforts to improve agencies' compliance, and whether additional measures could improve their compliance. Federal agencies do not always adhere to information return reporting requirements. About $5 billion in payments to 152,000 payees made during 2000 and 2001 by agencies within three federal departments were not reported to IRS. About 8,800 of these payees had received $421 million in payments, yet had failed to file a tax return for these years. In addition, about $20 billion in payments that were reported to IRS on 170,000 information returns for 2000 and 2001 included invalid vendor TINs. This was due in part to the fact that few federal agencies use IRS's TIN-matching program, as use of this program is optional. IRS has acted to aid federal agencies in complying with annual information return filing requirements. In August 2003, IRS notified federal agencies about information returns filed for 2001 that included invalid vendor TINs and the need for agencies to withhold a portion of future payments if the vendors fail to provide a valid TIN. IRS is also in the process of making the TIN-matching program available online. IRS does not currently have a program to identify and follow up with federal agencies that fail to file required annual information returns for vendor payments. 
Improvements to IRS's Payer Master File, which contains general information on all payers who file information returns, would be necessary for such a program. In addition, although the Central Contractor Registration is intended for use as a central source of valid vendor information by all federal agencies, it contains some invalid TINs. Due to statutory restrictions, not all vendor TINs in this database can currently be validated through the IRS TIN-matching program, but options exist to address this problem.
The federal government is projected to invest more than $89 billion in IT in fiscal year 2017. However, as we have previously reported, investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example:

- The Department of Veterans Affairs' Scheduling Replacement Project was terminated in September 2009 after an estimated $127 million was invested over 9 years.
- The tri-agency National Polar-orbiting Operational Environmental Satellite System was disbanded in February 2010 at the direction of the White House's Office of Science and Technology Policy after 16 years and an investment of almost $5 billion.
- The Department of Homeland Security's Secure Border Initiative Network program was ended in January 2011, after the department invested more than $1 billion in the program.
- The Office of Personnel Management's Retirement Systems Modernization program was canceled in February 2011, after the agency invested approximately $231 million in its third attempt to automate the processing of federal employee retirement claims.
- The Department of Veterans Affairs' Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program.
- The Department of Defense's Expeditionary Combat Support System was canceled in December 2012 after the department invested more than a billion dollars and failed to deploy the system within 5 years of initially obligating funds.
- The Farm Service Agency's Modernize and Innovate the Delivery of Agricultural Systems program, which was to replace aging hardware and software applications that process benefits to farmers, was halted in July 2014 after about 10 years and an investment of at least $423 million, while delivering only about 20 percent of the functionality originally planned. 
Our past work found that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT. Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government have often been ineffective, specifically from chief information officers (CIO). For example, we reported that some CIOs' authority was limited in that not all CIOs had the authority to review and approve the entire agency IT portfolio. Our past work has also identified nine critical factors underlying successful major acquisitions that support the objective of improving the management of large-scale IT acquisitions across the federal government: (1) program officials actively engaging with stakeholders; (2) program staff having the necessary knowledge and skills; (3) senior department and agency executives supporting the programs; (4) end users and stakeholders being involved in the development of requirements; (5) end users participating in the testing of system functionality prior to end user acceptance testing; (6) government and contractor staff being stable and consistent; (7) program staff prioritizing requirements; (8) program officials maintaining regular communication with the prime contractor; and (9) programs receiving sufficient funding. Recognizing the importance of issues related to government-wide management of IT, FITARA was enacted in December 2014. The law was aimed at improving agencies' acquisitions of IT and could help enable Congress to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to the acquisition of IT, such as:

- Agency CIO authority enhancements. CIOs at covered agencies are required to (1) approve the IT budget requests of their respective agencies, (2) certify that OMB's incremental development guidance is being adequately implemented for IT investments, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO.
- Enhanced transparency and improved risk management. OMB and covered agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their IT investments by level of risk. Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment's program manager conduct a review aimed at identifying and addressing the causes of the risk.
- Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres.
- Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user.
- Maximizing the benefit of the federal strategic sourcing initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the federal strategic sourcing initiative. OMB is also required to issue related regulations. 
In February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlights several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of IT investments. We noted that implementation of these initiatives has been inconsistent and more work remains to demonstrate progress in achieving successful IT acquisitions and operations outcomes. Further, our February 2015 high-risk report also stated that, beyond implementing FITARA, OMB and agencies needed to continue to implement our prior recommendations in order to improve their ability to effectively and efficiently invest in IT. Specifically, between fiscal years 2010 and 2015, we made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations, including many to improve the implementation of the recent initiatives and other government-wide, cross-cutting efforts. We noted that OMB and agencies should demonstrate government-wide progress in the management of IT investments by, among other things, implementing at least 80 percent of our recommendations related to managing IT acquisitions and operations within 4 years. In February 2017, we issued an update to our high-risk series and reported that, while progress had been made in improving the management of IT acquisitions and operations, significant work still remained to be completed. For example, as of December 2016, OMB and the agencies had fully implemented 366 (or about 46 percent) of the 803 recommendations. 
This was a 23 percent increase compared to the percentage we reported as being fully implemented in 2015. Figure 1 summarizes the progress that OMB and the agencies have made in addressing our recommendations, as compared to the 80 percent target. In addition, in fiscal year 2016, we made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings in IT acquisitions and operations. In addition to addressing our prior recommendations, our 2017 high-risk update also notes the importance of OMB and federal agencies continuing to expeditiously implement the requirements of FITARA. Given the magnitude of the federal government’s annual IT budget, which is projected to be more than $89 billion in fiscal year 2017, it is important that agencies leverage all available opportunities to ensure that IT investments are made in the most effective manner possible. To do so, agencies can rely on key IT workforce planning activities to facilitate the success of major acquisitions. OMB has also established several initiatives to improve the acquisition of IT, including reviews of troubled IT projects, a key transparency website, and an emphasis on incremental development. However, the implementation of these efforts has been inconsistent and more work remains to demonstrate progress in achieving successful IT acquisition outcomes. An area where agencies can improve their ability to acquire IT is workforce planning. In November 2016, we reported that IT workforce planning activities, when effectively implemented, can facilitate the success of major acquisitions. As stated earlier, ensuring program staff have the necessary knowledge and skills is a factor commonly identified as critical to the success of major investments. If agencies are to ensure that this critical success factor has been met, then IT skill gaps need to be adequately assessed and addressed through a workforce planning process. 
In this regard, we reported that four workforce planning steps and eight key activities can assist agencies in assessing and addressing IT knowledge and skill gaps. Specifically, these four steps are: (1) setting the strategic direction for IT workforce planning, (2) analyzing the workforce to identify skill gaps, (3) developing and implementing strategies to address IT skill gaps, and (4) monitoring and reporting progress in addressing skill gaps. Each of the four steps is supported by key activities (as summarized in table 1). However, in our November 2016 report, we determined that five agencies that we selected for in-depth analysis had not fully implemented key workforce planning steps and activities. For example, four of these agencies had not demonstrated an established IT workforce planning process. In addition, none of these agencies had fully assessed their workforce competencies and staffing needs regularly or established strategies and plans to address gaps in these areas. Figure 2 illustrates the extent to which the five selected agencies had fully, partially, or not implemented key IT workforce planning activities. The weaknesses identified were due, in part, to these agencies lacking comprehensive policies that required such activities, or failing to apply the policies to IT workforce planning. We concluded that, until these weaknesses are addressed, the five agencies risk not adequately assessing and addressing gaps in knowledge and skills that are critical to the success of major acquisitions. Accordingly, we made recommendations to each of the five selected agencies to address the weaknesses in their IT workforce planning practices that we identified. Four agencies—the Departments of Commerce, Health and Human Services, Transportation, and Treasury—agreed with our recommendations and one, the Department of Defense, partially agreed. 
In January 2010, the Federal CIO began leading TechStat sessions—face-to-face meetings to terminate or turn around IT investments that are failing or are not producing results. These meetings involve OMB and agency leadership and are intended to increase accountability and transparency and improve performance. OMB reported that federal agencies achieved over $3 billion in cost savings or avoidances as a result of these sessions in 2010. Subsequently, OMB empowered agency CIOs to hold their own TechStat sessions within their respective agencies. In June 2013, we reported that, while OMB and selected agencies continued to hold additional TechStats, more OMB oversight was needed to ensure that these meetings were having the appropriate impact on underperforming projects. Specifically, OMB reported conducting TechStats at 23 federal agencies covering 55 investments, 30 of which were considered medium or high risk at the time of the TechStat. However, these reviews accounted for less than 20 percent of medium- or high-risk investments government-wide. As of August 2012, there were 162 such at-risk investments across the government. Further, we reviewed four selected agencies and found they had held TechStats on 28 investments. While these reviews were generally conducted in accordance with OMB guidance, we found that areas for improvement existed. For example, these agencies did not consistently create memorandums with responsible parties and due dates for action items. We concluded that, until these agencies fully implemented OMB's TechStat guidance, they may not be positioned to effectively manage and resolve problems on IT investments. In addition, we noted that, until OMB and agencies develop plans and schedules to review medium- and high-risk investments, the investments would likely remain at risk.
Among other things, we recommended that OMB require agencies to conduct TechStats for each IT investment rated with a moderately high- or high-risk rating, unless there is a clear reason for not doing so. OMB generally agreed with this recommendation. However, when we testified on this issue slightly more than 2 years later in November 2015, we found that OMB had only conducted one TechStat review between March 2013 and October 2015. In addition, we noted that OMB had not listed any savings from TechStats in any of its required quarterly reporting to Congress since June 2012. This issue continues to be a concern and, in January 2017, the Federal CIO Council issued a report titled the State of Federal Information Technology, which noted that, while early TechStats saved money and turned around underperforming investments, it was unclear if OMB had performed any TechStats in recent years. To facilitate transparency across the government in acquiring and managing IT investments, OMB established a public website—the IT Dashboard—to provide detailed information on major investments at 26 agencies, including ratings of their performance against cost and schedule targets. Among other things, agencies are to submit ratings from their CIOs, which, according to OMB's instructions, should reflect the level of risk facing an investment relative to that investment's ability to accomplish its goals. In this regard, FITARA includes a requirement for CIOs to categorize their major IT investment risks in accordance with OMB guidance. Over the past 6 years, we have issued a series of reports about the IT Dashboard that noted both significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating the IT Dashboard and issues with the accuracy and reliability of its data.
In total, we have made 47 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the IT Dashboard and to increase its availability. Most agencies have agreed with our recommendations. Most recently, in June 2016, we determined that 13 of the 15 agencies selected for in-depth review had not fully considered risks when rating their major investments on the IT Dashboard. Specifically, our assessments of risk for 95 investments at 15 selected agencies matched the CIO ratings posted on the Dashboard 22 times, showed more risk 60 times, and showed less risk 13 times. Figure 3 summarizes how our assessments compared to the selected investments' CIO ratings. Aside from the inherently judgmental nature of risk ratings, we identified three factors that contributed to differences between our assessments and the CIO ratings:

(1) Forty of the 95 CIO ratings were not updated during the month we reviewed, which led to more differences between our assessments and the CIOs' ratings. This underscores the importance of frequent rating updates, which help to ensure that the information on the Dashboard is timely and accurately reflects recent changes to investment status.

(2) Three agencies' rating processes spanned longer than 1 month. Longer processes mean that CIO ratings are based on older data, and may not reflect the current level of investment risk.

(3) Seven agencies' rating processes did not focus on active risks. According to OMB's guidance, CIO ratings should reflect the CIO's assessment of the risk and the investment's ability to accomplish its goals. CIO ratings that do not incorporate active risks increase the chance that ratings overstate the likelihood of investment success.
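To illustrate the kind of comparison described above, the following sketch tallies how an independent risk assessment lines up with CIO ratings on a shared three-level scale. The investment ratings and the scale itself are invented for illustration; they are not actual Dashboard data or GAO's assessment methodology.

```python
# Hypothetical sketch: count investments where an independent assessment
# matched the CIO rating, showed more risk, or showed less risk.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def compare_ratings(pairs):
    """pairs: list of (cio_rating, independent_rating) tuples."""
    tally = {"matched": 0, "more_risk": 0, "less_risk": 0}
    for cio_rating, our_rating in pairs:
        diff = LEVELS[our_rating] - LEVELS[cio_rating]
        if diff == 0:
            tally["matched"] += 1
        elif diff > 0:
            tally["more_risk"] += 1
        else:
            tally["less_risk"] += 1
    return tally

# Illustrative sample, not actual investment data.
sample = [("low", "low"), ("low", "medium"), ("medium", "high"), ("high", "medium")]
print(compare_ratings(sample))  # {'matched': 1, 'more_risk': 2, 'less_risk': 1}
```

A pattern in which "more risk" counts dominate, as in the 60-of-95 result reported above, suggests that the ratings under review systematically understate risk.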
As a result, we concluded that the associated risk rating processes used by the 15 agencies were generally understating the level of an investment’s risk, raising the likelihood that critical federal investments in IT are not receiving the appropriate levels of oversight. To better ensure that the Dashboard ratings more accurately reflect risk, we recommended that the 15 agencies take actions to improve the quality and frequency of their CIO ratings. Twelve agencies generally agreed with or did not comment on the recommendations and three agencies disagreed, stating their CIO ratings were adequate. However, we noted that weaknesses in their processes still existed and that we continued to believe our recommendations were appropriate. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing OMB’s incremental development guidance. In May 2014, we reported that 66 of 89 selected investments at five major agencies did not plan to deliver capabilities in 6-month cycles, and less than half of these investments planned to deliver functionality in 12-month cycles. We also reported that only one of the five agencies had complete incremental development policies. Accordingly, we recommended that OMB develop and issue clearer guidance on incremental development and that the selected agencies update and implement their associated policies. Four of the six agencies agreed with our recommendations or had no comments; the remaining two agencies partially agreed or disagreed with the recommendations. 
The agency that disagreed with our recommendation stated that it believed the recommendation should be dependent on OMB first taking action. However, we noted that our recommendation does not require OMB to take action first and that we continued to believe our recommendation was warranted and could be implemented. Subsequently, in August 2016, we reported that agencies had not fully implemented incremental development practices for their software development projects. Specifically, we noted that, as of August 31, 2015, 22 federal agencies had reported on the IT Dashboard that 300 of 469 active software development projects (approximately 64 percent) were planning to deliver usable functionality every 6 months for fiscal year 2016, as required by OMB guidance. Regarding the remaining 169 projects (or 36 percent) that were reported as not planning to deliver functionality every 6 months, agencies provided a variety of explanations for not achieving that goal. These included project complexity, the lack of an established project release schedule, or that the project was not a software development project. Table 2 lists the total number and percent of federal software development projects for which agencies reported plans to deliver functionality every 6 months for fiscal year 2016. In conducting an in-depth review of seven selected agencies' software development projects, we determined that 45 percent of the projects delivered functionality every 6 months for fiscal year 2015 and 55 percent planned to do so in fiscal year 2016. Agency officials reported that management and organizational challenges and project complexity and uniqueness had impacted their ability to deliver incrementally. We concluded that it was critical that agencies continue to improve their use of incremental development to deliver functionality and reduce the risk that these projects will not meet cost, schedule, and performance goals.
In addition, while OMB had issued guidance requiring covered agency CIOs to certify that each major IT investment’s plan for the current year adequately implements incremental development, only three agencies (the Departments of Commerce, Homeland Security, and Transportation) had defined processes and policies intended to ensure that the department CIO certifies that major IT investments are adequately implementing incremental development. Officials from three other agencies (the Departments of Education, Health and Human Services, and the Treasury) reported that they were in the process of updating their existing incremental development policy to address certification, while the Department of Defense’s policies that address incremental development did not include information on CIO certification. We concluded that until all of the agencies we reviewed define processes and policies for the certification of the adequate use of incremental development, they will not be able to fully ensure adequate implementation of, or benefit from, incremental development practices. Accordingly, we recommended that four agencies establish a policy and process for the certification of major IT investments’ adequate use of incremental development. The Departments of Education and Health and Human Services agreed with our recommendation, while the Department of Defense disagreed and stated that its existing policies address the use of incremental development. However, we noted that the department’s policies did not comply with OMB’s guidance and that we continued to believe our recommendation was appropriate. The Department of the Treasury did not comment on the recommendation. In conclusion, with the enactment of FITARA, the federal government has an opportunity to improve the transparency and management of IT acquisitions, and to strengthen the authority of CIOs to provide needed direction and oversight. 
In addition to implementing FITARA, applying key IT workforce planning practices could improve the agencies' ability to assess and address gaps in knowledge and skills that are critical to the success of major acquisitions. Further, continuing to implement key OMB initiatives can help to improve the acquisition of IT. For example, conducting additional TechStat reviews can help focus management attention on troubled projects and provide a mechanism to establish clear action items to improve project performance or terminate the investment. Additionally, improving the assessment of risks when agencies rate major investments on the IT Dashboard would likely provide greater transparency and oversight of the government's billions of dollars in IT investments. Lastly, increasing the use of incremental development approaches could improve the likelihood that major IT investments meet cost, schedule, and performance goals.

Chairmen Hurd and Meadows, Ranking Members Kelly and Connolly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Chris Businsky, Rebecca Eyler, and Jon Ticehurst (Analyst in Charge).

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps. GAO-17-8. Washington, D.C.: November 30, 2016.
Information Technology Reform: Agencies Need to Increase Their Use of Incremental Development Practices. GAO-16-469. Washington, D.C.: August 16, 2016.
IT Dashboard: Agencies Need to Fully Consider Risks When Rating Their Major Investments. GAO-16-494. Washington, D.C.: June 2, 2016.
High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Information Technology: Agencies Need to Establish and Implement Incremental Development Policies. GAO-14-361. Washington, D.C.: May 1, 2014.
IT Dashboard: Agencies Are Managing Investment Risk, but Related Ratings Need to Be More Accurate and Available. GAO-14-64. Washington, D.C.: December 12, 2013.
Information Technology: Additional Executive Review Sessions Needed to Address Troubled Projects. GAO-13-524. Washington, D.C.: June 13, 2013.
IT Dashboard: Opportunities Exist to Improve Transparency and Oversight of Investment Risk at Select Agencies. GAO-13-98. Washington, D.C.: October 16, 2012.
IT Dashboard: Accuracy Has Improved, and Additional Efforts Are Under Way to Better Inform Decision Making. GAO-12-210. Washington, D.C.: November 7, 2011.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is projected to invest more than $89 billion on IT in fiscal year 2017. Historically, these investments have frequently failed, incurred cost overruns and schedule slippages, or contributed little to mission-related outcomes. Accordingly, in December 2014, IT reform legislation was enacted, aimed at improving agencies' acquisitions of IT. Further, in February 2015, GAO added improving the management of IT acquisitions and operations to its high-risk list. This statement focuses on the status of federal efforts in improving the acquisition of IT. Specifically, this statement summarizes GAO's prior work primarily published between June 2013 and February 2017 on (1) key IT workforce planning activities, (2) risk levels of major investments as reported on OMB's IT Dashboard, and (3) implementation of incremental development practices, among other issues. The Federal Information Technology Acquisition Reform Act (FITARA) was enacted in December 2014 to improve federal information technology (IT) acquisitions and can help federal agencies reduce duplication and achieve cost savings. Successful implementation of FITARA will require the Office of Management and Budget (OMB) and federal agencies to take action in a number of areas identified in the law and as previously recommended by GAO. IT workforce planning. GAO identified eight key IT workforce planning practices in November 2016 that are critical to ensuring that agencies have the knowledge and skills to successfully acquire IT, such as analyzing the workforce to identify gaps in competencies and staffing. However, GAO reported that the five selected federal agencies it reviewed had not fully implemented these practices. For example, none of these agencies had fully assessed their competency and staffing needs regularly or established strategies and plans to address gaps in these areas. These weaknesses were due, in part, to agencies lacking comprehensive policies that required these practices. 
Accordingly, GAO made specific recommendations to the five agencies to address the practices that were not fully implemented. Four agencies agreed and one partially agreed with GAO's recommendations. IT Dashboard. To facilitate transparency into the government's acquisition of IT, OMB's IT Dashboard provides detailed information on major investments at federal agencies, including ratings from Chief Information Officers (CIO) that should reflect the level of risk facing an investment. GAO reported in June 2016 that 13 of the 15 agencies selected for in-depth review had not fully considered risks when rating their investments on the IT Dashboard. In particular, of the 95 investments reviewed, GAO's assessments of risks matched the CIO ratings 22 times, showed more risk 60 times, and showed less risk 13 times. Several factors contributed to these differences, such as CIO ratings not being updated frequently and using outdated risk data. GAO recommended that agencies improve the quality and frequency of their ratings. Most agencies agreed with GAO's recommendations. Incremental development. An additional reform initiated by OMB has emphasized the need for federal agencies to deliver investments in smaller parts, or increments, in order to reduce risk and deliver capabilities more quickly. Specifically, since 2012, OMB has required investments to deliver functionality every 6 months. In August 2016, GAO determined that, for fiscal year 2016, 22 agencies had reported on the IT Dashboard that 64 percent of their software development projects would deliver useable functionality every 6 months. However, GAO determined that only three of seven agencies selected for in-depth review had policies regarding the CIO certifying IT investments' adequate implementation of incremental development, as required by OMB. GAO recommended, among other things, that four agencies improve their policies for CIO certification of incremental development. 
Most of these agencies agreed with the recommendations. Between fiscal years 2010 and 2015, GAO made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations. The significance of these recommendations contributed to the addition of this area to GAO's high-risk list. As of December 2016, OMB and the agencies had fully implemented 366 (or about 46 percent) of the 803 recommendations. In fiscal year 2016, GAO made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings GAO has identified.
Oceangoing cargo containers have an important role in the movement of cargo between global trading partners. Approximately 90 percent of the world’s trade is transported in cargo containers. In the United States almost half of incoming trade (by value) arrives by containers aboard ships. If terrorists smuggled a weapon of mass destruction into the nation using a cargo container and detonated such a weapon at a seaport, the incident could cause widespread death and damage to the immediate area, perhaps shut down seaports nationwide, cost the U.S. economy billions of dollars, and seriously hamper international trade. The Department of Homeland Security and CBP are responsible for addressing the threat posed by terrorist smuggling of weapons in oceangoing containers. To carry out this responsibility, CBP uses a layered security strategy. One key element of this strategy is ATS. CBP uses ATS to review documentation, including electronic manifest information submitted by the ocean carriers on all arriving shipments, to help identify containers for additional inspection. CBP requires the carriers to submit manifest information 24 hours prior to a United States-bound sea container being loaded onto a vessel in a foreign port. ATS is a complex mathematical model that uses weighted rules that assign a risk score to each arriving shipment in a container based on manifest information. As previously discussed, CBP officers use these scores to help them make decisions on the extent of documentary review or physical inspection to be conducted. ATS is an important part of other layers in the security strategy. Under its CSI program, CBP places staff at designated foreign seaports to work with foreign counterparts to identify and inspect high-risk containers for weapons of mass destruction before they are shipped to the United States. 
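As described above, ATS assigns each arriving shipment a risk score by applying weighted rules to its manifest information. The sketch below illustrates the general shape of such a weighted-rule scoring model; the rule names, weights, and manifest fields are invented assumptions for illustration and do not represent ATS's actual rules or data.

```python
# Hypothetical sketch of weighted-rule risk scoring: each rule inspects a
# shipment's manifest and, if it fires, contributes its weight to the total
# score. Rules and weights here are illustrative, not ATS's actual rules.
RULES = [
    ("unknown_shipper",   lambda m: m.get("shipper") is None,            30),
    ("vague_description", lambda m: m.get("description") in ("", "FAK"), 20),
    ("first_time_route",  lambda m: m.get("prior_shipments", 0) == 0,    15),
]

def risk_score(manifest):
    """Sum the weights of all rules that fire on this manifest."""
    return sum(weight for _, rule, weight in RULES if rule(manifest))

manifest = {"shipper": None, "description": "FAK", "prior_shipments": 4}
print(risk_score(manifest))  # 50: unknown_shipper (30) + vague_description (20)
```

Officers could then use a threshold on the resulting score, along with other operational information, to decide the extent of documentary review or physical inspection, which is the role the report describes for ATS scores.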
At these foreign seaports, CBP officials use ATS to help target shipments for inspection by foreign customs officials prior to departing for the United States. Approximately 73 percent of cargo containers destined for the United States originate in or go through CSI ports. ATS is also an important factor in the Customs-Trade Partnership Against Terrorism (C-TPAT) program. C-TPAT is a cooperative program linking CBP and members of the international trade community in which private companies agree to improve the security of their supply chains in return for a reduced likelihood that their containers will be inspected. Specifically, C-TPAT members receive a range of benefits, some of which could change the ATS risk characterization of their shipments, thereby reducing the probability of extensive documentary and physical inspection. CBP does not yet have key controls in place to provide reasonable assurance that ATS is effective at targeting oceangoing cargo containers with the highest risk of containing smuggled weapons of mass destruction. To address this shortcoming, CBP is (1) developing and implementing performance metrics to measure the effectiveness of ATS, (2) planning to compare the results of randomly conducted inspections with the results of its ATS inspections, (3) developing and implementing a simulation and testing environment, and (4) addressing recommendations contained in a 2005 peer review. To date, none of these control activities has been fully completed or implemented. Thus, CBP does not yet have key internal controls in place to be reasonably certain that ATS is providing the best available information for allocating resources to target and inspect the highest-risk containers, and it therefore risks overlooking containers that pose a high threat to the nation.
CBP does not yet have performance measures in place to help it determine the effectiveness of ATS at targeting oceangoing cargo containers with the highest risk of smuggled weapons of mass destruction. The Comptroller General’s internal control standards include the establishment and review of performance measures as one example of a control activity to help an entity ensure it is achieving effective results. In July 2005, CBP contracted with a consulting firm to develop such performance metrics. CBP officials and personnel from this consulting firm told us that the firm’s personnel analyzed shipment information in ATS over a 2-year period to obtain additional insights into ATS’s performance and to determine whether ATS is more effective at targeting cargo containers for terrorism related risk than a random sampling inspection approach. CBP officials told us that the consulting firm’s personnel prepared a draft of the results of their analyses and that, as of March 21, 2006, CBP officials are reviewing these analyses. They also said that the consulting firm’s personnel are documenting the methodology for their analyses and related performance measures that CBP can use in the future. CBP officials expect to receive this methodology and the performance measures in April 2006, and told us that they expect to begin using the measures in June 2006. CBP officials also told us that they initially planned to have performance measures developed by August 31, 2005, but that this process has taken longer than expected because of delays in (1) obtaining security clearances for the consulting firm’s personnel, (2) obtaining workspace for the firm’s staff, and (3) arranging for the appropriate levels of access to CBP’s information systems. Currently, CBP is not using the results of its random sampling program to assess the effectiveness of ATS. 
As part of its Compliance Measurement Program, CBP plans to randomly select 30,000 shipments based on entry information submitted by the trade community and examine those shipments to ensure compliance with supply chain security during fiscal year 2006. At this time, CBP is unable to compare the examination results from its random sampling program with its ATS inspection results, as we recommended in our 2004 report, because CBP does not yet have an integrated, comprehensive system in place to compare multiple sets of data—such as the results of random inspections with the results of routine ATS inspections that were triggered by ATS scores and other operational circumstances. Such a comparison would allow CBP to examine whether, and why, the outcomes of ATS's weighted rule sets are inconsistent with the outcomes expected in the universe of cargo containers, based on sample projections. Furthermore, the Comptroller General's standards for internal control state that information should be recorded and communicated to management and others within the entity who need it in a form that enables them to carry out their responsibilities. Currently, CBP does not conduct simulated events (e.g., covert tests and computer-generated simulations)—a key control activity—to test and validate the effectiveness of ATS in targeting oceangoing cargo containers with the highest risk of containing smuggled weapons of mass destruction and has not yet implemented a dedicated simulation and testing environment. Without testing and validation, CBP lacks a vital mechanism for evaluating ATS's ability to identify high-risk containers. In July 2005, CBP contracted with a consulting firm to obtain assistance in the development of a computer-generated simulation and testing environment. CBP officials report that they have the simulation environment infrastructure in place and have processed mock manifest data to simulate cargo linked to terrorism in the new environment.
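The comparison between random-sample results and targeted (score-driven) inspection results described above can be reduced to a simple idea: if targeting works, the rate at which targeted inspections find violations should exceed the baseline rate from random inspections. The sketch below illustrates that comparison with invented figures; it is not CBP's methodology or actual inspection data.

```python
# Simplified, hypothetical sketch: compare the "hit rate" (fraction of
# inspections finding a violation) of a random sample against targeted
# inspections. All figures below are invented for illustration.
def hit_rate(inspections):
    """Fraction of inspections that found a violation (1 = hit, 0 = clean)."""
    return sum(inspections) / len(inspections) if inspections else 0.0

random_sample = [0, 0, 1, 0, 0, 0, 0, 0, 0, 1]   # baseline: 2 hits in 10
targeted      = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]   # targeted: 6 hits in 10

baseline = hit_rate(random_sample)   # 0.2
observed = hit_rate(targeted)        # 0.6
print(observed > baseline)           # True: targeting outperforms random
```

In practice such a comparison would also need statistical tests and sample-size considerations before concluding that the targeting rules outperform random selection, but the underlying benchmark is the same.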
CBP is currently reviewing the results of this test. Further, CBP officials told us that the consulting firm is continuing to work with CBP to develop system requirements so that officers can effectively use the simulation environment. CBP expects to receive the consulting firm’s final input for the simulation and testing environment by June 2006. CBP officials said that they cannot estimate when this simulation and testing environment will be fully operational until CBP receives the consulting firm’s final product. As with the development of performance measures, CBP officials also told us that this process has taken longer than expected because of delays in (1) obtaining security clearances for the consulting firm’s personnel, (2) obtaining workspace for the firm’s staff, and (3) arranging for the appropriate levels of access to CBP’s information systems. As we reported in 2004, terrorism experts suggested that testing ATS by covertly simulating a realistic event using probable methods of attack would give CBP an opportunity to examine how ATS would perform in an actual terrorist situation. CBP officials told us that although they are considering implementing this kind of practice, they do not currently have a program in place to conduct such tests. The Director of CBP’s Management Inspections and Integrity Assurance office told us that in mid-April 2006, his office will be presenting a proposal to the Acting Commissioner and other senior management to request initiation of a program to conduct testing of the CSI program that will include testing ATS to help ensure that it is appropriately targeting the highest-risk cargo in the CSI program. In response to our 2004 recommendation that CBP initiate an external peer review of ATS, CBP contracted with a consulting firm to evaluate CBP’s targeting methodology and recommend improvements. 
Specifically, the contractor identified strengths of the CBP targeting methodology and compared ATS with other targeting methodologies. However, the peer review did not evaluate the overall effectiveness of ATS because CBP did not have the systems in place to allow the contractor to do so. The contractor's final report, issued in April 2005, identified many strengths in the ATS targeting methodology, such as a very capable and highly dedicated team and the application of a layered approach to targeting. It also made several recommendations to improve the targeting methodology that included control activities, such as (1) the development of performance measures, (2) the development of a simulation and testing environment, (3) the development and implementation of a structured plan for continual rules enhancement, and (4) an evaluation and determination of the effectiveness of the ATS targeting rules, several of which reinforced the recommendations we made in our 2004 report. CBP issued a detailed plan, with projected delivery dates, for responding to the recommendations made in the contractor's final report. However, about half of these dates have not been met. For example, CBP projected that it would have its testing and simulation environment in place by September 30, 2005. Although CBP has been working on this effort, the environment has not yet been implemented. As previously discussed, CBP officials said that they cannot provide a current estimate of when this simulation and testing environment will be fully operational. CBP strives to refine ATS to include intelligence information it acquires and feedback it receives from its targeting officers at the seaports, but it is not able to systematically adjust ATS for inspection results. CBP does not have a comprehensive, integrated system in place to report details on security inspections nationwide that will allow management to analyze those inspections and refine ATS.
CBP officials said that they are developing a system that will allow them to do so but did not know when it will be fully operational. CBP officials cautioned that an inspection that does not identify any contraband or a weapon of mass destruction or its components may not necessarily indicate that a particular rule is not operating as intended. They noted that terrorist incidents may happen infrequently, and the rule therefore might operate only when weapons, materials, or other dangerous contraband is actually shipped. However, without analyzing and using security inspection results to adjust ATS, CBP is limited in refining ATS, a fact that could hinder the effectiveness of CBP's overall targeting strategy. CBP adjusts ATS's rules and weights for targeting cargo containers for inspection in response to intelligence received on an ongoing basis. CBP's Office of Intelligence (OINT) is responsible for acquiring, reviewing, analyzing, and disseminating intelligence. OINT officials told us they receive information from the intelligence community, which includes federal agencies such as the Central Intelligence Agency and the Federal Bureau of Investigation. According to OINT officials, OINT disseminates information to CBP's offices at the seaports to, among other things, support these offices' targeting efforts related to cargo containers. For example, the targeting officers may use information provided by OINT to search ATS for information about shipments and containers. OINT officials said they also disseminate information to CBP's senior management to inform them about risks associated with cargo containers. CBP uses intelligence information to refine its targeting of cargo containers for inspection by incorporating the intelligence information into ATS to readily identify containers whose manifest information may match or be similar to data contained in the intelligence information.
CBP documentation and our observations showed that CBP headquarters personnel incorporate intelligence information into ATS by adjusting ATS’s existing rules and weights and creating new rules and weights that result in a higher risk score being assigned to a container whose manifest information may match or be similar to data contained in the intelligence information. CBP officers can also conduct queries or create lookouts in ATS that will search all manifest data in the system to identify those containers whose manifest information may match or be similar to data contained in the intelligence information. Once ATS identifies these containers, CBP officers are to then designate these containers for inspection. When CBP receives credible intelligence information that requires immediate action, CBP officials also report that they can initiate a special operation to address specific concerns identified in the intelligence data. CBP officials at the six seaports we visited reported that they sometimes receive intelligence information from local sources such as state and local law enforcement. Officials at five of these seaports reported that they will use such information to help them make decisions regarding targeting efforts. Additionally, officials at five of the six seaports we visited said that if the information they receive has national implications, they will notify CBP headquarters personnel, who will make a determination regarding potential adjustments to ATS. In the late summer of 2005, CBP headquarters initiated a process to formally track its targeting officers’ suggestions to enhance ATS for targeting cargo containers for inspection. Targeting officers at all six seaports we have visited are aware of the process for providing suggestions to CBP headquarters. According to documentation maintained by headquarters, CBP officers at the seaports have provided few suggestions to date. 
CBP headquarters officials said that although they have received few suggestions for modifying ATS, they do not believe this is an indication of ATS’s effectiveness. These officials stated that overall the feedback they have received from CBP targeting officers at the seaports related to the operation and usefulness of ATS has been positive. We reviewed the report CBP uses to track these suggestions and found that since it was established, CBP headquarters has received 20 suggestions for enhancing the ATS component responsible for targeting oceangoing cargo containers for inspection. Some of these suggestions relate to modifying ATS’s rules, while others focused on other aspects of ATS such as enhancing the organization and presentation of ATS screens by changing the size of an icon and the fonts or text used. CBP is not using inspection results to systematically adjust ATS for targeting cargo containers for inspection because CBP does not yet have a comprehensive, integrated system in place that can report sufficient details for analyzing inspection results. CBP officials said that although they can analyze inspection results on a case-by-case basis to identify opportunities to refine ATS, such as when an inspection results in a seizure of some type of contraband, they currently do not have a reporting mechanism in place that will allow them to view inspection results nationwide to identify patterns for systematically adjusting ATS. CBP is developing the Cargo Enforcement Reporting Tracking System (CERTS) to document, among other things, all cargo examinations so that documentation substantiating the examinations will be available for analysis by management to adjust ATS. CBP officials said they will begin testing CERTS in the spring of 2006. CBP officials told us that once testing of CERTS is complete, they will be in a better position to estimate when CERTS can be fully implemented. 
CBP officials cautioned that an inspection that does not identify any contraband or a weapon of mass destruction or its components may not necessarily indicate that a particular rule is not operating as intended. They noted that terrorist incidents may happen infrequently, and a rule therefore might yield a finding only when weapons, materials, or other dangerous contraband is actually shipped. However, without using inspection results to adjust ATS, CBP may not be targeting and inspecting containers with the highest risk of containing smuggled weapons of mass destruction.

CBP has implemented a testing and certification process for its officers who complete the Sea Cargo Targeting Course that should provide better assurance of effective targeting practices. CBP has also made a good faith effort to address longshoremen’s safety concerns regarding radiation emitted by nonintrusive inspection equipment. Nevertheless, it has not been able to persuade one longshoremen’s union to permit changes in the procedure for staging containers to increase inspection efficiency. In our 2004 report, we recommended that CBP establish a testing and certification process for CBP staff who complete the national targeting training to provide reasonable assurance that they have sufficient expertise to perform targeting work. CBP has implemented such a testing and certification process. CBP conducted two evaluations that assessed its targeting training program—a job performance assessment and a job task analysis. With the results of these evaluations, CBP concluded that a certification component should be added to the training program and that the Sea Cargo Targeting Training course content should remain unchanged. CBP officials then updated the course materials to incorporate the certification component. In October 2004, CBP began certifying officers who successfully completed the Sea Cargo Targeting Training course.
Since the establishment of the testing and certification component for the Sea Cargo Targeting Training course, CBP data indicate that it has trained and certified 278 of its officers responsible for targeting cargo as of March 24, 2006. While CBP has conducted a job performance assessment prior to the incorporation of a certification program for Sea Cargo Targeting Training, it has not yet formally assessed the impact that revised training and certification has had on officers’ targeting of oceangoing cargo containers. However, a CBP official said that CBP has recently initiated planning efforts to begin such an evaluation and expects to complete the evaluation in May 2006. Nevertheless, supervisory officers from five of the six CBP offices at the seaports we visited said that the mandatory training and certification program has been beneficial. These supervisory officers told us that the training and certification improves the confidence of targeters, provides the ability for officers to improve their targeting productivity, and provides an opportunity for officers to gain a broader perspective into the targeting environment by examining passenger and outbound targeting. In our 2004 report, we discussed concerns that longshoremen had regarding the safety of driving cargo containers through the gamma ray imaging system, one type of nonintrusive inspection equipment used to examine containers to detect potential contraband or weapons of mass destruction. Because this equipment emits radiation as it takes images of the inside of cargo containers, some longshoremen expressed concerns about the health effects of this radiation. As a result of these safety concerns, the longshoremen’s union representing West Coast longshoremen established a policy that prevents its members from driving containers through the gamma ray imaging system. In response, CBP altered its procedures at ports affected by this policy. 
For example, at some West Coast ports, CBP allows longshoremen to stage cargo containers away from the dock, in rows at port terminals, so that CBP officers can then drive the gamma ray imaging system over a group of containers. However, this procedure can be space-intensive and time-consuming compared to the procedure used at East and Gulf Coast ports, whereby the gamma ray imaging system machinery is operated by a CBP officer and parked in place while longshoremen drive the cargo containers through the machinery. At other West Coast ports, the longshoremen get out of the trucks after transporting the cargo containers so that CBP officials can drive the gamma ray imaging system over the cargo containers. This is also time-consuming compared to the procedure used at the East and Gulf Coast ports. In response to our recommendation that CBP work with longshoremen to address their safety concerns, CBP engaged in two efforts: (1) establishing CBP’s radiation threshold in accordance with the Nuclear Regulatory Commission’s (NRC) federal guidelines for public radiation exposure and advertising this threshold to longshoremen through the unions, and (2) working with longshoremen’s unions and other maritime organizations to develop public radiation tests on nonintrusive inspection equipment. Officials from the West Coast union that prohibits its members from driving through the gamma ray imaging system told us that the union is satisfied with CBP efforts to operate the gamma ray imaging system in an alternative format, to comply with the union’s policy of receiving no amount of man-made radiation. Despite CBP efforts to assure this union that the amount of radiation emitted by the gamma ray imaging system is within safe levels, a union representative told us that CBP will not convince the union to change its policy unless it eliminates radiation emission from inspection equipment.

In closing, ATS is an integral part of CBP’s layered security strategy.
A well-functioning ATS is crucial to the effective screening of cargo containers at domestic and CSI foreign ports, as well as cargo shipped by the trade community participating in C-TPAT. While CBP is working to make improvements to ATS, our ongoing work indicates that it is not yet in a position to gauge the effectiveness of ATS. We are continuing to review CBP’s plans and actions to improve ATS and will report to this subcommittee and the other requesters later this year.

Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact me at 202-512-8777 or at stanar@gao.gov. Debra Sebastian, Assistant Director; Chan-My J. Battcher; Lisa L. Berardi; Wayne A. Ekblad; and Jessica A. Evans made key contributions to this report. Additional assistance was provided by Frances Cook, Kathryn E. Godfrey, Nancy A. Hess, Arthur L. James, Jr., Stanley J. Kostyla, and Vanessa R. Taylor.

To address each of our objectives, we met with U.S. Customs and Border Protection (CBP) officials in headquarters and at six seaports: Baltimore, Charleston, Los Angeles-Long Beach, Miami, New York-Newark, and Savannah. These seaports were selected based on the number of cargo containers arriving at each seaport and their geographic dispersion as reported by the U.S. Department of Transportation. At these locations, we also observed targeting and inspection operations. Because we did not select a random probability sample of ports to visit, the results from these visits cannot be generalized to ports nationwide. We also spoke with CBP’s contractor responsible for conducting CBP’s peer review and with longshoremen’s union representatives. To evaluate how CBP provides assurance that the Automated Targeting System (ATS) targets the highest-risk oceangoing cargo containers for inspection, we reviewed CBP documentation and prior GAO work on performance measures.
Additionally, we reviewed CBP’s peer review report. To gain an understanding of CBP’s random sampling program, we met with CBP officials responsible for this program and reviewed and analyzed CBP documentation, including procedures for examining the randomly selected shipments and documenting the results of the inspections completed for those shipments. We did not independently validate the reliability of CBP’s targeting results. To assess how CBP adjusts ATS to respond to findings that occur during the course of its operational activities, we met with CBP officials responsible for gathering and disseminating intelligence and for incorporating intelligence into CBP’s targeting operations. Further, we reviewed CBP policies and procedures on intelligence gathering and dissemination as well as intelligence received and resulting changes to ATS rules and weights. We did not assess the quality of intelligence received or the appropriateness of adjusted rules and weights. To determine how targeting officers’ feedback and inspection results are used to adjust ATS rules and weights, we met with CBP officials responsible for collecting and maintaining data on suggestions provided by targeting officers and reviewed CBP data on the suggestions received over a 7-month period. Regarding inspection results, we reviewed CBP’s policies and procedures for documenting inspection results. Additionally, we reviewed CBP’s manuals identifying the specific details of a completed inspection and observed officers entering inspection results into the ATS findings module during our site visits. Further, during these visits, we discussed how CBP offices at the seaports may use inspection results to enhance their targeting efforts. Last, we met with CBP officials and reviewed CBP documentation on its current and planned findings module.
To determine the status of recommendations from GAO’s February 2004 report to (1) establish a testing and certification process for CBP staff who complete the national targeting training to provide assurance that they have sufficient expertise to perform targeting work and (2) work with longshoremen’s unions to address fully their safety concerns so that the nonintrusive inspection equipment can be used to conduct inspections efficiently and safely, we reviewed and analyzed data on the number of officers trained and certified in sea cargo targeting. We also reviewed CBP’s Sea Cargo Training Manual as well as CBP evaluations assessing the quality of its Sea Cargo Training course. We did not assess the quality of this training. Regarding longshoremen’s union concerns, we reviewed scientific literature related to radiation safety and the Nuclear Regulatory Commission guidelines on radiation levels. We also spoke with longshoremen’s representatives to discuss whether CBP had addressed their concerns since we issued our 2004 report. Last, we met with CBP’s Radiation Safety Officer to gain a further understanding of the potential risks associated with CBP’s inspection equipment and the actions he took to address longshoremen’s concerns. We did not assess the appropriateness of radiation safety levels used by CBP.

We conducted our work from October 2005 through March 2006 in accordance with generally accepted government auditing standards.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. Customs and Border Protection's (CBP) Automated Targeting System (ATS)--a computerized model that CBP officers use as a decision support tool to help them target oceangoing cargo containers for inspection--is part of CBP's layered approach to securing oceangoing cargo. GAO reported in February 2004 on challenges CBP faced in targeting oceangoing cargo containers for inspection and testified before Congress in March 2004 about the findings in that report. The report and testimony outlined recommendations aimed at (1) better incorporating recognized modeling practices into CBP's targeting strategy, (2) periodically adjusting the targeting strategy to respond to findings that occur during the course of its operation, and (3) improving implementation of the targeting strategy. This statement for the record discusses preliminary observations from GAO's ongoing work related to ATS and GAO's 2004 recommendations addressing the following questions: (1) What controls does CBP have in place to provide reasonable assurance that ATS is effective at targeting oceangoing cargo containers with the highest risk of smuggled weapons of mass destruction? (2) How does CBP systematically analyze security inspection results and incorporate them into ATS? and (3) What steps has CBP taken to better implement the rest of its targeting strategy at the seaports? CBP has not yet put key controls in place to provide reasonable assurance that ATS is effective at targeting oceangoing cargo containers with the highest risk of containing smuggled weapons of mass destruction. 
To provide assurance that ATS targets the highest-risk cargo containers as intended, CBP is (1) working to develop and implement performance measures related to the targeting of cargo containers, (2) planning to compare the results of its random inspections with its ATS inspection results, (3) working to develop and implement a testing and simulation environment, and (4) addressing recommendations contained in a 2005 peer review of ATS. CBP expects to begin using performance measures in June 2006 and enter the final phase of software development for its testing and simulation environment at the same time. However, to date, none of these four initiatives has been fully implemented. Thus, CBP does not yet have key internal controls in place to be reasonably confident that ATS provides the best information for allocating resources to target and inspect the highest-risk containers without overlooking containers that pose a threat to the nation.

CBP does not yet have a comprehensive, integrated system in place to analyze security inspection results and incorporate them into ATS. CBP currently adjusts ATS based on intelligence information it receives and has initiated a process to track suggestions submitted by CBP targeting officers at the seaports for modifying ATS. However, CBP has not yet implemented plans to refine ATS based on findings from routine security inspections. Without a more comprehensive feedback system, CBP is limited in refining ATS, a fact that could hinder the overall effectiveness of the targeting strategy.

CBP has taken steps to improve implementation of the targeting strategy at the seaports. It has implemented a testing and certification process for its officers who complete the Sea Cargo Targeting Course that should provide better assurance of effective targeting practices.
CBP has also made a good faith effort to address longshoremen's safety concerns regarding radiation emitted by nonintrusive inspection equipment by taking actions such as working with longshoremen's unions and other maritime organizations to develop public radiation tests on the nonintrusive inspection equipment. Nevertheless, CBP has not been able to persuade one longshoremen's union to permit changes in the procedure for staging containers to increase inspection efficiency at some West Coast seaports where the union's members work.
Air service in the United States is highly concentrated, with 88 percent of all passenger boardings at the 62 large- or medium-hub airports (see fig. 1). Small airports—small-hub, nonhub, and commercial-service nonprimary airports—each accounted for less than 0.25 percent of all annual passenger boardings, or less than 1.8 million total boardings, in 2012. Many small communities across the United States have access to the more than 450 small airports with scheduled passenger service, provided mostly by regional airlines that are under contract with mainline network airlines, like Delta Air Lines or United Airlines. The airport categories in figure 1 also determine the allocation of Airport Improvement Program (AIP) grants for airport capital improvements. FAA awarded nearly $3 billion in grants to all airports in fiscal year 2013, for safety, capacity, and environmental capital improvements. The grants offset the fees that airports charge users, so the grants are critical for small airports hoping to retain or attract airport users. For example, any airport with at least 10,000 passengers is assured at least $1 million in annual grant funding. The EAS program has historically provided the most direct support to small community air service. Anticipating that airlines would focus their resources on generally more profitable, high-density routes, Congress established the EAS program as part of the Airline Deregulation Act of 1978. Under the EAS program, if an airline cannot provide air service to eligible communities without incurring a loss, DOT provides the airline a subsidy to serve those communities. The program was initially enacted for 10 years and then extended for another 10 years; in 1996, the 10-year time limit was removed. Congress has, over time, revised eligibility requirements, such as maximum subsidy amounts per passenger, and operating requirements, such as providing service with two-engine, two-pilot planes.
The program now provides subsidies to airlines to serve small airports that (1) are at least 70 driving miles from the nearest medium- or large-hub airport or (2) require a per-passenger EAS subsidy of less than $200, unless the community is more than 210 miles from the nearest medium- or large-hub airport. The amount of subsidies varies by location. Operating airlines receiving the subsidies must provide direct service to a nearby medium- or large-hub airport so that passengers can connect to the national air transportation network. Our discussion of EAS in this testimony does not include communities in Alaska receiving EAS-subsidized air service because the requirements for communities in Alaska are different and are not representative of the program in the rest of the country.

Congress also established SCASDP as a pilot program in 2000 in the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21), to help small communities enhance their air service. AIR-21 authorized the program for fiscal years 2002 and 2003, and subsequent legislation reauthorized the program through fiscal year 2008 and eliminated the “pilot” status of the program. Further, the FAA Modernization and Reform Act of 2012 reauthorized funding for SCASDP through fiscal year 2015. The law establishing SCASDP allows DOT considerable flexibility in implementing the program and selecting projects to be funded. Grant funds can be used to cover various projects reasonably related to improving air service to the community, such as any new advertising or promotional activities, or for studies to improve air service and traffic. The law defines basic eligibility criteria and statutory priority factors, but meeting a given number of priority factors does not automatically mean DOT will select a project.
SCASDP grants may be made to single communities or a consortium of communities, although no more than 40 grants may be awarded in a given year and no more than four grants each year may be given in the same state.

Air service to small airports, as measured by the number of flights and seats available, has mostly declined since 2007, but so has service to airports of all sizes. Small airports generally serve small communities. As figure 2 shows, medium-hub, small-hub, and nonhub airports saw the largest net declines proportionally in flights and available seats since 2007, and the largest airports experienced the smallest declines. The smallest airports—commercial service nonprimary airports—experienced a slight increase in flights but a decline in available seats. Further, according to a recent Massachusetts Institute of Technology (MIT) study, 23 airports in small communities lost all service between 2007 and 2012. Airports receiving EAS-subsidized air service saw about a 20-percent increase in flights and about an 8-percent increase in available seats since 2007 as some regional airlines serving EAS communities switched to smaller aircraft. The reduced capacity for airline service in the United States since 2007 is attributable to a combination of factors, including higher costs, industry consolidation, and the last recession, which reduced demand. These and other factors also have had an effect on air service for small communities. First, the price of jet fuel more than quadrupled from 2002 through 2012 in nominal terms, including a temporary spike in which the price doubled over the 2007–2008 period. As a result of increased fuel prices, fuel costs have grown to become airlines’ single largest expense, at nearly 30 percent of airline operating costs in 2012.
According to a study by MIT, regional aircraft—those with between 19 and 100 seats, mostly used to provide air service to small communities—are 40 to 60 percent less fuel efficient than the larger aircraft used by their mainline counterparts—those with more than 100 seats. According to the study, fuel efficiency differences can be explained largely by differences in aircraft operations, not technology, as the operating costs per passenger for regional aircraft are higher than for mainline aircraft because regional aircraft operate at lower load factors and fly fewer miles over which to spread fixed costs. Second, many small communities have lost population over the last 30 years. In previous work, we have found that population movement has decreased demand for air service to small communities. Geographic areas, especially in the Midwest and Great Plains states, lost population between 1980 and 2010, as illustrated in figure 3 below. As a result, certain areas of the country are less densely populated than they were 35 years ago when Congress initiated the EAS program. For small communities located close to larger cities and larger airports, a lack of local demand can be exacerbated by passengers choosing to drive to airports in larger cities to access better service and lower fares. Third, the effect of industry consolidation on the level of service to small communities is reflected in “capacity purchase agreements”—agreements between mainline airlines and their regional partners. Under these agreements, a mainline airline pays the regional airline contractually agreed-upon fees for operating certain flight schedules. In recent years, according to a 2013 MIT study, mainline airlines have shifted a larger percentage of their small community service to regional airlines.
However, according to another 2013 MIT study, these mainline airlines have been reducing the total amount of capacity for which they contract by eliminating previous point-to-point service between nearby smaller airports, thus reducing the level and frequency of service provided.

Two federal programs continue to support air service to small communities but also face some challenges. EAS provides subsidies to operating airlines that provide air service to eligible communities in order to maintain the service, and SCASDP provides competitive grants to small communities to attract and support local air service. Subsidies provided to airlines serving EAS airports continue to increase. In 2009, we found that EAS subsidies had increased over time. Specifically, the average annual subsidy that DOT provided for EAS service per community for U.S. states, excluding Alaska, almost doubled from $1 million in 2002 to $1.9 million in 2013. In addition, the appropriations Congress made available to EAS increased from about $102 million in fiscal year 2003 to about $232 million in fiscal year 2013 (see table 1 below). According to DOT, the appropriation for the EAS program for fiscal year 2014 is $246 million. However, we have found that aircraft serving airports that provide EAS service were far less full than aircraft serving airports that did not receive such assistance. In 2009, we found that planes serving airports with EAS service in 2008 were only about 37 percent full versus an industry average of about 80 percent. This was due, in part, to EAS-subsidized service not having the destinations, frequency, or low fares that passengers prefer. Further, according to DOT officials, the population around some of the very small airports is too low to result in very high passenger loads.
Since then, the load factor for these flights—the percentage of available seats filled by paying passengers—increased somewhat and was roughly 49 percent versus the industry average of 83 percent in 2013. This may be due, in part, to more regional airlines serving these EAS airports with smaller aircraft, as a result of changes in the EAS program that we recommended in 2009. The number of EAS communities being served by airlines with aircraft of fewer than 15 seats doubled from 2009 through 2013. In 2009, 16 EAS communities were served using 9-seat aircraft, but 32 EAS communities were served with such aircraft in 2013. Great Lakes is one of the few remaining regional airlines that flies 19-seat turboprops, while other small regional airlines—such as Cape Air, SeaPort, and Air Choice One—fly smaller 9-seat aircraft that are not subject to some FAA rules for operating scheduled service flights.

Small-hub and smaller airports are eligible for SCASDP grants provided the airport is not receiving sufficient air service or has unreasonably high airfares. Congress has provided funding for SCASDP since fiscal year 2002—ranging from a high of $20 million for fiscal years 2002 through 2005 to a low of $6 million in fiscal years 2010 through 2013. In fiscal year 2013, DOT awarded 25 grants totaling almost $11.4 million to airports in 22 states (see table 2). While funding for SCASDP is significantly less than funding for the EAS program, some small community airports depend on SCASDP grant awards as a means to stimulate economic development and attract business to the area surrounding the airport through enhanced air service. According to DOT, the appropriation for SCASDP for fiscal year 2014 is $5 million. We and others who have examined SCASDP have observed that the grant program has had limited effectiveness in helping small communities retain air service.
In 2005, we found that initial SCASDP projects achieved mixed results. Specifically, about half of the airports that reported air service improvements were self-sustaining after their grant had been completed. At that time, we recommended that DOT evaluate the program again before the program was reauthorized. In response to our recommendation, the DOT Assistant Secretary for Aviation and International Affairs requested the DOT Office of Inspector General (OIG) to review the program’s effectiveness in improving air service to small communities. The review included 40 grants awarded between 2002 and 2006 (excluding feasibility studies) that had been closed for 12 months or more as of March 31, 2007, and determined whether the projects could sustain themselves without continued federal financial support. The OIG found that 70 percent of the grants in the review failed to fully achieve their objectives; specifically, 50 percent of the grants were unable to achieve any of their articulated grant objectives or were unable to sustain grant benefits beyond the grant completion, and 20 percent were either partially able to obtain or achieve all of their grant objectives or were voluntarily terminated. The remaining 30 percent of the grants were successful in achieving their grant objectives and sustaining the resulting benefits for at least 12 months. The OIG made recommendations to improve the grant award process by (1) giving priority to communities with better developed grant applications, (2) requiring communities requesting non-marketing grants to use a part of the funding awarded to them to implement a marketing program, and (3) evaluating the impact of the “same project limitation” on program effectiveness and seeking legislative changes, if necessary. According to the OIG’s report, DOT concurred with each of the recommendations and took the appropriate actions to implement them.
Most recently, an academic study conducted by an MIT researcher evaluated 115 SCASDP grants from 2006 through 2011 and found that less than 40 percent of the grants met their primary objectives. On the other hand, SCASDP grants have been used to fund some successful projects. We found in 2005 and 2007 that SCASDP grantees pursued a variety of goals and strategies for supporting air service, and some of the grants resulted in successfully meeting their intended purposes. These successes include grantees that identified a variety of project goals and strategies to improve air service to their community, including (1) adding flights, airlines, and destinations; (2) lowering fares; (3) upgrading the aircraft serving the community; (4) obtaining better data for planning and marketing air service; (5) increasing enplanements; and (6) curbing the loss of passengers to other airports. For example, our 2005 report found that 19 of the 23 completed grants resulted in some kind of improvement in service, either in terms of an added carrier, destination, or flights, or a change in the type of aircraft. In 2007, we also found that a review of 59 grantees' final reports for completed projects indicated that 48 of these increased enplanements as a result of their SCASDP grant. In addition, the 2008 DOT OIG report found that grants targeting the introduction of new service rather than expanding existing service were more successful and noted that grants targeting existing service may be less likely to succeed because mature markets may provide less of a growth opportunity than well-selected new markets or may reflect attempts by communities to resuscitate a failing service. Lastly, the recent MIT study highlighted three communities—Appleton, Wisconsin; Bozeman, Montana; and Manhattan, Kansas—that were able to effectively use the grants to expand service in their communities.
In addition, DOT program officials we interviewed highlighted other benefits that have resulted from SCASDP grants that they said extend beyond the completion dates of the grants. For example, the officials stated that one recipient of a 2011 grant recently reported that simply obtaining the federal grant allowed the community to obtain a line of credit and prove to an airline that the grantee was able to support sustained and profitable service, even though the federal grant funds were not expended. In another example, the officials stated that one recipient of a 2002 grant reported in 2011 that while unable to establish air service prior to receiving its grant, the grant enabled the community’s airport to establish and sustain air service to the area and has resulted in substantial economic benefits for the community. In addition to the federal programs previously discussed, other legislative and regulatory policies could affect the provision of air service to small communities. Perimeter rules. Airlines operating out of Reagan National, LaGuardia, and Dallas Love Field Airports are restricted in the distance that they can travel. The purposes of these rules vary but are intended, in part, to help encourage air service to smaller communities closer to the airport. However, the restrictions at Dallas Love Field will end later this year, and the number of exemptions to the perimeter rule at Reagan National has increased. Safety regulations. A new federal law that increased the qualification requirements for pilots to be hired at U.S. airlines has caused some concerns related to a potential future shortage of qualified pilots. In July 2013, FAA, as required by law, issued a new pilot qualification rule that increased the requirements for first officers who can fly for U.S. 
passenger and cargo airlines and requires that first officers now hold an airline transport pilot certificate, just as captains must, which requires, among other things, a minimum of 1,500 hours of total flight time as a pilot. Regional airlines—most likely to provide air service to small communities—have been disproportionately affected by the new rule because, prior to the new rule, more of their pilots did not meet the new minimum qualifications compared to their larger, mainline airline counterparts. Earlier this year, we found that 11 of the 12 regional airlines that were interviewed reported difficulties finding sufficient numbers of qualified pilots over the past year. Furthermore, five of these regional airlines reported to us that they were limiting service to some smaller communities because they did not have pilots available to provide that service. For instance, Great Lakes Airlines recently canceled service to 10 small communities reportedly due to a lack of available pilots. Similarly, Silver Airways provided DOT with the required notice of its intent to discontinue scheduled service to five small communities reportedly for the same reason. However, given that the congressional mandate to increase pilot qualifications for airline pilots only recently went into effect, some market adjustments are to be expected, and such adjustments could continue to affect air service in smaller community markets. In July 2009, we concluded that a multimodal approach—one that relies on, for example, bus service to larger airports or air taxi service to connect communities—is an alternative to providing scheduled air-service connectivity to small communities. For some communities that receive EAS subsidies—for example, those that have limited demand for the service due to proximity to other airports or limited population—other transportation modes might be more cost effective and practical than these subsidies.
This approach may be of use to small communities that have not been able to generate sufficient demand to justify the costs for provision of air service, resulting in rising per-passenger subsidies. When potentially cost-effective alternatives, such as bus service to other airports, are not used, the costs of subsidies may be higher than necessary to link these communities to the nation's passenger aviation system. In 2009, we recommended that DOT assess whether other forms of air service or other modes of transportation might better serve some communities and at less cost. While DOT did not conduct such an assessment, the department took action to implement the options we identified in the report and achieved the intent of our recommendation. Further, the Future of Aviation Advisory Committee—a committee that provides information, advice, and recommendations to the Secretary of Transportation on U.S. aviation industry competitiveness and capability to address evolving transportation needs—recommended in 2011 that a task force be established to examine the EAS program and identify rural multimodal service opportunities for EAS-eligible communities, among other things. Although no provisions have been enacted into law to specifically promote intermodal alternatives to the EAS program, DOT (1) convened a working group in 2011 to study this area and (2) added new language to its SCASDP 2012 Request for Proposals, language that was carried forward into the 2013 request, to clarify that intermodal solutions to air service—for example, cost-effective bus service—are eligible for grants. In 2009, we also suggested that Congress consider re-examining the EAS program's objectives and statutory requirements to include the possibility of assessing multimodal solutions for communities. Considering alternatives to the current EAS program, such as multimodal transportation, may help Congress identify opportunities to limit the financial strain on the EAS program.
under $100,000. However, according to DOT, this example would be considered a type of in-kind contribution, which is discussed below. Marketing and advertising services—agreements whereby airports or communities purchase marketing or advertising on behalf of the airline's new service, designed to build awareness of the new service and develop demand so that the service can become self-sustaining. Many small airports are located in multi-airport regions in which passengers will drive long distances to nearby airports to save on price, so such advertising is of increasing importance to attract passengers to fly from their local airport. For example, according to the 2009 TRB report, Huntsville, Alabama, used a SCASDP grant to support its airport's "Huntsville Hot Ticket" program, which sent e-mail fare alerts to customers when fare specials were announced and allowed customers to book tickets directly on the airport's flyhuntsville.com website. Non-financial (in-kind) contributions—products, goods, or services that otherwise might have to be paid for but that third-party providers can donate instead. For example, local advertising firms may provide billboards or local media may provide newspaper or TV coverage. Each of these incentives has certain advantages and associated risks or disadvantages, but more airports in smaller communities tend to use revenue guarantees, likely because those communities recognize that they need to share in the airlines' financial risk of serving smaller markets. However, incentives are rarely undertaken alone; for example, revenue guarantees are usually combined with other forms of incentives, such as cost or fee waivers. In addition, given that the service may fail, the use of federal funds to support minimum revenue guarantees effectively requires the federal government to share this potential risk.
In its 2008 review of SCASDP, the DOT OIG reported that airlines operating in small communities typically have limited resources to invest in marketing designed to stimulate demand, and using funds for marketing programs in support of other incentive programs—such as revenue guarantees or cost subsidies—can stimulate demand by increasing awareness of airport services and mitigate "leakage" of passengers to surrounding airports. Chairman LoBiondo, Ranking Member Larsen, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or dillinghamg@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Paul Aussendorf, Assistant Director; Cathy Colwell, Assistant Director; Vashun Cole; Bonnie Pignatiello Leer; Joshua Ormond; and Amy Rosewarne. The following individuals made key contributions to related prior GAO work: Amy Abramowitz, Dave Hooper, John Mingus, and Sara Ann Moessbauer. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Establishing and retaining reliable air service to small communities has been a challenge for decades. Communities seek access to air transportation services as a driver for attracting investment and generating employment. To incentivize service, Congress established two programs to help support air service to small communities—EAS and SCASDP. Airports are categorized by DOT's Federal Aviation Administration and described in terms of "hub" size based on the number of passengers served annually. Airports range from large hubs with at least 7.3 million passengers in 2012 to nonprimary airports with fewer than 10,000 passengers. Airports receiving subsidized EAS service are either nonhub or nonprimary, and SCASDP airports are small hub or smaller. This testimony discusses (1) the airline industry factors affecting air service to small communities, (2) the federal programs and policies that support air service to small communities, and (3) other options for improving access to air service for these communities. The testimony is based on previous GAO reports issued from 2003 through 2014; analysis of industry data for years 2007 through 2013; and selected updates on the EAS and SCASDP programs. To conduct these updates, GAO reviewed program documentation and interviewed DOT officials and industry representatives. Air service to small communities has declined since 2007 due, in part, to higher fuel costs and declining population, and, for some communities, compounded by more attractive service (i.e., larger airports in larger cities) within driving distance. In fact, airports of all sizes have lost capacity in terms of the number of available seats and, to a large extent, the number of flights as well. However, medium-hub and small-hub airports have proportionally lost more service than large-hub or nonhub airports (see figure). The two primary programs designed to help small communities retain air service, both administered by the Department of Transportation (DOT), face challenges.
The Essential Air Service (EAS) program, which received about $232 million in 2013, provided subsidies to airlines that served 117 eligible non-Alaskan communities in 2013. For the most part, only airports in eligible communities that received EAS-subsidized service have experienced an increased number of flights since 2007. However, the service may not always be the most cost-effective option for connecting people to the national transportation network, and the total and per-community EAS subsidies have grown since 2008. Legislation to control costs was recently enacted that limited access to EAS, for example, by changing eligibility requirements. The Small Community Air Service Development Program (SCASDP) is a grant program to help small communities enhance air service at small-hub or smaller airports. DOT can award no more than 40 grants a year; thus, SCASDP assists fewer communities than does EAS. Further, unlike EAS, funding for SCASDP—$6 million in 2013—has decreased since the program was created in 2002. Past reviews of SCASDP's effectiveness have found mixed success, with about half or less of the grants achieving their goals. Multimodal and community-based approaches can be used to help small communities connect to the nation's transportation network. Multimodal solutions, such as bus access to larger airports or air taxi service, could be more cost-effective than current programs. In addition, some communities have had success with attracting air service through methods such as financial incentives and marketing support.
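The scale of the per-community subsidies discussed above can be illustrated with back-of-the-envelope arithmetic using only the figures cited in this statement ($232 million in 2013 funding; 117 subsidized non-Alaskan communities). The resulting average is a rough illustration, not a reported DOT figure:

```python
# Rough average EAS subsidy per community, using only the figures cited above.
total_eas_funding = 232_000_000   # approximate fiscal year 2013 funding
subsidized_communities = 117      # eligible non-Alaskan communities served

average_subsidy = total_eas_funding / subsidized_communities
print(f"${average_subsidy:,.0f} per community")  # roughly $1,982,906
```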
CMS is responsible for overseeing Medicaid, and state Medicaid agencies are responsible for administering the program. Although each state is subject to federal requirements, it develops its own Medicaid administrative structure for carrying out the program, including its approach to program integrity. Within broad federal guidelines, each state establishes eligibility standards and enrolls eligible individuals; determines the type, amount, duration, and scope of covered services; sets payment rates for covered services; establishes standards for providers and managed care plans; and ensures that state and federal funds are not spent improperly or diverted by fraudulent providers. However, state Medicaid programs do not work in isolation on program integrity; instead, there are a large number of federal agencies, other state entities, and contractors with which states must coordinate. Generally, each state's Medicaid program integrity unit uses its own data models, data warehouses, and approach to analysis. States often augment their in-house capabilities by contracting with companies that specialize in Medicaid claims and utilization reviews. However, as program administrators, states have primary responsibility for conducting program integrity activities that address provider enrollment, claims review, and case referrals. Specifically, CMS expects states to
• collect and verify basic information on providers, including whether the providers meet state licensure requirements and are not prohibited from participating in federal health care programs;
• maintain a mechanized claims processing and information system known as the Medicaid Management Information System (MMIS), which can be used to make payments and to verify the accuracy of claims, the correct use of payment codes, and a beneficiary's Medicaid eligibility;
• operate a Surveillance and Utilization Review Subsystem (SURS) in conjunction with the MMIS that is intended to develop statistical profiles on services, providers, and beneficiaries in order to identify potential improper payments. For example, SURS may apply automatic post-payment screens to Medicaid claims in order to identify aberrant billing patterns;
• submit all processed Medicaid claims electronically to CMS's Medicaid Statistical Information System (MSIS). MSIS does not contain billing information, such as the referring provider's identification number or beneficiary's name, because it is a subset of the claims data submitted by states. States provide data on a quarterly basis, and CMS uses the data to (1) analyze Medicaid program characteristics and utilization for services covered by state Medicaid programs and (2) generate various public use reports on national Medicaid populations and expenditures; and
• refer suspected overpayments or overutilization cases to other units in the Medicaid agency for corrective action and refer potential fraud cases to other appropriate entities for investigation and prosecution.
Our reports and testimonies from 2001 through 2006 identified gaps in state program integrity activities and noted that the support provided by CMS to states was hampered by resource constraints. For example, in 2004, we reported that 15 of 47 states responding to our questionnaire did not affirm that they conducted data mining, defined as analysis of large data sets to identify unusual utilization patterns, which might indicate provider abuse. The DRA established the Medicaid Integrity Program to provide effective federal support and assistance to states to combat fraud, waste, and abuse. To implement the Medicaid Integrity Program, CMS created the Medicaid Integrity Group (MIG), which is now located within the agency's Center for Program Integrity.
The DRA also required CMS to hire contractors to review and audit provider claims and to educate providers on issues such as appropriate billing practices. The Medicaid Recovery Audit Contractor (RAC) program was established by PPACA. Each state must contract with a RAC, which is tasked with identifying and recovering Medicaid overpayments and identifying underpayments. Each state's RAC is required to be operational by January 1, 2012. Medicaid RACs will be paid on a contingency fee basis—up to 12.5 percent of any recovered overpayments—and states are required to establish incentive payments for the detection of underpayments. Figure 1 identifies the key federal and state entities responsible for Medicaid program integrity. Fraud detection and investigations often require more specialized skills than are required for the identification of improper payments because investigators must establish that an individual or entity intended to falsify a claim to achieve some gain. As a result, fraud is more difficult to prove than improper payments and requires the involvement of entities that can investigate and prosecute fraud cases. In 1977, Congress authorized federal matching funds for the establishment of independent state Medicaid Fraud Control Units (MFCU). MFCUs are responsible for investigating and prosecuting Medicaid fraud. In general, they are located in the offices of state Attorneys General. MFCUs can, in turn, refer some cases to federal agencies that have longstanding responsibility for combating fraud, waste, and abuse in Medicare and Medicaid—HHS's Office of Inspector General (HHS-OIG), the Federal Bureau of Investigation (FBI), and the Department of Justice. A key challenge CMS faces in implementing the statutorily required federal Medicaid Integrity Program is ensuring effective coordination to avoid duplicating state program integrity efforts.
CMS established the MIG in 2006, and it gradually hired staff and contractors to implement a set of core activities, including the (1) review and audit of Medicaid provider claims; (2) education of state program integrity officials and Medicaid providers; and (3) oversight of state program integrity activities and provision of assistance. Because states also routinely review and audit provider claims, the MIG recognized that coordination was key to avoiding duplication of effort. In 2011, the MIG reported that it was redesigning its national provider audit program to allow for greater coordination with states on data, policies, and audit measures. According to MIG data, overpayments identified by its review and audit contractors over the first 3 years of the national audit program were not commensurate with the contractors' costs. The DRA provided CMS with the resources to hire staff whose sole duties are to assist states in protecting the integrity of the Medicaid program. The MIG's core activities were implemented gradually from fiscal year 2006 to 2009. The DRA provided start-up funding of $5 million for fiscal year 2006, increasing to $50 million for each of the subsequent 2 fiscal years, and $75 million per year for fiscal year 2009 and beyond. One of the first activities initiated by the MIG in fiscal year 2007 was comprehensive program integrity reviews to assess the effectiveness of states' activities, which involved eight week-long onsite visits that year. One of the last activities to be implemented was the statutorily required National Provider Audit Program, in which MIG contractors review and audit Medicaid provider claims. In fiscal year 2005, we reported that CMS devoted 8.1 full-time equivalent staff years to support and oversee states' anti-fraud-and-abuse operations; by 2010, this had grown to 83 of the 100 DRA-authorized full-time equivalent staff years.
Table 1 describes six core MIG activities and the fiscal year in which those activities began. Figure 2 shows MIG expenditures by program category for fiscal year 2010. The Medicaid Integrity Institute accounted for about 2 percent of the MIG's fiscal year 2010 expenditures, while the National Provider Audit Program accounted for about half of expenditures. At the outset, the MIG recognized that effective coordination with internal and external stakeholders was essential to the success of the Medicaid Integrity Program. In a report issued prior to establishment of the program, we found that CMS had a disjointed organizational structure and lacked the strategic planning necessary to face the risks involved with the Medicaid program. We identified the need for CMS to develop a strategic plan in order to provide direction to the agency, its contractors, states, and its law enforcement partners. In designing and implementing the program, the MIG convened an advisory committee consisting of (1) state program integrity, Medicaid, and MFCU directors from 16 states; and (2) representatives of the FBI, HHS-OIG, and CMS regional offices. This committee provided planning input and strategic advice and identified key issues that the MIG needed to address, including the following:
• The MIG's efforts should support and complement states' Medicaid integrity efforts, not be redundant of existing auditing efforts.
• Program integrity activities of the MIG and other federal entities require coordination with states regarding auditing and data requests.
• The focus of state activities should be shifted from postpayment audits to prepayment prevention activities.
The advisory committee also highlighted the lack of state resources for staffing, technology, and training. CMS's July 2009 Comprehensive Medicaid Integrity Plan, the fourth such plan since 2006, stated that fostering collaboration with internal and external stakeholders of the Medicaid Integrity Program was a primary goal of the MIG.
In implementing more recent statutory requirements, CMS again stressed the need for effective coordination and collaboration. CMS’s commentary accompanying the final rule on the implementation of Medicaid RACs acknowledged the potential for duplication with states’ ongoing efforts to identify Medicaid overpayments. States have been responsible for the recovery of all identified overpayments, including those identified since fiscal year 2009 by the MIG’s audit contractors. The new requirement for states to contract with an independent Medicaid RAC introduces another auditor to identify and collect Medicaid overpayments. The Medicaid RAC program was modeled after a similar Medicare program, which was implemented in March 2009 after a 3-year demonstration. Because Medicare RACs are paid a fixed percentage of the dollar value of any improper payments identified, they generally focused on costly services such as inpatient hospital stays. Our prior work on Medicare RACs noted that the postpayment review activities of CMS’s other contractors would overlap less with the RACs’ audits if those activities focused on different Medicare services where improper payments were known to be high, such as home health. Because Medicaid RACs are not required to be operational until January 1, 2012, the extent to which states will structure their RAC programs to avoid duplication and complement their own provider review and audit activities remains to be seen. In its most recent annual report to the Congress, the MIG indicated that it was redesigning the National Provider Audit Program. According to the MIG, the National Provider Audit Program has not identified overpayments in the Medicaid program commensurate with the related contractor costs. About 50 percent of the MIG’s $75 million annual budget supports the activities of its review and audit contractors. From fiscal years 2009 through 2011, the MIG authorized 1,663 provider audits in 44 states. 
However, the MIG’s reported return on investment from these audits was negative. While its contractors identified $15.2 million in overpayments, the combined cost of the National Provider Audit Program was about $36 million in fiscal year 2010. The actual amount of overpayments recovered is not known because states are responsible for recovering overpayments and the MIG is not the CMS entity that tracks recoveries. Actual recoveries may be less than the identified overpayments. The National Provider Audit Program has generally relied on MSIS, which is summary data submitted by states on a quarterly basis that may not reflect voided or adjusted claims payments. As a result, the MIG’s audit contractors may identify two MSIS claims as duplicates when the state has already voided or denied payment on one of these claims. For their program integrity efforts, states use their own MMIS data systems, which generally reflect real-time payments and adjustments of detailed claims for each health care service. States are required to have a SURS component that performs data mining as a part of their program integrity efforts. The MIG’s review contractors use data mining techniques that may be similar to those employed by states, and they may not identify any additional improper claims. Moreover, MIG officials told us that the National Provider Audit Program did not prioritize the activities according to the dollar amount of the claim, that is, it did not concentrate its efforts on audits with the greatest potential for significant recoveries. Although the amount of overpayment identified from any given audit can vary by thousands or millions of dollars, the MIG’s comprehensive reviews of several states’ Medicaid integrity programs show that these states identified significantly higher levels of overpayments in 1 year than the National Provider Audit Program identified over 3 years. 
For example, the number of national provider audits (1,663) over three fiscal years was similar to the number that New York conducted in fiscal year 2008 (1,352), yet CMS reported that New York had identified more than $372 million in overpayments— considerably more than the $15.2 million identified through national provider audits. The MIG’s proposed redesign of the National Provider Audit Program appears to allow for greater coordination between its contractors and states on a variety of factors, including the data to be used. In fiscal year 2010, the MIG launched collaborative audits in 13 states. For these audits, the states and the MIG agreed on the audit issues to review and, in some cases, states provided the MIG’s audit contractors with more timely and complete claims data. These collaborative projects (1) allowed states to augment their own audit resources, (2) addressed audit targets that states may not have been able to initiate because of a lack of staff, and (3) provided data analytic support for states that lacked that capability. Although these activities are ongoing and the results have not yet been finalized, such collaborative projects appear to be a promising approach to audits that avoids a duplication of federal and state efforts. It remains to be seen, however, whether these changes will result in an increase in identified overpayments. While the MIG’s audit program is challenged to avoid duplicating states’ own audit activities, its other core functions present an opportunity to enhance states’ efforts. The MIG’s state oversight activities are extensive and labor intensive. Although the data collected during reviews and assessments are not always consistent with each other, these oversight activities have a strong potential to inform the MIG’s technical assistance and help identify training opportunities. The Medicaid Integrity Institute appears to address an important state training need. 
The MIG’s core oversight activities—triennial comprehensive state program integrity reviews and annual assessments—are broad in scope and provide a basis for the development of appropriate technical assistance. However, we found that the information collected during reviews and the information collected from assessments were sometimes inconsistent with each other. As of November 2011, the MIG had completed the first round of reviews for 50 states and had initiated a second round of reviews in 10 states. The reviews cover the entirety of a state’s program integrity activities and assess compliance with federal regulations. In advance of the MIG’s week-long onsite visit, state program integrity officials are asked to respond to a 71-page protocol containing 195 questions and to provide considerable documentation. Table 2 summarizes the topics covered in the protocol. Typical compliance issues and vulnerabilities identified during the reviews include provider enrollment weaknesses, inadequate oversight of providers in Medicaid managed care, and ineffective fraud referrals to state MFCUs. Much of the information collected during the assessments—Medicaid program integrity characteristics, program integrity planning, prevention, detection, investigation, and recoveries—is also collected during the triennial comprehensive reviews. In addition, we found inconsistencies between the information reported in the comprehensive reviews and in the assessments for several states that were conducted at about the same time. For example, there was a significant discrepancy for one state in the number of staff it reported as being dedicated to program integrity activities. According to the MIG, knowing the size of state program integrity staff helps it to more appropriately tailor content during training events. Improved consistency will help the MIG ensure that it is targeting its training and technical assistance resources appropriately.
Despite the frequency of the annual assessments, the most current data cover fiscal year 2008; the MIG did not begin collecting those data until fiscal year 2010. Although the MIG provides states with a glossary explaining each of the requested data elements, it is not clear that the information submitted is reliable or comparable across states. Our review of a sample of assessments revealed missing data and a few implausible measures, such as one state reporting over 38 million managed care enrollees. In other states, there were dramatic changes in the data reported from 2007 to 2008, which either raises a question about the reliability of the data or suggests that states should be asked to explain significant changes from year to year. For example, the number of audits in one state declined from 203 to 35. According to MIG officials, the comprehensive reviews and the assessments inform the MIG’s technical assistance activities with the states. For example, we found that the MIG published best practices guidance in 2008 after finding weaknesses in coordination between state program integrity officials and their respective MFCUs in a number of states. In its report to Congress on fiscal year 2010 activities, the MIG indicated it completed 420 requests for technical assistance from 43 states, providers, and others. The most common topics included the National Provider Audit Program, policy and regulatory requirements on disclosures, provider exclusions and enrollment, and requests for statistical assistance related to criminal and civil court actions. Examples of assistance provided to the states by the MIG included (1) hosting regional state program integrity director conference calls to discuss program integrity issues and best practices; and (2) helping develop a State Medicaid Director Letter (issued in July 2010) on the return of the federal share of overpayments under PPACA. 
The federally sponsored Medicaid Integrity Institute not only offers state officials free training but also provides opportunities to develop relationships with program integrity staff from other states. The institute addresses our prior finding that CMS did not sponsor any fraud and abuse workshops or training from 2000 through 2005. From fiscal years 2008 through 2012, the institute will have trained over 2,265 state employees at no cost to states. Given the financial challenges states currently face, it is likely that expenditures for training and travel are limited. Expenditures on the institute accounted for about $1.3 million of the MIG’s $75 million annual budget. MIG officials told us that states uniformly praised the opportunity to network and learn about best practices from other states. A special June 2011 session at the institute brought together Medicaid program integrity officials and representatives of MFCUs from 39 states in an effort to improve the working relations between these important program integrity partners. In addition to the institute, the MIG has a contractor that provides (1) education to broad groups of providers and beneficiaries, and (2) targeted education to specific providers on certain topics. For example, the education contractor has provided outreach through its attendance at 17 conferences with about 36,000 attendees. These conferences were sponsored by organizations devoted to combating health care fraud, such as the National Association for Medicaid Program Integrity and the National Health Care Anti-Fraud Association, as well as meetings of national and regional provider organizations (hospital; home care and hospice; and pharmacy). An example of a more targeted activity is one focused on pharmacy providers. 
The MIG’s education contractor is tasked with developing provider education materials to promote best prescribing practices for certain therapeutic drug classes and remind providers of the appropriate prescribing guidelines based on FDA-approved labeling. The education program includes some face-to-face conversations, mailings to providers, and distribution of materials on a website and at conferences and meetings. These activities are collaborative efforts with the states so that states are aware of the aberrant providers, can participate in the education program, and can implement policy changes to address these issues, as appropriate. We discussed the facts in this statement with CMS officials. Chairmen Pratts and Gowdy, this concludes my prepared remarks. I would be happy to answer any questions that you or other Members may have. For further information about this statement, please contact Carolyn L. Yocom at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Walter Ochinko, Assistant Director; Sean DeBlieck; Iola D’Souza; Leslie V. Gordon; Drew Long; Jessica Smith; and Jennifer Whitworth were key contributors to this statement. Fraud Detection Systems: Additional Actions Needed to Support Program Integrity Efforts at Centers for Medicare and Medicaid Services. GAO-11-822T. Washington, D.C.: July 12, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575T. Washington, D.C.: April 15, 2011. Status of Fiscal Year 2010 Federal Improper Payments Reporting. GAO-11-443R. Washington, D.C.: March 25, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. 
Washington, D.C.: March 9, 2011. Medicare: Program Remains at High Risk Because of Continuing Management Challenges. GAO-11-430T. Washington, D.C.: March 2, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight. GAO-10-143. Washington, D.C.: March 31, 2010. Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-1004T. Washington, D.C.: September 30, 2009. Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-957. Washington, D.C.: September 9, 2009. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. Medicaid: Thousands of Medicaid Providers Abuse the Federal Tax System. GAO-08-239T. Washington, D.C.: November 14, 2007. Medicaid: Thousands of Medicaid Providers Abuse the Federal Tax System. GAO-08-17. Washington, D.C.: November 14, 2007. Medicaid Financial Management: Steps Taken to Improve Federal Oversight but Other Actions Needed to Sustain Efforts. GAO-06-705. Washington, D.C.: June 22, 2006. Medicaid Integrity: Implementation of New Program Provides Opportunities for Federal Leadership to Combat Fraud, Waste, and Abuse. GAO-06-578T. Washington, D.C.: March 28, 2006. Medicaid Fraud and Abuse: CMS’s Commitment to Helping States Safeguard Program Dollars Is Limited. GAO-05-855T. Washington, D.C.: June 28, 2005. Medicaid Program Integrity: State and Federal Efforts to Prevent and Detect Improper Payments. GAO-04-707. Washington, D.C.: July 16, 2004. Medicaid: State Efforts to Control Improper Payments. GAO-01-662. Washington, D.C.: June 7, 2001. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Centers for Medicare & Medicaid Services (CMS), the federal agency that oversees Medicaid, estimated that improper payments in the federal-state Medicaid program were $21.9 billion in fiscal year 2011. The Deficit Reduction Act of 2005 established the Medicaid Integrity Program and gave CMS an expanded role in assisting and improving the effectiveness of state activities to ensure proper payments. Making effective use of this expanded role, however, requires that federal resources are targeted appropriately and do not duplicate state activities. GAO was asked to testify on Medicaid program integrity. GAO's statement focuses on how CMS's expanded role in ensuring Medicaid program integrity (1) poses a challenge because of overlapping state and federal activities regarding provider audits and (2) presents opportunities through oversight to enhance state program integrity efforts. To do this work, GAO reviewed CMS reports and documents on Medicaid program integrity as well as its own and others' reports on this topic. In particular, GAO reviewed CMS reports that documented the results of its state oversight and monitoring activities. GAO also interviewed CMS officials in the agency's Medicaid Integrity Group (MIG), which was established to implement the Medicaid Integrity Program. This work was conducted in November and December 2011. GAO discussed the facts in this statement with CMS officials. The key challenge faced by the Medicaid Integrity Group (MIG) is the need to avoid duplication of federal and state program integrity efforts, particularly in the area of auditing provider claims. In 2011, the MIG reported that it was redesigning its national provider audit program. Previously, its audit contractors were using incomplete claims data to identify overpayments. According to MIG data, overpayments identified by its audit contractors since fiscal year 2009 were not commensurate with its contractors' costs. 
The MIG's redesign will result in greater coordination with states on a variety of factors, including the data to be used. It remains to be seen, however, whether these changes will result in an increase in identified overpayments. The table below highlights the MIG's core oversight activities, which were implemented from fiscal years 2007 through 2009. The MIG's core oversight activities present an opportunity to enhance state efforts through the provision of technical assistance and the identification of training opportunities. The MIG's assessment of state program integrity efforts during triennial onsite reviews and annual assessments will need to address data inconsistencies identified during these two activities. Improved consistency will help ensure that the MIG is appropriately targeting its resources. The Medicaid Integrity Institute appears to address a state training need and create networking opportunities for program integrity staff.
The JIAC includes elements of three intelligence operations centers—one supporting EUCOM, a second supporting U.S. Africa Command, and a third supporting the North Atlantic Treaty Organization—as well as several other organizations that perform intelligence-related functions. According to DOD guidance, joint intelligence operations centers support the geographical combatant commands and other defense organizations, serving as focal points for intelligence planning, collection management, analysis, and production. EUCOM Joint Intelligence Operations Center Europe executes intelligence operations that are synchronized and integrated with theater component, national, and partner nation organizations; enables EUCOM planning and execution; and enhances senior leaders’ decision-making across the entire spectrum of military operations. U.S. Africa Command Directorate for Intelligence at RAF Molesworth manages and executes defense intelligence for U.S. Africa Command, including protecting U.S. personnel and facilities, preventing and mitigating conflict, and building defense capabilities in order to promote regional stability and prosperity. North Atlantic Treaty Organization Intelligence Fusion Center provides intelligence to warn of potential crises and to support the planning and execution of the North Atlantic Treaty Organization’s operations. Regional Joint Intelligence Training Facility trains students from EUCOM, U.S. Africa Command, and the North Atlantic Treaty Organization nations, including the United Kingdom. United States Battlefield Information Collection and Exploitation Systems plans, builds, and operates the Coalition Intelligence and Information Enterprise to provide on-demand coalition information- sharing solutions for both episodic and enduring missions. A number of DOD organizations have been involved in the JIAC consolidation process. 
Overall guidance for DOD’s military construction efforts was provided by the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment. This office is responsible for overseeing various aspects of the department’s military construction efforts. These responsibilities include, among other things, monitoring the execution of the military construction program to ensure the most efficient, expeditious, cost-effective accomplishment of the program, and issuing guidance for the implementation of DOD military construction policy. Other DOD organizations—including U.S. Air Force Headquarters, the Basing Office of the Office of the Secretary of Defense, and the headquarters of both EUCOM and U.S. Africa Command—made up the team that conducted the JIAC analysis of alternatives. The participating organizations provided subject matter experts who were involved in the team’s day-to-day work and developed the analysis that is the foundation of the decision to consolidate the JIAC at RAF Croughton. DOD’s team conducted work from the initial concept proposal in the fall of 2009 to the Resource Management Decision issued by the Secretary of Defense in April 2013. In July 2016, we reported on DOD’s analysis of alternatives process and recommended that the Secretary of Defense direct the Assistant Secretary of Defense for Energy, Installations, and Environment to develop guidance requiring the use of best practices for analysis of alternatives—including those practices we identified in the report—and that in this guidance, the Assistant Secretary should define the types of military construction decisions for which use of these best practices should be required. DOD did not agree with our recommendation, stating that the best practices do not wholly apply to decision-making processes for military construction projects. 
Table 1 lists the roles and responsibilities of DOD components related to the JIAC consolidation, including their involvement in preparing information in response to congressional requests for information on the analysis of alternatives process and on Lajes Field as a possible location for the JIAC. In response to a statutory requirement, DOD issued a memorandum that certified that Lajes Field, Azores (Portugal), was not the optimal location for the JIAC, based on an analysis of U.S. operational requirements and an evaluation of key criteria. The Azores is an autonomous region of Portugal situated about 850 miles west of continental Portugal. There are nine major Azorean islands, including Terceira, home of Lajes Field. Lajes Field is a dual military and civilian airfield and is also a Portuguese military base; the 65th Air Base Group, a U.S. Air Force unit, is also stationed there. The 65th Air Base Group’s mission supports DOD, allied nations, and other authorized aircraft in transit; its core mission is to service in-transit aircraft en route to eastern and southern destinations. In 2010, the Air Force recommended a plan to reduce personnel and operations at Lajes Field and divest approximately 500 U.S. military and civilian billets, leaving approximately 165 U.S. personnel at Lajes Field to support the mission requirements. The Secretary of Defense approved the recommendation and announced his decision to streamline Lajes Field in October 2012. According to the Secretary of Defense, the frequency and volume of flights using Lajes Field had decreased, and the base was operating well below its capacity. The Air Force recommended reducing U.S. operations at Lajes from 24 hours, 7 days a week to 8 hours, 7 days a week and downsizing the 65th Air Base Wing to an Air Base Group. According to the Secretary of Defense, the presence at Lajes Field exceeded mission requirements, and the mission requirements at Lajes Field could be supported with a smaller force. 
DOD has estimated that the streamlined footprint would yield approximately $35 million in annual savings. The Secretary of Defense’s October 2012 decision was subsequently reaffirmed by the European Infrastructure Consolidation assessment, which the Secretary of Defense initiated on January 25, 2013, to perform a comprehensive review of DOD facilities in Europe. The National Defense Authorization Act for Fiscal Year 2014 required that the Secretary of Defense provide Congress with certification that the actions taken to realign military forces at Lajes Field were supported by the European Infrastructure Consolidation assessment. The act required that DOD’s certification include an assessment of the efficacy of Lajes Field in support of the United States’ overseas force posture. On January 6, 2015, the Secretary of Defense issued a memorandum certifying that the European Infrastructure Consolidation assessment supported DOD’s plan to adjust its presence at Lajes Field. Further, the Secretary of Defense noted that DOD had conducted a comprehensive review and determined that the reduction of U.S. personnel at Lajes Field supported the U.S. military’s European force posture. DOD officials said that they will continue to provide 24-hour-a-day, 7-day-a-week tower operations at Lajes Field after the base personnel are reduced, along with crash, fire, and rescue services and attendant airfield operations for the joint Portuguese military and civilian airfield. Air Force officials told us that, as of July 2016, U.S. flights to Lajes Field average two per day. Air Force officials told us that, as part of this effort to streamline and reduce the personnel at Lajes Field, they have identified various excess buildings, facilities, and housing units that are no longer needed. 
Specifically, Air Force officials have identified 350 housing units as excess and are in the process of returning those units to the Portuguese government; the remaining housing will be used to support unaccompanied personnel at Lajes Field. In July 2016, Air Force officials said that they were in negotiations with the Portuguese government on the return of excess facilities as part of the personnel streamlining efforts. Specifically, the officials stated that there was disagreement on how the United States and Portugal interpret the technical agreement and policy on the return of excess facilities—the United States’ position was that all property excess to the needs of the United States would be turned over to Portugal in usable condition, while Portugal sought to have the United States demolish a majority of the facilities rather than return them. According to these officials, Portugal also sought environmental remediation commitments from the United States. According to DOD officials, the demolition and environmental remediation of these facilities was contrary to long-standing DOD policy. Our assessment of the Air Force’s February 2015 cost estimate for JIAC consolidation showed that it did not fully meet cost estimation best practices. According to the GAO Cost Estimating and Assessment Guide, a cost estimate created using best practices exhibits four characteristics—it is comprehensive, well documented, accurate, and credible. Each of the four characteristics is associated with a specific set of best practices. Our assessment found that the JIAC cost estimate partially met three and minimally met one of the four characteristics of a reliable cost estimate. If any of these characteristics is not substantially or fully met, then the cost estimate does not reflect the characteristics of a high-quality estimate and cannot be considered reliable. Table 2 lists each of the four characteristics, along with our summary assessment of the JIAC cost estimate. 
The following summarizes our analysis of the JIAC cost estimate for each of the four characteristics. Appendix I provides greater detail on our assessment. Comprehensiveness (Partially Met). According to best practices, agencies should develop accurate life-cycle cost estimates. Further, a life-cycle cost estimate should encompass all past (or sunk), present, and future costs for every aspect of the program, regardless of funding source—including all government and contractor costs. However, the JIAC cost estimate included only MILCON costs and did not include costs associated with the life cycle of the project. Air Force and Office of the Secretary of Defense officials said they do not consider the JIAC cost estimate a life-cycle cost estimate and that the estimate’s scope is in line with DOD guidance on the development of budget requests for MILCON projects. According to DOD officials, the department does not have a full life-cycle cost estimate for the entire JIAC consolidation effort. DOD and Air Force officials further stated that the estimate covers the costs of infrastructure for the JIAC’s facilities, other supporting infrastructure (e.g., utilities serving the JIAC facilities), and certain other facilities for functions related to JIAC, such as family support (e.g., expanded capacity of the child development center). The associated Operation and Maintenance costs (e.g., family support costs like living allowances) were not included in the estimate, because they were considered out of scope, according to DOD and Air Force officials. However, without fully accounting for life-cycle costs, management may have difficulty successfully planning and programming resource requirements for the JIAC consolidation and making sound decisions. Documentation (Partially Met). According to best practices, documentation is essential for validating and defending a cost estimate. 
The JIAC cost estimate is generally consistent with the sizing assumptions included in DOD documentation laying out requirements for the JIAC’s facilities, inputs from appropriate experts, and relevant DOD guidance, such as the DOD Facilities Pricing Guide. Also, DOD documented both the data sources and the methodology used for the JIAC cost estimate in an Excel spreadsheet model and a parametric cost engineering estimate summary. However, the documentation for the JIAC cost estimate is not complete. Specifically, the cost estimate does not provide sufficient documentation so that a cost analyst unfamiliar with the program could understand what had been done and replicate it. The cost estimate uses the DOD Facilities Pricing Guide (hereafter referred to as the Pricing Guide), which provides planning assumptions and prices for a variety of types of facilities, such as office buildings. In the cost calculation spreadsheet for the JIAC cost estimate, the cost estimators’ judgments regarding which type of facility to use from the Pricing Guide were not always consistent with the categories listed in the Pricing Guide. For example, there is no mention of intelligence facilities in the Pricing Guide, and we were unable to independently trace all of the unit costs from the JIAC cost estimate back to it. Air Force officials were able to show where in the Pricing Guide these numbers were drawn from; however, this is not documented in the estimating model, and there is no rationale provided to show that an intelligence facility would be the same as a communications center. Without a well-documented cost estimate, the Air Force may not be able to present a convincing argument for the validity of the JIAC cost estimate and answer decision makers’ and oversight groups’ questions. Accuracy (Partially Met). According to best practices, the cost estimate should provide results that are unbiased, and it should not be overly conservative or overly optimistic. 
An estimate is accurate when it is based on an assessment of most likely costs, adjusted properly for inflation, and contains no more than a few minor mistakes—if any. In addition, a cost estimate should be updated regularly to reflect changes in the program, for example when schedules or other assumptions change or actual costs change, so that it always reflects the current status. The JIAC estimate used historical data, did not contain mathematical errors, showed evidence of being updated, underwent a review process before final approval, and follows DOD construction cost estimation guidance on how to account for inflation. However, while the JIAC cost estimate has been updated, it has not been updated regularly. Specifically, the April 2013 estimate was updated in June 2013 to align with an update to the Pricing Guide but was not updated to align with the two subsequent updates to the Pricing Guide that occurred before the February 2015 JIAC cost estimate was submitted. Air Force officials said that the JIAC cost estimate was updated to reflect foreign currency fluctuations. According to these officials, the MILCON process assumes flexibility in the project timeline to allow adjustments to the estimate and focuses on establishing the project’s scope (e.g., the square feet or square meters associated with a project) in the Air Force’s project development justification forms. These Air Force officials also stated that the costs associated with MILCON projects are updated only with significant changes to the program and are typically permitted to be adjusted by as much as plus or minus 25 percent of the total costs—even after funding has been appropriated—without needing to be reprogrammed. While updating an estimate in this way may be permissible within the established MILCON process, it is not consistent with cost estimating best practices, because the estimate is not updated regularly. 
Without updating the JIAC cost estimate on a regular basis, DOD and the Air Force may have difficulty analyzing changes in program costs for the consolidation project and may hinder the collection of up-to-date cost and technical data to support future JIAC cost estimates. Credibility (Minimally Met). According to best practices, a cost estimate should discuss any limitations of the analysis resulting from uncertainty or biases surrounding data or assumptions. Major assumptions should be varied, and other outcomes recomputed, to determine how sensitive the cost estimates are to changes in the assumptions. Also, risk and uncertainty analysis should be performed to determine the level of risk associated with the estimate. Without a sensitivity analysis that reveals how a cost estimate is affected by a change in a single assumption, the cost estimator will not fully understand which variable most affects the cost estimate. The use of a sensitivity analysis is not specified in cost estimation guidance for MILCON projects from either DOD or the Air Force, and the JIAC cost estimate did not include such an analysis. According to Office of the Secretary of Defense and Air Force officials, a sensitivity analysis is part of the underlying unit cost development, because costs are developed through the use of both historical data and industry averages. These officials further stated that the Office of the Secretary of Defense uses actual data underpinned by relevant sensitivity and range analyses to develop its cost estimates. For example, Office of the Secretary of Defense and Air Force officials said that the Office of the Secretary of Defense uses the DOD Selling Price Index—which averages three commonly accepted national indexes for construction price escalation—to calculate actual project award cost data. 
However, for sensitivity analysis to be useful in informing decisions, careful assessment of the underlying risks and supporting data related to a specific MILCON project is also necessary. In addition, the sources of the variation should be well documented and traceable. Without conducting sensitivity analysis for the JIAC cost estimate to identify the effect of uncertainties associated with different assumptions, DOD and the Air Force increase the risk that decisions will be made without a clear understanding of the effects of these assumptions on costs. Another key to establishing an estimate’s credibility is its review process. According to best practices, the estimate’s cost drivers should be crosschecked, and an independent cost estimate conducted by a group outside the acquiring organization should be developed to determine whether other estimating methods produce similar results. While the Air Force has a review process, the review it conducted for the JIAC cost estimate did not include the use of a checklist provided as a sample in DOD MILCON cost estimation guidance. The sample checklist, while not required, could have helped the Air Force to confirm the validity of assumptions and the logic used in estimating the cost of the JIAC construction tasks. Air Force officials stated that their review primarily looks at the numbers provided and the ranges from the Pricing Guide to see whether the estimate is within those ranges. These officials added that they would use the checklist only if there was a difference from the Pricing Guide. However, the first phase of the JIAC cost estimate did not identify the stage of the estimate; did not separate costs for labor, equipment, or material; and did not calculate prime and subcontractor profit by the weighted guidelines method, which are items listed in the sample checklist. 
When we shared the results of our analysis with officials from the Office of the Secretary of Defense, they said that they did not agree that our best practices for cost estimating were entirely applicable to the JIAC cost estimate, since the estimate focused on MILCON costs. Furthermore, Office of the Secretary of Defense and Air Force officials said that construction is discussed in our Cost Estimating and Assessment Guide as a subsidiary cost to be included in the life-cycle cost estimate. For example, these officials said that construction costs are to be considered as part of the overall ground rules and assumptions for a cost estimate. However, the methodology outlined in our Cost Estimating and Assessment Guide is a compilation of best practices that federal cost estimating organizations and industry use to develop and maintain reliable cost estimates, and this methodology can be used across the federal government for developing, managing, and evaluating capital program cost estimates, including military construction estimates. Furthermore, DOD guidance for estimating construction costs states that in the MILCON program, construction cost estimates are prepared throughout the planning, design, and construction phases of a construction project. These construction cost estimates are categorized as follows: programming estimate, concept estimate, final estimate, and government estimate. The Air Force provided us with the JIAC consolidation programming estimate for analysis, because it was the most complete and updated estimate at the time of our review. Even though our analysis shows that the programming estimate did not meet all of the four characteristics of a high-quality, reliable estimate, the Air Force will have opportunities to incorporate our best practices as it prepares future cost estimates for subsequent phases of the JIAC consolidation program. 
Without incorporating a methodology that is more closely aligned with our best practices for cost estimation and incorporates all four characteristics of a high-quality, reliable estimate, the Air Force will not be providing comprehensive and high-quality information for decision makers to use. After its 2013 decision to consolidate the JIAC at RAF Croughton, DOD conducted multiple reviews to provide information on Lajes Field as a potential alternative location for the JIAC, in response to congressional interest and inquiries. These reviews were developed by different organizations within DOD during 2015 and 2016 and included both one-time and recurring costs. The reviews produced different cost estimates, in particular for communications infrastructure and housing, because the DOD organizations that developed the reviews used different assumptions. However, all of the reviews found that consolidating the JIAC at Lajes Field would be more costly than consolidating it at RAF Croughton. Additionally, in response to statutory requirements, DOD issued a memorandum certifying that the department had determined that RAF Croughton was the optimal location for the JIAC and, conversely, that Lajes Field was not the optimal location, given the JIAC’s operational requirements. From 2015 through 2016, DOD conducted multiple reviews of Lajes Field as a potential location for the JIAC, in response to congressional interest and inquiries. Lajes Field was not originally included in DOD’s analysis of alternatives for the consolidation of the JIAC. DOD officials told us that the reviews of 2015 and 2016 were not conducted with the same level of rigor as a formal cost estimate, because DOD had already completed its analysis of alternatives, and the decision to consolidate JIAC at RAF Croughton had already been made. DOD officials also told us that no credible new evidence had been produced to indicate the department should revisit its initial decision. 
Figure 1 includes the key events and reviews related to DOD’s analysis of the JIAC, including reviews related to Lajes Field, the European Infrastructure Consolidation study, JIAC consolidation, authorization, and appropriations, and the execution of the JIAC consolidation project. According to officials from the Office of the Secretary of Defense, DOD did not alter or change its original decision to consolidate the JIAC at RAF Croughton based on the results of these reviews and found that consolidating the JIAC at Lajes Field would be more costly than consolidating it at RAF Croughton. Additionally, according to the Deputy Secretary of Defense, DOD’s reviews determined that Lajes Field was not a suitable location for the JIAC, based both on operational requirements and costs including housing availability and the lack of adequate secure communications infrastructure. These reviews were led by EUCOM, CAPE, and DISA. EUCOM’s September 2015 review. EUCOM developed an analysis comparing RAF Croughton with Lajes Field as potential locations for the JIAC. EUCOM officials told us that this review was in response to congressional interest and requests, and it included inputs from U.S. Air Forces Europe and officials at Lajes Field. The review compared cost estimates associated with locating JIAC at RAF Croughton with those associated with locating it at Lajes Field. These cost estimates included one-time costs, such as construction costs for the JIAC facilities, communications infrastructure, and housing, as well as recurring costs—including sustainment costs for base and communications infrastructure. The review estimated the one-time costs associated with locating the JIAC at RAF Croughton at $357 million and the one-time costs associated with locating it at Lajes Field at $1.65 billion. For recurring costs, the review estimated that RAF Croughton would cost approximately $68 million annually and Lajes Field approximately $94 million annually. 
The largest differences between the cost estimates were in the one-time costs for the communications and housing infrastructure necessary to support the JIAC. Officials from the Office of the Secretary of Defense told us that DOD had provided this review, with its appendixes, to the House and Senate Armed Services Committees and the House and Senate Appropriations Committees in September 2015. This review included two appendixes developed by DISA and DIA on the communications infrastructure needed to support the JIAC. DISA’s July 2015 Azores Telecommunications Feasibility Report provided an analysis of the telecommunications infrastructure on the Azores Islands. This report indicated that the Azores did not have sufficient communications infrastructure to be a feasible location for a DISA telecommunications hub. The DIA Azores Communications Cost Estimate provided a brief summary on the current and proposed communications systems within the Azores Islands, as well as the costs associated with modernizing the systems. The appendix noted that it was developed in response to a request from the DIA Office of Congressional Affairs. For communications infrastructure, the DIA estimated that locating the JIAC at Lajes Field would require approximately $449 million in one-time costs and $32.7 million in recurring annual sustainment costs. CAPE’s April 2016 cost verification for the JIAC. CAPE conducted an independent review of the cost estimates presented in EUCOM’s September 2015 review and those developed by the House Permanent Select Committee on Intelligence for its July 2015 review. CAPE developed its own cost assumptions, which included housing and communications infrastructure costs, among other things, in its review of the cost calculations in the EUCOM and House Permanent Select Committee on Intelligence reviews, which produced alternative cost totals. 
CAPE officials told us that this review was in response to direction from the Deputy Secretary of Defense and that they briefed this review to the House Permanent Select Committee on Intelligence in May 2016 and the House and Senate Armed Services Committees in April 2016. CAPE’s review estimated the one-time costs associated with locating the JIAC at RAF Croughton at $356 million and one-time costs associated with locating it at Lajes Field at $1.43 billion (compared with EUCOM’s estimates of $357 million and $1.65 billion respectively). For recurring costs, CAPE’s review estimated that RAF Croughton would cost approximately $53 million annually and Lajes Field approximately $59 million annually (compared with EUCOM’s estimates of $68 million and $94 million, respectively). DISA’s May 2016 review on the JIAC communications infrastructure requirements. In this update to its July 2015 review, DISA assessed and compared the communications infrastructures at RAF Croughton and Lajes Field with the intelligence mission support requirements, including the communications and technical requirements for the JIAC. DISA officials told us that this review included more refined cost estimates for the communications infrastructure than prior estimates and reflected new technical standards, such as operational bandwidth requirements. The review found that the communications infrastructure at Lajes Field did not meet technical and critical infrastructure requirements. To upgrade the communications infrastructure at Lajes Field, the review estimated a minimum of $267.7 million in one-time costs to procure and install three undersea cables and $6.8 million in annual sustainment costs. For locating the JIAC at RAF Croughton, the review determined that no procurement would be required and estimated sustainment costs for the communications infrastructure at $5.5 million annually. 
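The gap between the two reviews' bottom lines can be checked with simple arithmetic. The sketch below is illustrative only; it restates the one-time and recurring figures reported above (in millions of dollars) and computes how much more Lajes Field would cost than RAF Croughton under each review.

```python
# Illustrative comparison of the EUCOM (Sept. 2015) and CAPE (Apr. 2016)
# estimates for locating the JIAC at RAF Croughton vs. Lajes Field.
# Figures are in millions of dollars, taken from the reviews cited above.
estimates = {
    "EUCOM": {"one_time": {"Croughton": 357, "Lajes": 1650},
              "recurring": {"Croughton": 68, "Lajes": 94}},
    "CAPE":  {"one_time": {"Croughton": 356, "Lajes": 1430},
              "recurring": {"Croughton": 53, "Lajes": 59}},
}

for review, costs in estimates.items():
    one_time_delta = costs["one_time"]["Lajes"] - costs["one_time"]["Croughton"]
    recurring_delta = costs["recurring"]["Lajes"] - costs["recurring"]["Croughton"]
    print(f"{review}: Lajes costs ${one_time_delta}M more one-time "
          f"and ${recurring_delta}M more per year")
```

Although the two reviews' absolute figures differ, both point the same direction: Lajes Field is more than $1 billion more expensive in one-time costs under either set of assumptions.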
DOD officials told us that they briefed the results of this review to the House Armed Services Committee in September 2016. DOD’s multiple reviews of Lajes Field as an alternative location for the JIAC produced different cost estimates, because they relied on different assumptions in developing the cost estimates for communications infrastructure and housing. For the communications infrastructure needed to support the JIAC at Lajes Field, the reviews varied in the costs they included and the number of annual fiber cable breaks they expected would occur, among other details. The three reviews all assumed that three new fiber cables would be needed for Lajes Field. However, the distribution of these fiber cables differs in the reviews. Specifically, the September 2015 EUCOM review and the May 2016 DISA review assume one fiber cable from Lajes Field to mainland Portugal, one fiber cable to the United Kingdom, and one fiber cable to the United States, while the April 2016 CAPE review assumes two fiber cables from Lajes Field to the United States and one fiber cable to the United Kingdom. DOD officials told us that the cable distribution cited in the September 2015 EUCOM and the May 2016 DISA reviews reflects the current JIAC operational requirements, based on a May-June 2015 DIA operational assessment, and that the CAPE review reflects a different JIAC operational design. Officials from DIA said that this change in operational requirements was made in various discussion sessions conducted among subject matter experts, and that the decision was not documented. Figure 2 shows the current cable configuration at Lajes Field and the new cables that would be necessary based on the September 2015 EUCOM review, the May 2016 DISA review, and the April 2016 CAPE review. 
The May 2016 DISA review had the lowest estimate for communications costs of the three reviews. DISA officials told us that these cost estimates were deliberately built on assumptions that would generate the lowest possible costs. However, DOD officials told us that DISA has not been able to validate all of its assumptions. Table 3 shows the cost estimates and supporting assumptions included in DOD’s multiple reviews for communications infrastructure associated with locating the JIAC at Lajes Field. Appendix II contains additional information on the requirements for the communications capabilities related to the JIAC. Two of DOD’s reviews provided cost estimates for the housing needed to support the JIAC at Lajes, but the estimates were based on different assumptions. Specifically, EUCOM estimated the one-time housing costs for locating the JIAC at Lajes Field at $390.5 million, while CAPE estimated these costs at $188 million. EUCOM’s review assumed that 1,031 new housing units would be needed on the base at Lajes Field, and CAPE’s review assumed that as few as 385 new units would be needed on base. Additionally, EUCOM’s estimate assumes that there would be a 252-person dorm unit shortfall for unaccompanied military personnel, and CAPE assumed a shortfall of 368 dorm units. Table 4 shows the two cost estimates for the housing and the assumptions that each review used. EUCOM and CAPE used different assumptions when developing their cost estimates for the number of housing units needed to support the JIAC at Lajes Field. 
Specifically, EUCOM reported that 1,812 housing units were needed to support the JIAC and that those units would include not only housing for the accompanied personnel working at the JIAC (around 1,200 personnel) but also housing units for the additional base operations and support personnel to support the JIAC (around 330 personnel) and personnel associated with the reversal of the personnel streamlining initiative at Lajes Field (around 751 personnel). EUCOM determined its estimate for accompanied housing units required for the JIAC at Lajes Field by using Air Force personnel standards and JIAC planning factors. On the other hand, CAPE’s estimates assumed that 1,260 accompanied housing units would be needed, that dorm units would be used, and unaccompanied civilians would live off base to minimize the effect on military family housing. A factor in the difference between EUCOM’s and CAPE’s housing cost estimates is the addition of 751 personnel (resulting in a need for 451 additional accompanied housing units) that EUCOM included in its cost estimate when the Lajes personnel streamlining initiative was reversed. CAPE officials told us that their estimate did not assume that reversing the personnel streamlining initiative would result in the addition of so many personnel, and that therefore there would be a reduced need for additional housing. However, CAPE assumed that more base operations and support personnel would be needed (CAPE assumed that 500 support personnel would be needed, while EUCOM assumed 330) to support the JIAC at Lajes Field. Both reviews also provided housing estimates for unaccompanied military personnel. EUCOM’s review assumed that 469 unaccompanied military personnel would reside in the existing 217 dorm spaces at Lajes Field, and there would be a shortfall of 252 dorm spaces (32 of those dorm units would be built using the JIAC military construction funds and the other 220 units represent the shortfall). 
CAPE’s review assumed that unaccompanied military personnel would reside in the existing 217 dorm units at Lajes Field and that DOD would build two additional dorms (for 168 and 200 personnel) to accommodate the unaccompanied military personnel. Additionally, EUCOM’s review assumed that 204 unaccompanied civilians would live on base, and CAPE’s review assumed that 91 unaccompanied civilians would live off base in small family housing. EUCOM’s review used the military family housing inventory of the U.S. Air Forces in Europe to determine the number of housing units on the base (456 housing units), while CAPE’s estimate assumed that there were 550 housing units at Lajes Field. In July 2016, Air Force personnel at Lajes Field confirmed that there were 456 available housing units at Lajes Field. Also, both CAPE’s and EUCOM’s reviews used the January 2007 Housing Requirements Market Analysis for Lajes Field to determine the number of housing units that were available for rent off base (approximately 229 housing units). However, according to information provided by Terceira’s municipalities, there are currently 1,693 houses on the island of Terceira available for rent, and almost 400 were recently occupied by U.S. military personnel and their families. According to the 2007 Housing Requirements and Market Analysis DOD conducted at Lajes Field, the Lajes rental market is separated into two areas—housing units that are specifically marketed to U.S. military personnel, have been inspected for suitability, and are listed in the Lajes Field housing rental database, and housing units that are part of the local rental market but not of sufficient quality and without the amenities required by U.S. military and civilian personnel. Further, EUCOM officials told us that the 1,693 housing units available on the island of Terceira may not all be suitable for U.S. military forces. 
Air Force officials told us that there were only 225 rental properties in their off-base referral database. Both EUCOM’s and CAPE’s estimates assumed that there were 225 rental properties on the island and that another 100 would be built (for a total of 325) to support the personnel for the JIAC. Both reviews assumed that no additional housing units would be necessary at RAF Croughton, based on the 2016 Housing Requirements Market Analysis for RAF Croughton. According to Air Force officials, past housing and United Kingdom basing trends indicate that personnel associated with the JIAC would live off base in the private rental market, and the private rental market could sufficiently absorb the housing needs of the JIAC personnel. Further, EUCOM reported that the United Kingdom had the capacity to absorb the number of personnel associated with the JIAC move and that DOD would not need to build additional military family housing at RAF Croughton. The 2016 Housing Requirements Market Analysis for RAF Croughton reported that the private rental market was very active, that there was a total private rental stock of 69,364 rental units, and that the housing supply was projected to grow to 72,905 units by 2020. In addition to its multiple reviews, DOD issued a memorandum in March 2016 stating that the department had determined that RAF Croughton remained the optimal location for the JIAC and that Lajes Field was not an optimal location for the JIAC. Specifically, the Deputy Secretary of Defense issued a memorandum in response to several requirements in Section 2310 of the National Defense Authorization Act for Fiscal Year 2016; House Report 114-144 accompanying HR 2596, the Intelligence Authorization Act for Fiscal Year 2016; and Section 8114 of the DOD Appropriations Act for Fiscal Year 2016 (division C). The memorandum states that DOD’s decision was based on an analysis of U.S. 
operational requirements and an evaluation of multiple locations using five criteria: effect on intelligence operations (critical criterion); impact on bilateral and multinational intelligence collaboration (critical criterion); impact on international agreements and relationships; impact on community quality of life; and business case analysis. According to officials from the Office of the Secretary of Defense, DOD reviewed existing analysis and did not conduct new in-depth analysis to support the certification memorandum. The analysis DOD used to support the memorandum was based on the original analysis of alternatives process that DOD developed for the JIAC consolidation—which did not include Lajes Field as an alternative location—and on subsequent comparisons of Lajes Field and RAF Croughton. The officials stated that no additional in-depth analysis was warranted because no credible new evidence had been produced to indicate the department should revisit its initial decision. To address costly sustainment challenges and instances of degraded theater intelligence capabilities associated with the current JIAC facilities at RAF Molesworth, DOD plans to spend almost $240 million for the Air Force to consolidate and relocate the JIAC’s facilities at RAF Croughton. However, the Air Force’s cost estimate did not fully meet cost estimating best practices that are intended, when followed, to produce high-quality, reliable estimates. For example, the JIAC cost estimate included only MILCON costs and did not include costs associated with the life cycle of the project. Without fully accounting for life-cycle costs, management may have difficulty successfully planning and programming resource requirements for the JIAC consolidation and making sound decisions. Furthermore, the JIAC cost estimate lacked a sensitivity analysis, which would assess the underlying risks and supporting data. 
Without identifying the effects of uncertainties associated with different assumptions for the JIAC consolidation project, there is an increased risk that decisions will be made without a clear understanding of these effects on costs. Unless DOD uses best practices as it prepares future cost estimates for the remaining design and construction phases of the JIAC consolidation project, decision makers will not receive complete and reliable information on the total anticipated costs for the JIAC consolidation efforts for which they need to conduct oversight and make informed funding decisions. Furthermore, addressing limitations in future JIAC cost estimates can provide DOD better information to predict costs and make informed decisions about the JIAC consolidation. To better enable DOD to provide congressional decision makers with complete and reliable information on the total anticipated costs for the JIAC consolidation efforts, we recommend that the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment’s Basing Office—in coordination with the Office of the Assistant Secretary of the Air Force Installations, Environment and Energy—update future construction cost estimates for consolidating the JIAC at RAF Croughton using best practices for cost estimating as identified in the GAO Cost Estimating and Assessment Guide. Specifically, cost estimates for the JIAC consolidation should fully incorporate all four characteristics of a high-quality, reliable estimate. We provided a draft of this report to DOD for review and comment. DOD provided written comments on our recommendation, which are reprinted in appendix III. The department also provided technical comments that we incorporated as appropriate. In its written comments, DOD did not concur with our recommendation. 
DOD agreed that many components in the GAO Cost Estimating and Assessment Guide are broadly applicable in the decision process leading up to a military construction budget request. However, DOD further stated that once military construction funds are authorized and appropriated by Congress, the department transitions to a project management mode, and it would be a waste of resources to continue to generate cost estimates once they have transitioned to managing project execution using actual cost data. However, as we note in the report, DOD guidance for estimating construction costs, DOD’s Unified Facilities Criteria 3-740-05, states that in the MILCON program, construction cost estimates are prepared throughout the planning, design, and construction phases of a construction project to account for the refinement of the project’s design and requirements. The final estimate should document the department’s assessment of the program’s most probable cost and ensure that enough funds are available to execute it. As of October 2016, the military construction funds had not been authorized by Congress for the third phase of the JIAC construction project. According to DOD officials, construction is not scheduled to begin until fall of 2017, and the contract has not yet been awarded. Further, the GAO Cost Estimating and Assessment Guide states that regardless of whether changes to the program result from a major contract modification or an overtarget budget, the cost estimate should be regularly updated to reflect all changes. This is also a requirement outlined in OMB’s Capital Programming Guide. The purpose of updating the cost estimate is to check its accuracy, defend the estimate over time, and archive cost and technical data for use in future estimates. 
After the internal agency and congressional budgets are prepared and submitted, it is imperative that cost estimators continue to monitor the program to determine whether the preliminary information and assumptions remain relevant and accurate. Keeping the estimate updated gives decision makers accurate information for assessing alternative decisions. Cost estimates must also be updated whenever requirements change, and the results should be reconciled and recorded against the old estimate baseline. Therefore, we continue to believe that DOD’s implementation of our recommendation to update future JIAC cost estimates using the best practices identified in the GAO Cost Estimating and Assessment Guide would assist in ensuring that decision makers have complete and reliable information about costs associated with the JIAC consolidation and as the third phase of the JIAC project is authorized. Implementing our recommendation would also ensure that DOD develops a reliable historical record for the cost of the JIAC that can be used to estimate other similar projects in the future. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this report. At that time, we will send copies of this report to the appropriate congressional committees and to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Assistant Secretary of Defense for Energy, Installations, and Environment. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
According to the GAO Cost Estimating and Assessment Guide, a cost estimate is a critical element in any acquisition process to help decision makers evaluate resource requirements at milestones and other important decision points. Cost estimates establish and defend budgets and drive affordability analysis. The guide identifies four characteristics of a high-quality, reliable cost estimate: it is comprehensive, well documented, accurate, and credible. A cost estimate is considered comprehensive when it accounts for all possible costs associated with a project, details all cost-influencing ground rules and assumptions, is technically reasonable, is structured in sufficient detail to ensure that costs are neither omitted nor double-counted, and the estimating teams’ composition is commensurate with the assignment; well documented when supporting documentation for the estimate is accompanied by a narrative explaining the process, sources, and methods used to create the estimate and contains the underlying data used to develop the estimate; accurate when the estimate is neither overly conservative nor too optimistic and is based on an assessment of the costs most likely to be incurred; and credible when the estimate has been cross-checked with independent cost estimates, the level of confidence associated with the point estimate—the best guess at the cost estimate given the underlying data—has been identified, and a sensitivity analysis has been conducted. During the sensitivity analysis, the project will have examined the effect of changing one assumption related to each project activity while holding all other variables constant in order to identify which variable most affects the cost estimate. Our analysis of the Air Force’s February 2015 cost estimate for the Joint Intelligence Analysis Complex (JIAC) showed that, when compared with best practices, it minimally met one and partially met three of the four characteristics of a reliable cost estimate (see table 1). 
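The one-at-a-time approach described above can be sketched in a few lines of code. The cost model, variable names, and baseline values below are hypothetical, not drawn from the JIAC estimate; the point is only the mechanic of varying one assumption while holding all the others constant.

```python
# Minimal sketch of a one-at-a-time sensitivity analysis: vary each
# assumption by +/-10% while holding the others at their baseline values,
# and see which one swings the total cost estimate the most.
# The cost model and baseline figures below are hypothetical.
baseline = {"labor_rate": 85.0, "sq_ft": 120_000, "cost_per_sq_ft": 450.0}

def total_cost(assumptions):
    # Hypothetical model: construction cost plus a labor adder.
    return (assumptions["sq_ft"] * assumptions["cost_per_sq_ft"]
            + assumptions["labor_rate"] * 50_000)

swings = {}
for name in baseline:
    results = []
    for factor in (0.9, 1.1):           # +/-10% excursion
        trial = dict(baseline)          # all other variables held constant
        trial[name] = baseline[name] * factor
        results.append(total_cost(trial))
    swings[name] = max(results) - min(results)

most_sensitive = max(swings, key=swings.get)
```

In this toy model the construction-related variables dominate the swing in total cost, which is exactly the kind of driver a sensitivity analysis is meant to surface for decision makers.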
According to the GAO Cost Estimating and Assessment Guide, a cost estimate is considered reliable if the overall assessment ratings for each of the four characteristics are substantially or fully met. If any of the characteristics is not met, minimally met, or partially met, then the cost estimate does not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable. The May 2016 review by the Defense Information Systems Agency (DISA) discussed technical requirements and developed minimum standards for developing its cost estimates for communications infrastructure associated with locating the Joint Intelligence Analysis Complex (JIAC) at Lajes Field. The standards support the Department of Defense’s (DOD) position that locating the JIAC at Lajes Field would require the procurement and installation of three undersea cables. The DISA review stated that capabilities for global intelligence telecommunications at the JIAC must be secure, highly available, reliable, and redundant. The review also listed technical requirements based on these four characteristics, none of which—according to the review—the infrastructure at Lajes Field currently meets. One of these requirements is a critical infrastructure protection practice from the Director of National Intelligence, which prohibits the use of communication paths that could result in denial of service or could compromise the integrity of information. The review characterizes DOD’s meeting this requirement at Lajes Field as a high risk, noting that non-DOD personnel from Huawei, a Chinese telecommunications company, could disconnect one fiber of the two-fiber ring at Lajes Field, which would eliminate the redundancy of the two cables and increase the risk that JIAC personnel would not be able to use the communications infrastructure to meet their operational requirements. 
The DISA review also listed a technical requirement that the communications infrastructure be able to operate at 56 gigabits per second, which the review noted is the minimum operational requirement for Non-classified Internet Protocol Router Network; Secret Internet Protocol Router Network; Joint Worldwide Intelligence Communications System; and voice, video, and data. According to the review, the current capabilities at Lajes Field do not meet this requirement. In comparison, the capacity at RAF Croughton allows for 800 gigabits per second, the capacity at U.S. Central Command also allows for 800 gigabits per second, and the capacity at U.S. Pacific Command allows for 100 gigabits per second. According to the review, DOD technical requirements also specify that the communications cables must be available at 99.999 percent or higher, which equates to just a few minutes of downtime per year. As indicated by DISA in its review, this level of availability requires sufficient redundancy. The capabilities at Lajes Field do not meet this requirement, according to DISA’s review. DISA officials provided us data on the number of average fiber cable outages per week in 2015—6.8 average outages per week for transatlantic cables and 4.4 average outages per week for Pacific cables. The frequency with which the cables experience outages highlights the need for redundancy in fiber cable routes. Without procuring and installing three undersea cables, Lajes Field would not have the availability, redundancy, capacity, and security necessary to house the JIAC. The September 2015 review by U.S. European Command (EUCOM) also references standards; however, it did not discuss these standards in detail. Its appendix on communications infrastructure, developed by DIA, says that the minimum threshold for fiber cables is two protected pathways to mainland Europe and one to the continental United States. 
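The "just a few minutes of downtime per year" implied by the 99.999 percent availability requirement follows directly from the percentage; a quick arithmetic sketch:

```python
# Convert an availability percentage into allowable downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# 99.999% ("five nines") allows roughly 5.3 minutes of downtime per year,
# consistent with the "just a few minutes" characterization above.
five_nines = downtime_minutes(99.999)
```

By comparison, a single lost nine (99.99 percent) would allow nearly an hour of downtime per year, which is why DISA's review ties the five-nines target to redundant cable routes.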
Additionally—similar to the May 2016 DISA review—the EUCOM review indicated that three new undersea systems would have to be installed at Lajes Field to meet DOD requirements. DIA officials also told us that their assessment was based on DOD guidance and requirements, such as the Joint Intelligence Operations Center Enterprise Functional Requirements document and the Chairman of the Joint Chiefs of Staff Instruction 6211.02D, Defense Information Systems Network (DISN) Responsibilities (Jan. 24, 2012). The review by the Office of Cost Assessment and Program Evaluation (CAPE) did not discuss requirements or standards for the communications infrastructure, because it relied on DISA’s previous cost estimates. CAPE officials stated that they had deferred to DISA’s estimate, because DISA is the authoritative source for communications infrastructure design. In addition to the contact named above, Brian Mazanec (Assistant Director), Jennifer Andreone, Tracy Barnes, Jennifer Echard, Justin Fisher, Joanne Landesman, Jennifer Leotta, Amie Lesser, Jamilah Moon, Carol Petersen, and Sam Wilson made key contributions to this report.
DOD's JIAC, which provides critical intelligence support for the U.S. European and Africa Commands and U.S. allies, is currently located in what DOD has described as inadequate and inefficient facilities at RAF Molesworth in the United Kingdom. To address costly sustainment challenges and instances of degraded theater intelligence capabilities associated with the current JIAC facilities, the Air Force plans to spend almost $240 million to consolidate and relocate the JIAC at RAF Croughton in the United Kingdom. GAO was asked to review analysis associated with consolidating and relocating the JIAC. This report (1) assesses the extent to which DOD's cost estimate for the JIAC consolidation at RAF Croughton aligns with best practices and (2) describes key reviews DOD has conducted since spring of 2013 related to an alternative location for JIAC consolidation. GAO compared the Air Force's February 2015 JIAC cost estimate with GAO best practices for developing federal cost estimates, reviewed key DOD analysis of Lajes Field as a potential alternative location for the JIAC, and interviewed DOD officials. GAO assessed the cost estimate for the military construction project to consolidate and relocate the Joint Intelligence Analysis Complex (JIAC) at Royal Air Force (RAF) base Croughton and found that it partially met three and minimally met one of the four characteristics of a reliable cost estimate defined by GAO best practices, as shown in the table below. For example, it minimally met the credibility standard because it did not contain a sensitivity analysis; such analyses reveal how the cost estimate is affected by a change in a single assumption, without which the estimator will not fully understand which variable most affects the estimate. 
Unless the Department of Defense's (DOD) methodology incorporates all four characteristics of a high-quality, reliable estimate in preparing future cost estimates for the JIAC construction project, it will not be providing decision makers with reliable information. After DOD's 2013 decision to consolidate the JIAC at RAF Croughton, DOD organizations conducted multiple reviews in response to congressional interest in Lajes Field, Azores (Portugal) as a potential alternative location for the JIAC, including the U.S. European Command (EUCOM) September 2015 review, a cost comparison and location analysis of RAF Croughton and Lajes Field; the Office of the Secretary of Defense Cost Assessment and Program Evaluation April 2016 cost verification for the JIAC, an independent review of EUCOM's September 2015 cost estimates and those developed by the House Permanent Select Committee on Intelligence in July 2015; and the Defense Information Systems Agency May 2016 review on JIAC communications infrastructure requirements, an assessment and comparison of the communications infrastructures at Lajes Field and RAF Croughton with the intelligence mission support requirements, including the communications and technical requirements, for the JIAC. These reviews produced different cost estimates, in particular for housing and communications infrastructure, because the DOD organizations that developed them relied on different assumptions. DOD officials said that these reviews were not conducted with the same level of rigor as formal cost estimates, because DOD had concluded its analysis of alternatives and no credible new evidence had been produced to indicate the department should revisit its initial decision to consolidate the JIAC at RAF Croughton. GAO recommends that DOD update its future construction cost estimates for consolidating the JIAC at RAF Croughton to comply with best practices for cost estimating identified by GAO. 
DOD did not agree, stating it would waste resources to continue to generate cost estimates once DOD transitions to managing the project with actual cost data. GAO continues to believe that its recommendation is valid, as discussed in this report.
Since the 1960s, the United States has operated two separate polar- orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products. These satellite data are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather days in advance—including forecasting the path and intensity of hurricanes. The weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position relative to the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each satellite views the entire earth’s surface twice a day. Currently, a NOAA/NASA satellite (called the Suomi National Polar-orbiting Partnership, or S-NPP) and two operational DMSP satellites are positioned so that they cross the equator in the early morning, mid-morning, and early afternoon. In addition, the government relies on a series of European satellites, called the Meteorological Operational (Metop) satellites, for satellite observations in the midmorning orbit. These polar-orbiting satellites are considered primary satellites for providing input to weather forecasting models. 
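The orbit figures above can be sanity-checked with simple arithmetic. The following sketch is illustrative only (the 14-orbits-per-day figure comes from the text; the code is not part of the report):

```python
# Back-of-the-envelope check of the polar orbit figures cited above:
# about 14 orbits per day implies an orbital period of roughly
# 100 minutes, typical of low-earth polar orbits. As the earth rotates
# beneath the satellite, successive orbits sweep new ground tracks,
# which is how each satellite views the entire surface twice a day.
MINUTES_PER_DAY = 24 * 60   # 1,440 minutes
ORBITS_PER_DAY = 14         # figure cited in the text

period_minutes = MINUTES_PER_DAY / ORBITS_PER_DAY
print(round(period_minutes, 1))  # 102.9
```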
In addition to these primary satellites, NOAA, the Air Force, and a European weather satellite organization maintain older satellites that still collect some data and are available to provide limited backup to the operational satellites should they degrade or fail. Figure 1 illustrates the current operational polar satellite constellation. According to NOAA, 80 percent of the data assimilated into its National Weather Service numerical weather prediction models that are used to produce weather forecasts 3 days and beyond comes from polar-orbiting satellites. Specifically, a single afternoon polar satellite provides NOAA 45 percent of the global coverage it needs for its numerical weather models. NOAA obtains the rest of the polar satellite data it needs from other satellite programs, including the Department of Defense’s (DOD) early morning satellites and the European mid-morning satellite. Polar satellites gather a broad range of data that are transformed into a variety of products. Satellite sensors observe different bands of radiation wavelengths, called channels, which are used for remotely determining information about the earth’s atmosphere, land surface, oceans, and the space environment. When first received, satellite data are considered raw data. To make them usable, processing centers format the data so that they are time-sequenced and include earth-location and calibration information. After formatting, these data are called raw data records. The centers further process these raw data records into channel-specific data sets, called sensor data records and temperature data records. These data records are then used to derive weather and climate products called environmental data records. 
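The progression of data records described here forms a strictly ordered pipeline. The sketch below is hypothetical code (only the stage names are taken from the report) showing that each product type is derived from the stage before it:

```python
# Illustrative sketch of the satellite data processing chain described
# in the text. Only the stage names come from the report; the code
# structure itself is an assumption for illustration.
STAGES = [
    "raw data",                        # as first received from the satellite
    "raw data record",                 # time-sequenced, earth-located, calibrated
    "sensor/temperature data record",  # channel-specific data sets
    "environmental data record",       # derived weather and climate products
]

def next_stage(stage):
    """Return the processing stage that follows `stage`, or None if fully processed."""
    i = STAGES.index(stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

print(next_stage("raw data"))  # raw data record
```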
These environmental data records include a wide range of atmospheric products detailing cloud coverage, temperature, humidity, and ozone distribution; land surface products showing snow cover, vegetation, and land use; ocean products depicting sea surface temperatures, sea ice, and wave height; and characterizations of the space environment. Combinations of these data records (raw, sensor, temperature, and environmental data records) are also used to derive more sophisticated products, including outputs from numerical weather models and assessments of climate trends. Figure 2 is a simplified depiction of the various stages of satellite data processing. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single one capable of satisfying both civilian and military requirements: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS satellites were expected to replace the POES and DMSP satellites in the morning, mid-morning, and afternoon orbits when they neared the end of their expected life spans. To reduce the risk involved in developing new technologies and to maintain climate data continuity, the program planned to launch a demonstration satellite in May 2006. The first NPOESS satellite was to be available for launch in March 2008. However, in the years after the program was initiated, NPOESS encountered significant technical challenges in sensor development, program cost growth, and schedule delays. By March 2009, agency executives decided to use the planned demonstration satellite as an operational satellite because the schedule delays could have led to a gap in satellite data. 
Eventually, cost and schedule concerns led the White House's Office of Science and Technology Policy to announce in February 2010 that NOAA and DOD would no longer jointly procure the NPOESS satellite system; instead, each agency would plan and acquire its own satellite system. Specifically, NOAA—with support from NASA—would be responsible for the afternoon orbit and DOD would be responsible for the early morning orbit. The agencies would rely on European satellites for the mid-morning orbit. When the decision to disband NPOESS was announced, NOAA and NASA immediately began planning for a new satellite program in the afternoon orbit called JPSS. Key plans included acquiring and launching two satellites for the afternoon orbit, called JPSS-1 and JPSS-2; relying on NASA for system acquisition, engineering, and integration; completing, launching, and supporting S-NPP; developing and integrating five instruments on the two satellites; finding alternative host satellites for selected instruments that would not be accommodated on the JPSS satellites; and providing ground system support for JPSS (including S-NPP), and data communications for other missions, including the Metop satellite. NOAA organized the JPSS program into flight and ground projects that have separate areas of responsibility. Figure 3 depicts program components. The flight project includes a set of five instruments, the spacecraft, and launch services. Table 1 lists and describes the instruments. The ground project consists of ground-based systems that handle satellite communications and data processing. The JPSS program is working to implement a critical upgrade to the JPSS ground system that will allow it to support both the S-NPP and all planned JPSS satellites. The ground system's versions are numbered; the version that is currently in use is called Block 1.2, and the new version that is under development is called Block 2.0. 
While Block 2.0 is planned to replace Block 1.2, a JPSS program official stated that there will be a period of overlap of about 60 days during which both versions are operational, and noted that Block 1.2 may stay online longer, if warranted, to address unanticipated problems on Block 2.0. In addition to multi-mission support, program officials stated that the new iteration of the ground system will also have a different set of security requirements that are designed specifically for the JPSS system, as opposed to the old requirements, which were based on legacy needs. Officials also stated that the upgrade will include an enhanced architecture that is more scalable to future changes, and will allow NOAA to replace obsolete hardware and software. Since its inception, the composition and cost of the JPSS program have varied. In 2010, NOAA estimated that the life-cycle costs of the JPSS program would be approximately $11.9 billion for a program lasting through fiscal year 2024, which included $2.9 billion in NOAA funds spent on NPOESS through fiscal year 2010. Following this, the agency undertook a cost estimating exercise where it validated that the cost of the full set of JPSS functions from fiscal year 2012 through fiscal year 2028 would be $11.3 billion. After adding the agency's sunk costs, which had increased to $3.3 billion through fiscal year 2011, the program's life-cycle cost estimate totaled $14.6 billion. Subsequently, NOAA took steps to lower this estimate, since it was $2.7 billion higher than the original estimate for JPSS at the time that NPOESS was disbanded. In fiscal year 2013, NOAA officials agreed to cap the JPSS life-cycle cost at $12.9 billion, and to merge funding for two climate sensors into the JPSS budget. 
By October 2012, NOAA also decided to reduce the scope of selected elements of the satellite program, such as the number of ground-based receptor stations (thus affecting the time it takes for products to reach end users) and the number of interface data processing segments. The administration then directed NOAA to begin implementing additional changes in the program's scope and objectives in order to meet the agency's highest-priority needs for weather forecasting and reduce estimated life-cycle costs from $12.9 billion to $11.3 billion. By April 2013, NOAA had decided to, among other things, cancel one of two planned free-flyer missions and transfer the remaining free-flyer mission to a new program within NOAA called the Solar Irradiance, Data, and Rescue mission. In addition, requirements for certain climate sensors were moved to NASA. As we reported previously, NOAA also reduced the estimated life-cycle cost of the program by eliminating the operational costs for the 3 years at the end of the JPSS mission; the current life-cycle cost estimate includes operational costs through 2025 even though the JPSS-2 satellite is expected to be operational until 2028. Table 2 compares the planned cost, schedule, and scope of the JPSS program at different points in time. Safeguarding federal computer systems and the systems supporting the nation's infrastructures, including the nation's weather observation and forecasting infrastructure, is essential to protecting national and economic security, and public health and safety. For government organizations, information security is also a key element in maintaining the public trust. Inadequately protected systems may be vulnerable to insider threats as well as the risk of intrusion by individuals or groups with malicious intent who could unlawfully access the systems to obtain sensitive information, disrupt operations, or launch attacks against other computer systems and networks. 
Moreover, cyber-based threats to federal information systems are evolving and growing. Accordingly, we designated information security as a government-wide high-risk area in 1997, and it has remained on our high-risk list since then. Federal law and guidance specify requirements for protecting federal information and information systems. The Federal Information Security Management Act of 2002 and the Federal Information Security Modernization Act of 2014 (FISMA), which largely supersedes the 2002 act, require executive branch agencies to develop, document, and implement an agency-wide information security program to provide security for the information and information systems that support the operations and assets of the agency. The 2002 act also assigns certain responsibilities to the National Institute of Standards and Technology (NIST), which is tasked with developing, for systems other than national security systems, standards and guidelines that must include, at a minimum, (1) standards to be used by all agencies to categorize all of their information and information systems based on the objectives of providing appropriate levels of information security, according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. Accordingly, NIST developed a risk management framework of standards and guidelines for agencies to follow in developing information security programs. The framework addresses broad information security and risk management activities, including categorizing the system's impact level; selecting, implementing, and assessing security controls; authorizing the system to operate (based on progress in remediating control weaknesses and an assessment of residual risk); and monitoring the efficacy of controls on an ongoing basis. 
Figure 4 shows an overview of this framework and table 3 describes the framework's key activities and artifacts. In addition, appendix II describes relevant NIST publications. Federal agencies face an evolving array of information security threats that put federal systems and information at an increased risk of compromise. In September 2015, we reported that federal agencies showed weaknesses in several major categories of information system controls, including access controls, which limit or detect access to computer resources, and configuration management controls, which are intended to prevent unauthorized changes to information system resources. Further, in November 2015, we reported that over the past 6 years we had made about 2,000 recommendations to improve information security programs and associated security controls. We noted that agencies had implemented about 58 percent of these recommendations. Since 2012, we have issued three reports on the JPSS program that highlighted technical issues, component cost growth, management challenges, and key risks. In these reports, we made a total of 11 recommendations to NOAA to improve the management of the JPSS program. These recommendations included addressing key risks and establishing a comprehensive contingency plan consistent with best practices. The agency agreed with these 11 recommendations. As of December 2015, the agency had implemented 2 recommendations and was working to address the remaining 9. More specifically, in September 2013 and December 2014, we reported that while NOAA had taken steps to mitigate an anticipated gap in polar satellite data, it had not yet established a comprehensive contingency plan. 
For example, its plan did not fully address key elements, such as including recovery time objectives for key products, identifying opportunities for accelerating calibration and validation of products, and providing an assessment of available alternatives based on their costs and potential impacts. In addition, we found that NOAA had not prioritized these alternatives. We recommended that NOAA revise its plan to, among other things, identify recovery time objectives for key products, provide an assessment of alternatives based on costs and potential impacts, and establish a schedule with meaningful timelines and linkages among mitigation activities. We also recommended that NOAA investigate ways to prioritize mitigation projects with the greatest potential benefit in the event of a gap. NOAA agreed with these recommendations and stated it was taking steps to implement them. In December 2014, we also found that, while NOAA was providing oversight of its many gap mitigation projects and activities, the agency's oversight efforts were not consistent or comprehensive. Specifically, only one of three responsible entities obtained monthly progress reports, and the three responsible agencies reported only on selected activities on a quarterly basis. We recommended that NOAA ensure that relevant entities provide monthly and quarterly updates of progress on all gap mitigation projects during existing meetings. NOAA agreed with this recommendation and stated it was taking steps to implement it. At that time, we also reported that NOAA had previously revised its estimate of how long a gap could last down to 3 months, but that this estimate was based on inconsistent and unproven assumptions and did not account for the risk that space debris poses to the S-NPP satellite's life expectancy. 
We recommended that NOAA update the JPSS program's assessment of potential polar satellite data gaps to include more accurate assumptions about launch dates and the length of the data calibration period, as well as key risks such as the potential effect of space debris. NOAA agreed with this recommendation and stated it was taking steps to implement it. Over the last year, the JPSS program has continued to make progress in developing the JPSS-1 satellite. In early 2015, the program completed two key instruments for the JPSS-1 satellite: the CrIS and VIIRS instruments. The program also completed its Systems Integration Review for the JPSS-1 satellite in February 2015. More recently, the program completed the ATMS instrument and integrated the instruments on the spacecraft. As of December 2015, the JPSS program reported that it remained on track to meet its planned launch date of March 2017 for the JPSS-1 satellite, and still expected the JPSS-2 satellite to launch no later than November 2021. However, the program has continued to experience delays in meeting interim milestones. In 2014, we reported that key components of the JPSS-1 satellite had experienced delays. Since that time, the program has continued to experience delays of 3 to 10 months on key components. In particular, one component has experienced almost 2 years of delay since July 2013. Table 4 provides details on specific key milestones. As of January 2016, the program continued to experience technical challenges that could cause additional schedule delays and potentially affect the scheduled launch of the JPSS-1 satellite. A delay in completing a key component on the spacecraft, called a gimbal, has in turn delayed the beginning of environmental testing. Since November 2014, program officials have moved the component's planned completion date from April 2015 to February 2016. The JPSS ground system also has experienced recent delays. 
The program experienced an unexpectedly high number of program trouble reports in completing an upgrade on the ground system. A key milestone related to this upgrade was recently delayed from January to August 2016. Program officials stated that delays such as these are normal and anticipated on complex and technical space system development efforts like JPSS, and that the program includes schedule reserves to address such challenges as they arise. As of January 2016, the program reported it had 24 days of margin remaining to its launch readiness date of December 2016, and another 3 months of margin between that date and the launch commitment date of March 2017. However, the margin of 24 days prior to the launch readiness date is less than the 1.9 months recommended by NASA’s development standards. This margin is also a decrease from the 6 months of margin the program had in July 2014. Given this narrowing of available schedule reserves, resolving the remaining technical issues (discussed later in this report) will be critical to achieving the planned launch date. The JPSS program’s baseline life-cycle cost estimate remains at $11.3 billion, but the cost of the flight segment has grown and the amount of reserve funds has decreased. Specifically, the cost of the flight segment grew by 8 percent from July 2013 to July 2014, and by another 2 percent in the period from July 2014 to December 2015. During those time frames, the cost of the ground system remained relatively steady; it dropped by 3 percent between 2013 and 2014 and then rose by 1.4 percent between 2014 and 2015. Over this 2-year period, NOAA’s estimate for the program’s development, maintenance, and operations has grown from almost $10.4 billion to just under $10.7 billion, meaning that the corresponding amount of reserve funds has decreased. The program currently has about $648 million in reserve funding for unanticipated issues over the life of the program. 
This is a 12.7-percent reduction in the amount of reserves between July 2014 and December 2015. Table 5 shows changes in cost estimates for JPSS program components between July 2014 and December 2015, as well as the overall percentage of change between July 2013 and December 2015. Within the flight segment, selected components have continued to experience higher cost growth. Since July 2014, the ATMS instrument's cost increased by nearly 16 percent, while the OMPS instrument's cost grew by nearly 10.4 percent (with a 7 percent increase between July and August 2015). In contrast, during the same time period, the VIIRS instrument's cost decreased by 1.5 percent and the CrIS instrument's cost decreased by 3.8 percent. NOAA officials stated that they are using information gained from the development of JPSS-1 instruments to aid in developing instruments for JPSS-2. Leveraging this information will be important in controlling costs on future satellites. Program officials stated that component cost increases such as these are normal and anticipated on complex and technical space system development efforts like JPSS. The program director explained that reserves were included in the life-cycle cost estimate to address these cost increases and that the program is continuing to work within its approved life-cycle cost estimate. The JPSS program's risk management guidance calls for identifying risks, developing action plans for addressing the risks, and reporting to management on key risks. These action plans are to include a list of steps to mitigate the risks and when those steps are to be completed. Since its inception in 2010, JPSS has identified and tracked key program risks. Moreover, the program office presents key risks during NOAA monthly program management council reviews. Over the last 2 years, NOAA has successfully closed four key risks. These risks involved components that directly impact cost, schedule, and technical aspects of the program. 
More specifically, NOAA resolved risks involving a delay in the use of legacy polar data; a delay in completing problem change reports related to the current ground system; and issues stemming from the sale of a supplier of high-performance computing technology. However, as of November 2015, risks remained on both the flight and ground segments for JPSS that could potentially impact the planned completion of the spacecraft and ground system. JPSS-1 spacecraft component delivery: The program has experienced issues with development of the gimbal component, which, as stated above, facilitates the transmission of data to the ground system and other satellites. The delivery date of the gimbal component continues to slip and has begun to impact remaining integration and test activities. The JPSS program office has taken steps to mitigate this risk by asking the prime spacecraft contractor to create a contingency plan on this issue, and delaying environmental testing until production is completed. However, the significant delays and rework involved have already caused critical milestone dates to slip up to five times. If this issue continues to consume program reserves, it may further delay NOAA's ability to begin environmental testing on other areas of the spacecraft, thus delaying launch readiness. Ground segment issues: The program is facing several issues in developing and testing the next version of the ground system (Block 2.0), which could delay it from being operational when needed to support the JPSS-1 satellite. Specifically, a recent site acceptance test resulted in a higher-than-expected number of problem change requests in a new version of the ground system. These have not yet been resolved. The program is also experiencing challenges in testing the ground system's requirements that may cause a delay in verifying some requirements until closer to launch. 
Program officials reported that they are developing a contingency plan to deal with the open change requests, and are re-planning the activities leading up to the completion of Block 2.0 in order to remove potential schedule conflicts between the ground and satellite testing schedules. Similar to its efforts to manage the program's cost and schedule, the JPSS program office is actively monitoring these risks. Close management and monitoring of costs, schedules, and risks will be essential to ensuring a successful and timely launch. In accordance with FISMA and the NIST risk management framework, NOAA has established security policies and procedures governing its organizations and programs in each of the framework areas. The JPSS program implemented information security practices in the area of system categorization, and made progress in implementing information security practices in each of the other risk management areas. However, the program has yet to fully implement the best practices and policies established by the organization, and shortfalls exist in each of the remaining areas. For example, while the program has established plans of action to address control weaknesses, it has not addressed systemic critical issues in a timely manner. Although the program is required to remediate critical and high risk vulnerabilities within 30 days, as of August 2015 it had over 1,400 critical and high risk vulnerabilities that were over 4 months old. As described earlier, FISMA requires federal agencies to develop, document, and implement an agency-wide information security program. It also calls for agencies to perform key activities to protect critical assets, in accordance with NIST's risk management framework. 
This framework provides broad information security and risk management activities that guide the life-cycle processes to be followed in developing information systems: System Categorization: Programs are to categorize systems by identifying the types of information used, selecting a provisional impact level, modifying the rating based on mission-based factors, and assigning a category based on the highest level of impact to confidentiality, integrity, and availability. Programs select the initial impact levels using an assessment of threat events and their impact to operations. Selection and Implementation of Security Controls: Programs are to determine protective measures, or security controls, to be implemented based on the system categorization results. These security controls are documented in a System Security Plan. Key controls include access controls; incident response; security assessment and authorization; identification and authentication; and configuration management. Once controls are identified, programs are to determine implementation actions for each of the designated controls. These implementation actions are also specified in the System Security Plan. Assessment of Security Controls: Programs are to develop a test plan that will determine which controls to test (called a Security Controls Assessment), prioritize and schedule assessments, select and customize techniques, and develop risk mitigation techniques to address weaknesses. In addition to testing controls, test plans may also include penetration testing, which involves simulating attacks to identify methods for circumventing the security features of an application, system, or network, and using tools or techniques commonly used by attackers. Authorization to Operate (ATO): Programs are to obtain security authorization approval in order to operate. Resolving weaknesses and vulnerabilities identified during testing is an important step leading up to achieving ATO. 
Programs are to establish plans of action and milestones (POA&Ms) to plan, implement, and document remedial actions to address any deficiencies in information security policies, procedures, and practices. Monitoring of Security Controls: Agencies are to monitor their security controls on an ongoing basis after deployment, including assessing controls' effectiveness and reporting on the security state of the system. A key part of ongoing monitoring is handling incidents. NIST guidance specifies procedures for implementing FISMA incident-handling requirements, and includes guidelines on establishing an effective incident response program and detecting, analyzing, prioritizing, and handling an incident. In accordance with NOAA policy, the JPSS program implemented key elements of the NIST framework regarding system categorization and identified the ground system as a high-impact system. A high-impact system is one where the loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals. Steps leading to this categorization included the following: The JPSS program identified several information types relevant to the JPSS mission, including space operations, environmental monitoring and forecasting, contingency planning, and continuity of operations. For each information type, JPSS program officials identified security levels in the areas of confidentiality, availability, and integrity, based on the nature of its mission. Program officials chose these levels based on a detailed risk assessment, which allowed them to determine the extent to which threats could adversely impact the organization and the extent to which agency systems are vulnerable to these circumstances or events. The program assigned an overall high-impact security level for its ground system, based on the highest impact level for each of the component information types. 
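The categorization step just described is the "high-water mark" rule from FIPS 199: the overall system impact level is the highest rating across confidentiality, integrity, and availability for all of the system's information types. A minimal sketch follows; the example ratings are hypothetical, not NOAA's actual assessments:

```python
# Illustrative sketch of the FIPS 199 "high-water mark" categorization
# described above. The rule is real; the function and the example
# ratings below are hypothetical, not NOAA's actual tooling or data.
LEVELS = {"low": 0, "moderate": 1, "high": 2}

def high_water_mark(info_types):
    """info_types maps each information type to its confidentiality,
    integrity, and availability ratings; return the highest level found."""
    worst = "low"
    for ratings in info_types.values():
        for level in ratings.values():
            if LEVELS[level] > LEVELS[worst]:
                worst = level
    return worst

# Hypothetical ratings for a few of the information types named in the text.
ground_system = {
    "space operations":          {"confidentiality": "moderate", "integrity": "high", "availability": "high"},
    "environmental monitoring":  {"confidentiality": "low", "integrity": "high", "availability": "moderate"},
    "continuity of operations":  {"confidentiality": "moderate", "integrity": "moderate", "availability": "high"},
}
print(high_water_mark(ground_system))  # high
```

Because a single "high" rating in any one information type raises the whole system's category, even a system composed mostly of low-impact data can be categorized high-impact, as the JPSS ground system was.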
In accordance with NOAA policy and NIST guidance, the JPSS program established a System Security Plan for its ground system that identifies the key security controls it plans to implement based on its system security categorization and impact analyses. Key control areas include access controls, risk assessment, incident response, identification and authentication, and configuration management. However, the program determined that the JPSS ground system is at a high risk of compromise to its confidentiality, integrity, and availability due to the significant number of controls that were not fully implemented. According to program documentation, as of June 2015, the JPSS program had fully implemented 53 percent of the baseline system security controls, and partially implemented the remaining controls. Moreover, out of 17 control areas, the JPSS program had fully implemented all of the controls for only one area: incident response. The program has not fully implemented security controls for the remaining 16 control areas. The areas with the most partially implemented controls were physical protection, access control, audit and accountability, and configuration management. Program officials explained that there are so many partially implemented controls because the current ground system (Block 1.2) was built under the predecessor NPOESS program to DOD moderate security standards. When NPOESS was disbanded and NOAA initiated the JPSS program in 2010, the program took over development of the S-NPP satellite and ground system. Program officials noted that NOAA's early priorities were to transition the DOD contracts to NOAA and to establish the JPSS program office, and that they were not able to begin planning to upgrade the ground system until 2012. NOAA officials acknowledged that they need to increase the security of the ground system and noted that they have been working to do so. 
Program officials stated that they implemented compensating controls to mitigate the risks inherent in the Block 1.2 system. These compensating controls include increased logging and monitoring of traffic to identify anomalies, segmentation of the environment, and increased staffing for remediating and patching weaknesses. In addition, program officials stated that they plan to implement the remaining controls when the program upgrades the ground system from Block 1.2 to Block 2.0 in August 2016. In accordance with NOAA policy and the NIST framework, the JPSS program developed a plan for assessing its security controls, customized its testing approach to the ground system, and implemented the assessment. Specifically, in 2015, a contractor working for NOAA’s National Environmental Satellite, Data, and Information Service (NESDIS) tested the program’s implementation of the controls identified in the System Security Plan and noted weaknesses in the required controls established by the program. The results of this test, called a Security Controls Assessment, were documented in a subsequent report. The Security Controls Assessment also included results of an annual penetration test that was conducted by a private sector company in May 2015 to verify the effectiveness of security controls. The June 2015 Security Controls Assessment identified a large number of critical and high risk vulnerabilities, and these numbers have been growing over time. Specifically, the assessment identified 146 critical and 951 high risk vulnerabilities on Block 1.2 of the ground system, as well as 102 critical and 295 high risk vulnerabilities on Block 2.0 of the ground system. Figure 5 shows the number of open vulnerabilities on the Block 1.2 system, by severity, from the third quarter of 2014 to the second quarter of 2015. The program is currently working to address the vulnerabilities through the creation of plans of action to remediate them, as discussed in the following section.
However, the program’s assessment of its security controls had significant limitations. Specifically, the assessment team reported that it did not have all of the information it needed to plan or test the entire system and its artifacts. In establishing procedures for the assessment, the assessment team noted concerns regarding uncertainty about the physical locations for JPSS components, inconsistencies in system inventory management, and communication and information availability between different groups within JPSS, including contractors. Also, in implementing the assessment, the team encountered a discrepancy between the security scans and the asset inventory being assessed. These shortcomings were noted again in a later security scan, which, according to the program office, revealed difficulties in understanding the rules of security scans, using the assessment tool, and maintaining a valid inventory. According to NESDIS officials, while the assessment team had the information it needed when it initiated its review, the program continued to develop and revise the system. Thus, the inventory of system components that was assessed did not match the evolving system. Moreover, NOAA officials stated that the assessment attempted to account for the limitations by factoring a high likelihood and high impact of an unknown risk into the system’s overall risk score. These limitations increase the risk that devices in place on the current JPSS network have not been identified or tested. As a result of these testing limitations, the Security Controls Assessment may not have identified all of the system’s specific control weaknesses. Consistent with FISMA requirements and NIST guidance, NOAA has a process for authorizing its systems to operate.
In order to achieve ATO, NOAA requires its programs to establish plans of action and milestones (POA&M) to address control weaknesses, make satisfactory progress in completing POA&Ms, and resolve at least 80 percent of the POA&Ms on or before their due dates. NOAA also follows a Department of Commerce policy that requires it to remediate all vulnerabilities deemed critical or high risk within 30 days of discovery. The Commerce policy notes that vulnerabilities that are not remediated within 30 days must be managed through the POA&M process or accepted with written justification by the authorizing official. NOAA’s POA&M policy requires mitigation of critical and high risk vulnerabilities within 30 days, which NOAA officials explained they interpret as requiring mitigation within 30 days of a POA&M’s establishment. In addition, the Commerce policy calls for the authorizing official to officially accept the risk if the vulnerability cannot be remediated within the required timeframe. The JPSS program implemented the ATO process for both its current system (Block 1.2) and its planned system upgrade (Block 2.0) in July 2015, and plans to obtain another ATO for both blocks by July 2016. The authorizing officials for the JPSS ground system are the Deputy Assistant Administrator at NESDIS and the NOAA Chief Information Officer. To obtain its ATO, the JPSS program made progress in addressing many of its security weaknesses through POA&Ms. Specifically, the program assigns a level of criticality to each POA&M and tracks and reports the status of all POA&Ms at the monthly Program Management Council meetings. The JPSS program office drafted POA&Ms for deficiencies in both the existing ground system (Block 1.2) and its planned ground system upgrade (Block 2.0). Also, the program office plans to remediate all critical and high risk vulnerabilities before going live with Block 2.0 in August 2016.
However, the program has not complied with the Department of Commerce policy for remediating critical and high risk vulnerabilities within 30 days or with NOAA’s policy for remediating such POA&Ms within 30 days. After a security scan conducted in March 2015 identified over 1,000 critical and high risk vulnerabilities on Block 1.2 and almost 400 critical and high risk vulnerabilities on Block 2.0, the program established POA&Ms to address these vulnerabilities. These vulnerabilities included use of outdated software, an obsolete web server, and older virus definitions. At the time the POA&Ms were established in August 2015, the 1,400 vulnerabilities were already over 4 months old. The JPSS program set completion dates for the POA&Ms of August 2016 for Block 2.0 and January 2017 for Block 1.2. These anticipated completion dates are 17 and 22 months later than required by Commerce and NOAA policies. In addition to the POA&Ms resulting from the Security Controls Assessment, the JPSS program does not plan to address other POA&Ms in a timely manner. The program consistently establishes due dates for its POA&Ms that are 1 to 3 years in the future. This is illustrated by the following examples:

NOAA created a POA&M for upgrading its operating systems to supportable platforms and applying all recommended patches to the system to improve security posture and reduce its risk. The issues associated with the unsupportable platforms are scheduled for completion in 2016, 3 years after the POA&M was opened.

NOAA created a POA&M in 2013 to improve configuration settings for its antivirus software. This fix is also estimated to occur in late 2016, 3 years after the issue was identified.

In 2013, NOAA created a POA&M to protect the integrity of data transmissions. This POA&M would ensure that the system monitors for unauthorized access to the system and enforces authorization requirements. NOAA plans to fully mitigate this weakness in late 2016.
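The compliance gap described above comes down to date arithmetic. The sketch below is a hypothetical check against the 30-day rule; the months come from the report, but the exact days are assumed for illustration:

```python
from datetime import date, timedelta

# Hypothetical check of the Commerce 30-day remediation rule: a POA&M
# completion date complies only if it falls within 30 days of the
# vulnerability's discovery. Exact days are assumed; the months are
# taken from the report.

def complies(discovered, due, window_days=30):
    """True if the due date falls within the remediation window."""
    return due <= discovered + timedelta(days=window_days)

discovered = date(2015, 3, 31)       # March 2015 security scan
block20_due = date(2016, 8, 1)       # Block 2.0 POA&M completion date
block12_due = date(2017, 1, 1)       # Block 1.2 POA&M completion date

print(complies(discovered, block20_due))  # prints: False
print(complies(discovered, block12_due))  # prints: False
```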
The extended time it takes the JPSS program to resolve vulnerabilities is a longstanding issue. In August 2014, the Department’s Inspector General reported that it took the program 11 to 14 months to remediate high risk vulnerabilities identified between 2011 and 2013. The Inspector General noted that this slow rate of remediation was not sufficient to keep up with the rapid growth in the number of vulnerabilities. Program officials also noted that it is often not possible to remediate critical and high risk vulnerabilities within 30 days because patches may not be available for selected components, testing may take longer than 30 days, and certain changes need to be coordinated with mission partners. Program officials also stated that they plan to modify their internal procedures associated with the Federal Information Processing Standard 200 security control baseline analysis document to allow longer timelines when 30 days is not feasible. Further, in commenting on a draft of this report, NOAA officials stated that the program decided to delay the due date for certain POA&Ms on Block 1.2 that would require significant changes in architecture to coincide with the delivery of Block 2.0. While the 30 days called for in Commerce and NOAA policies may be challenging, NOAA’s ground system has been operating for years with known vulnerabilities due to the backlog of unresolved POA&Ms. These vulnerabilities threaten the confidentiality, integrity, and availability of the ground system that supports S-NPP operations. Until the program remediates these vulnerabilities and addresses POA&Ms in a timely manner, the JPSS program remains at increased risk of potential exploits. In accordance with NOAA policy, the JPSS program established a continuous monitoring plan to ensure information security controls are working.
Consistent with the plan, the program conducts regular security control and vulnerability assessments, monitors the status of remedial actions, and briefs management on a monthly basis on security status. The JPSS program also monitors potential security control weaknesses by tracking incidents and intrusions, on which it reports to a NOAA-wide incident response team. Like other federal agencies, NOAA has experienced several recent information security incidents regarding unauthorized access to web servers and computers. Specifically, NOAA officials reported 10 medium and high severity incidents related to the JPSS ground system between August 2014 and August 2015. NOAA has closed 6 of these 10 incidents. The incidents that were closed involved hostile probes, improper usage, unauthorized access, password sharing, and other IT-related security concerns. According to NOAA officials, the JPSS program office and the NOAA incident response team track all information security incidents. However, inconsistencies exist in the status of incidents being tracked. Specifically, there are differences between what is being tracked by the JPSS program office and what is closed by NOAA’s incident response team. Two of the four incidents that were recommended for closure by the JPSS program office are currently still open according to the incident report. JPSS program officials explained that they can only recommend the closure of an incident and the NOAA incident response team is ultimately responsible for closing an incident based on the information that was provided. Thus, the inconsistency in the status of incidents should be resolved when NOAA updates its tracking tool. Until NOAA and the JPSS program have a consistent understanding of the status of incidents, there is an increased risk that key vulnerabilities will not be identified or properly addressed.
Over the last year, NOAA made progress in assessing the potential for a satellite gap, improved its satellite gap mitigation plan, and completed multiple mitigation activities; however, key shortfalls remain in these efforts. To ensure that satellites are available when needed, satellite experts consider performing annual assessments of a satellite’s health and future availability to be a best practice. The JPSS satellite program completed such assessments in 2013, 2014, and 2015 and determined that a near-term gap in satellite data is unlikely, but there are weaknesses in NOAA’s analysis. Further, government and industry best practices call for the development of contingency plans to maintain an organization’s essential functions in the case of an adverse event such as a gap in critical satellite-based data. NOAA has developed such plans and has improved them over the last few years; however, shortcomings remain in its current plan. In addition, NOAA is in the process of implementing the activities it identified in the plan. At the conclusion of our review, program officials provided an update on the status of key mitigation activities and noted that the program plans to continue working to improve its gap mitigation plan in 2016. We previously reported that NOAA was facing a potential near-term gap in polar data between the expected end of useful life of the S-NPP satellite and the beginning of operation of the JPSS-1 satellite. As of October 2013, NOAA officials stated that a 3-month gap was likely based on an analysis of the availability and robustness of the polar constellation. In April 2015, NOAA revised its assumption of how long S-NPP will last by adding up to 4 years to its expected useful life. Under this new scenario, NOAA would not anticipate experiencing a near-term gap in satellite data because S-NPP would last longer than the expected start of operations for JPSS-1.
Currently, JPSS-1 is expected to be launched in March 2017 with a 3-month on-orbit checkout period (through June 2017) and JPSS-2 is expected to launch in November 2021. Figure 6 shows the latest estimate of the expected lives of NOAA’s polar satellites. While the outlook regarding the length of a potential gap has improved, there are several reasons why a potential gap could still occur and last longer than NOAA anticipates. For instance, the S-NPP satellite could fail sooner than expected, or the JPSS-1 satellite could either encounter delays during its remaining development and testing, or fail upon launch or in early operations. Under these scenarios, a gap is still possible, and could last for up to 5 years in the event of a launch failure. If the JPSS-2 satellite were to be delayed or encounter problems as well, a gap could be even longer. Space and satellite experts consider performing annual assessments of a satellite’s health and future availability to be a best practice. For example, the Department of Defense (DOD) requires annual assessments of the health of its satellite assets as part of its budget preparations. The assessments show, among other things, the probability that a specific satellite or instrument will be available for use at a given time in the future. While this assessment is not required under NOAA policy, in 2013 the JPSS program began performing an annual analysis of the expected availability of satellites in the polar constellation. The program did this to get regular updates on the health of individual satellites and to help plan future satellite programs and launch dates. According to program officials, NOAA uses these analyses to support its strategies on gap mitigation.
Among other things, the analyses show the likely availability of each satellite and instrument over time; scenarios showing the effects on availability of a potential space debris impact and of a life-limiting factor on the ATMS instrument; and scenarios for overall polar constellation availability. See appendix III for more information on what the availability analysis shows for the current polar satellite constellation. In December 2014, we reported that NOAA’s 2013 assessment of satellite availability had several limitations, including inconsistent launch date plans, unproven assumptions about on-orbit checkout and validation, and exclusion of the risk of a potential failure due to space debris. Agency officials acknowledged the assessment’s limitations and completed updated assessments in December 2014 and November 2015. NOAA made specific improvements in its 2014 assessment. Specifically, NOAA improved the underlying analysis of S-NPP quality through additional analysis of the existing life and health of the S-NPP satellite bus, using data through mid-2014; showed both individual instrument and overall satellite availability over time for the S-NPP and JPSS satellites; showed overall availability over time of all key performance parameter instruments (regardless of satellite), and for the constellation’s robustness criteria; and showed several availability scenarios depicting what would happen in the event of a loss of the JPSS-1 satellite. In addition, the November 2015 assessment made further improvements by including key factors that could have an effect on S-NPP’s useful life in its analysis. Specifically, the newer assessment includes actual instrument performance through mid-2015, assumptions about the risk of space debris, and information on the health of S-NPP’s batteries. These enhancements better inform the decisions NOAA will need to make in planning and launching future satellites.
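The report does not describe the statistical model behind these availability analyses. As a rough illustration only, a simple exponential reliability model shows how a mean-lifetime assumption translates into the kind of survival probability such an assessment reports:

```python
import math

# Simplified, assumed model: exponential reliability, in which a
# satellite's chance of still functioning decays with time on orbit.
# The report does not state NOAA's actual method; the parameters here
# are illustrative, not NOAA's figures.

def survival_probability(years_on_orbit, mean_life_years):
    """P(satellite still functioning) under an exponential failure model."""
    return math.exp(-years_on_orbit / mean_life_years)

# S-NPP launched in late 2011, so roughly 9 years on orbit by 2020.
# An assumed mean life of about 13 years yields close to even odds.
print(round(survival_probability(9, 13), 2))  # prints: 0.5
```

A real availability assessment would use measured component health and degradation data rather than a single decay constant, but the mapping from lifetime assumptions to a probability of being functional in a given year is the same in spirit.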
However, weaknesses remain in the latest assessment, which decrease NOAA’s assurance that its satellite life estimates are reliable. Specifically:

NOAA assumes that JPSS-1 data from key instruments will be available to satellite data users for operational use 3 months after launch, which is far less time than it took to calibrate and validate these instruments for operational use on S-NPP. While initial satellites in a series are more difficult to calibrate and validate than subsequent ones and some unvalidated data may be available earlier, this estimate (which is 2 to 3 times faster than was experienced on S-NPP) appears to be overly optimistic. This may mean that the JPSS-1 satellite takes longer to become operational than NOAA is planning.

NOAA’s analysis of the degrading health of the S-NPP satellite is not consistent with the estimated life dates from its April 2015 flyout chart (as shown in figure 6). Specifically, the flyout chart shows S-NPP with an extended useful life through late 2020, while the assessment shows that there is only a 50 percent likelihood that S-NPP will be fully functioning in 2020.

JPSS program officials stated that they plan to perform another assessment in 2016. Until it has a strong assurance of how long the JPSS satellites are likely to last using an assessment that includes assumptions that are more consistent with past experiences, NOAA risks not adequately planning for mitigating a potential loss, or not communicating to its various stakeholders when its satellites are likely to fail. Government and industry best practices call for the development of contingency plans to maintain an organization’s essential functions in the case of an adverse event. Guidelines for developing a sound contingency plan are summarized in table 6 below. In October 2012, NOAA developed a contingency plan (which it refers to as its gap mitigation plan), which was subsequently updated in 2014 and in April 2015.
In 2013, we reviewed NOAA’s original contingency plan and reported that it had shortcomings in nine areas, including that the agency had not selected strategies from its plan to be implemented or developed procedures and actions to implement the selected strategies. We made a recommendation to establish a more comprehensive contingency plan for potential satellite data gaps which included these and other elements. NOAA agreed with our recommendation and worked to implement it. In 2014, we reviewed a revised plan and evaluated NOAA’s progress against the weaknesses we had previously identified. We reported that it had completed two of the nine areas, made partial progress in five areas, and made no progress in two areas. In its most recent contingency plan, NOAA fully addressed two of the remaining seven issues, conducted work in four areas, and had not addressed the remaining issue. See table 7 for details on the seven areas that were not fully addressed during our prior reviews. In summary, NOAA made progress by listing the contingency strategies it selected to be implemented and has integrated strategies identified after the 2014 plan was developed. It also detailed plans to make JPSS-1 data available as soon as possible after launch. However, NOAA has not yet documented the JPSS program’s required recovery time and has not developed an integrated master schedule for gap mitigation activities. The program updated the status of ongoing and planned mitigation activities in early 2016, and plans to issue an updated contingency plan later in 2016. NOAA identified 35 gap mitigation activities and is making progress in implementing them. These activities fall into three general categories: (1) understanding the probability and the impact of a gap, (2) reducing the likelihood of a gap, and (3) reducing the impact of a gap. 
As of January 2016, 16 activities had been completed, including transitioning the S-NPP satellite from a research satellite to a fully operational satellite. Another 18 activities are ongoing, including assimilating more observations from commercial aircraft and unmanned aerial systems into weather models, and leveraging data and models from the European Centre for Medium-Range Weather Forecasts in National Weather Service weather models. One other activity is planned for the future. See table 8 below for details on these activities. While these gap mitigation activities are important to help mitigate the impact of a satellite data gap, NOAA acknowledges that no mitigation activities can fully replace polar-orbiting satellite observations. NOAA has begun planning for new satellites to ensure the future continuity of polar satellite data. This program is called the Polar Follow-On (PFO). According to NOAA officials, PFO will allow for polar satellite coverage in the afternoon orbit into the 2030s. NOAA plans to eventually manage PFO as an integrated program with the current JPSS program. The PFO budget includes operational costs for both the PFO and the current JPSS programs after fiscal year 2025. NOAA officials have stated that part of the agency’s goal for the future satellite program is to provide “robustness” in order to minimize the chance of a data gap like the near-term one the agency is facing. According to NOAA documentation, the main objectives of the PFO program are to (1) have the earliest possible launch readiness for the JPSS-3 and JPSS-4 satellites in order to achieve robustness, and (2) to minimize costs. As recommended by a 2013 Independent Review Team, NOAA would achieve robustness on its polar satellite program when (1) it would take two failures to create a gap in data for key instruments, and (2) the agency would be able to restore the system to a two-failure condition within 1 year of a failure.
This means that NOAA would need a backup satellite in orbit to provide data in the event of one failure, and that the agency would have the ability to launch another satellite within a year to replace the failed one. Achieving robustness would greatly minimize the chances of a single point of failure—that is, a problem with one satellite causing an immediate loss of data. NOAA has identified the satellites it plans to build as a part of PFO. The PFO program is planned to include two more satellites in the JPSS series, called JPSS-3 and JPSS-4. NOAA plans for these satellites to be nearly identical to the JPSS-2 satellite. Each satellite will include the three instruments that are considered to be key performance parameters: the Advanced Technology Microwave Sounder (ATMS), the Cross-Track Infrared Sounder (CrIS), and the Visible Infrared Imaging Radiometer Suite (VIIRS). The satellites will also include the Ozone Mapping and Profiler Suite-Nadir (OMPS-N). These four instruments are environmental sensors that provide critical data used in numerical weather prediction and imagery. NOAA also is planning for two climate instruments that are on JPSS-2, the Ozone Mapping and Profiler Suite-Limb (OMPS-L) and the Radiation Budget Instrument, to be hosted on JPSS-3 and JPSS-4 as well. However, according to NOAA, these instruments are not essential and their funding from JPSS-2 onward is uncertain. In addition to the JPSS-3 and JPSS-4 satellites, PFO is planned to include a Cubesat satellite. Specifically, NOAA plans to fly a satellite called the Earth Observing Nanosatellite–Microwave. This satellite, due to launch in 2020, would be able to replace some, but not all, ATMS data in the event of a gap between JPSS-1 and JPSS-2. Program officials have stated that, because of its low cost and the experience the agency will gain from the mission, NOAA will launch the Earth Observing Nanosatellite–Microwave regardless of the status of the remainder of the constellation.
Figure 7 shows the planned expected lives for all of the JPSS and PFO satellites. NOAA has taken several steps in planning the PFO program. Specifically, it established goal launch dates, high-level annual budget estimates, and roles and responsibilities for NOAA offices that will play a role on the new program. However, NOAA is in the process of updating key formulation documents for PFO, such as high-level requirements, an updated concept of operations and project plan, and budget information for key components. Program officials stated that they expect to complete key documents by mid-2016. NOAA plans to develop the PFO satellites well before they are needed. In general, the agency makes a distinction between the date it wants to have a satellite available for launch (called a launch readiness date) and the actual planned launch date. NOAA set the launch readiness dates for the JPSS-3 and JPSS-4 satellites as January 2024 and April 2026, respectively. NOAA also has a contingency plan to launch the JPSS-3 satellite with only the two most important instruments (ATMS and CrIS) as early as 2023, if it is needed to mitigate a near-term satellite data gap due to unanticipated problems with JPSS-1 or JPSS-2. In contrast, NOAA’s planned launch dates for JPSS-3 and JPSS-4 are 2 and 5 years later, respectively. NOAA currently plans, beginning with JPSS-2, to launch a new satellite every 5 years in order to achieve a robust constellation of satellites. Specifically, planned launch dates for the JPSS-3 and JPSS-4 satellites are July 2026 and July 2031, respectively (see figure 7). NOAA has given several reasons for planning to achieve launch readiness several years ahead of launch. According to NOAA officials, this difference between planned launch readiness and actual launch dates, called the “build-ahead” strategy, is part of an effort to achieve the two robustness criteria as quickly as possible. 
NOAA officials also stated that early readiness would allow a “robust sparing strategy” for ATMS and CrIS. According to NOAA, this would allow for completed components from the JPSS-3 and JPSS-4 satellites to be substituted as needed if parts failed during integration and test of an earlier satellite. Additionally, according to NOAA, experienced contractor staff needed to complete development efficiently for the PFO satellites are in place now. Such staff may not be available if there is an extended break in development time. However, uncertainties remain on whether it is necessary to develop both JPSS-3 and JPSS-4 early in order to achieve robustness. For example, while NOAA flyout charts for the polar constellation list the JPSS satellites starting with JPSS-1 as lasting only 7 years, program officials have stated that they could last as long as 10 or 11 years. In addition, NOAA recently updated the flyout chart to show that S-NPP could last as long as 9 years, based on past performance. If the satellites last longer than expected, then there could be unnecessary redundancy. For example, at the extended useful life estimate of 10 to 11 years, JPSS-1, JPSS-2, and JPSS-3 would still be available in 2027 when JPSS-4 completes development. If NOAA were to delay launching JPSS-4 until it is needed, the satellite could be in storage for 4 years. Figure 8 shows anticipated satellite lifetimes with extended useful lives. Alternatively, if the early satellites do not last longer than expected, then there is an increased potential for future gaps in polar satellite coverage, as there will be several periods in which only one satellite is on orbit. Due to this uncertainty, NOAA faces important decisions on timing the development and launch of the remaining satellites in the JPSS program. NOAA requires cost/benefit studies for major programs to assist in making major decisions. 
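A cost/benefit evaluation of launch scenarios would weigh exactly this coverage tradeoff. The sketch below is a hypothetical illustration using approximate launch years from the report; it counts how many satellites would be on orbit in a given year under the 7-year design life shown on NOAA flyout charts versus the 10-to-11-year extended life officials cited:

```python
# Illustrative only: launch years approximated from the report's planned
# dates (JPSS-1 in 2017, JPSS-2 in 2021, JPSS-3 in 2026, JPSS-4 in 2031);
# lifetimes are the two assumptions discussed above.

LAUNCHES = {"JPSS-1": 2017, "JPSS-2": 2021, "JPSS-3": 2026, "JPSS-4": 2031}

def on_orbit(year, life_years):
    """Satellites expected to still be operating in a given year."""
    return [sat for sat, launch in LAUNCHES.items()
            if launch <= year < launch + life_years]

# With 11-year lives, three satellites overlap in 2027, when JPSS-4
# completes development, suggesting possible unnecessary redundancy.
print(on_orbit(2027, 11))  # prints: ['JPSS-1', 'JPSS-2', 'JPSS-3']

# With 7-year lives, coverage thins to a single satellite by 2029,
# leaving no on-orbit backup.
print(on_orbit(2029, 7))   # prints: ['JPSS-3']
```

Which scenario materializes determines whether deferring JPSS-4 saves money or risks a gap, which is why evaluating launch options across the full range of estimated satellite lives matters.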
However, the program did not evaluate the costs and benefits of launch scenarios based on the latest estimates of how long the satellites would last. Such an analysis is needed to ensure robust coverage while minimizing program costs, and could help determine the most cost-effective launch schedule. For example, if JPSS-4 development could be deferred, the annual cost of PFO might be decreased. A potential cost decrease is important because, according to NOAA documentation, the overall funding need for PFO is expected to be about $8.2 billion, compared to about $11.3 billion for the full JPSS program through 2025. Until NOAA ensures that its plans for future polar satellite development are based on the full range of estimated lives of potential satellites, the agency may not be making the most efficient use of the nation’s sizable investment in the polar satellite program. Facing a potential gap in weather satellite data, NOAA has made progress in developing the JPSS-1 satellite and is on track to launch it in March 2017. However, the agency continues to experience cost growth, schedule delays, and technical risks on key components. In particular, a component on the spacecraft has fallen more than 6 months behind schedule, putting the spacecraft on the critical path leading up to the planned launch date. Continued close management of costs, schedules, and risks will be essential to ensuring a successful and timely launch. Given the increasing information security risks across the federal government, building information security into ground systems is a critical component of the JPSS system development. Although the JPSS program has assessed key risks, established and evaluated security controls, and remediated selected control weaknesses, key deficiencies remain. Specifically, the team responsible for testing security controls did not have all the information it needed to test the entire system. 
Also, while the assessment found numerous vulnerabilities, the program has not addressed them in a timely manner. These security shortfalls put the program at risk of being compromised, and there have been a number of security incidents affecting the ground system in recent years. While NOAA’s incident response group has effectively addressed security incidents, there are discrepancies between NOAA and the JPSS program on the status of incidents. Such discrepancies make it more difficult to ensure that all incidents are identified, addressed, and tracked to closure. Until these deficiencies are addressed, the polar satellite infrastructure will continue to be at increased risk of compromise. To address the risk of a near-term satellite gap and to move to a more robust constellation of polar satellites, NOAA has assessed the health of its operational satellites annually, established and improved its gap mitigation plans, and is beginning to plan a new satellite program to ensure coverage through 2038. While the JPSS program improved its satellite assessment and gap mitigation plans, shortfalls remain, including identifying recovery time objectives for key data products. In prior reports, we have made recommendations to NOAA to improve its satellite availability assessment and its gap mitigation plans. We continue to believe that these recommendations are valid and, if fully implemented, would improve NOAA’s ability to assess and manage the risk of a gap in satellite data. We will continue to monitor NOAA’s ongoing efforts to address our prior recommendations. While NOAA is planning a follow-on polar satellite program to better ensure polar satellite coverage in the future, the agency has not evaluated the costs and benefits of different launch scenarios based on its updated understanding of how long its satellites might last, and uncertainties remain in determining appropriate dates for the development and launch of the satellites. 
Unless NOAA makes launch decisions based on the most current estimates of the useful life of its satellites, the agency may not make the most effective and economical use of the nation’s sizable investment in polar satellites. Given the importance of addressing risks on the JPSS satellite program, we are making the following four recommendations to the Secretary of Commerce. Specifically, we recommend that the Secretary direct the Administrator of NOAA to take the following actions:

- Establish a plan to address the limitations in the program’s efforts to test security controls, including ensuring that any changes in the system’s inventory do not materially affect test results.
- When establishing plans of action and milestones to address critical and high-risk vulnerabilities, schedule the completion dates within 30 days, as required by agency policy.
- Ensure that the agency and program are tracking and closing a consistent set of incident response activities.
- Evaluate the costs and benefits of different launch scenarios for the PFO program based on updated satellite life expectancies to ensure satellite continuity while minimizing program costs.

We sought comments on a draft of our report from the Department of Commerce and NASA. We received written comments from the Department of Commerce transmitting NOAA’s comments, which are reprinted in appendix IV. NOAA concurred with all four of our recommendations and identified steps it is taking to implement them. In its comments, NOAA wrote that it recognizes the need to close polar data gaps and to keep pace with changes in information security requirements; however, it noted that resource constraints and shifting priorities have presented challenges in meeting these objectives.
In response to our second recommendation, to schedule completion dates for plans of action and milestones (POA&M) to address critical and high-risk vulnerabilities within 30 days as required by agency policy, NOAA concurred and noted that JPSS would continue to follow agency policy. NOAA explained that agency policy allows the authorizing official to accept and document risks when remediation of vulnerabilities cannot be performed as anticipated. It further noted that there are two situations that may result in remediation taking longer than the policy requires: (1) when applying patches to a system that must remain static while in development and testing, and (2) when applying patches to a complex operational system that requires analysis and testing prior to deployment in order to protect the availability of the system. While we acknowledge that there are valid reasons that remediating a POA&M might take longer than the 30 days required by agency policy, the JPSS program did not follow agency policy in that it did not schedule completion of key POA&Ms within 30 days and did not have documentation from the authorizing official accepting the risk of a delayed remediation schedule for critical and high-risk vulnerabilities, as we note in this report. Moving forward, NOAA noted that it plans to update its FIPS 200 compliance document to include steps to obtain and document risk acceptance from the authorizing official. We agree that updating this plan and implementing it will help ensure that the program is better aligned with agency policy and in a better position to remediate or accept vulnerabilities. In response to our fourth recommendation, to evaluate the costs and benefits of different launch scenarios for the PFO program based on updated satellite life expectancies, NOAA concurred and noted in its letter that it had evaluated the costs and benefits of different launch scenarios using the latest estimates of satellite lives as part of its budget submission.
We discussed this with program officials in April 2016. Program officials explained that the program determined it would minimize costs by building the satellites as soon as possible, and it would minimize risks by planning to launch the satellites at a cadence that would meet the program’s goals for a robust polar constellation. However, the agency did not provide sufficient supporting evidence or artifacts. Without documentation showing specific comparisons of options with respect to cost totals and overall risk, the assumptions NOAA used, and the processes and time frames in which NOAA’s decisions were reached, we were not able to validate the agency’s results. NOAA also stated in its letter that it will continue to update its analysis based on, among other things, updated satellite life expectancies and information gained from award of future spacecraft and instrument contracts. Doing so would help ensure that the agency is making the most efficient use of investments in the polar satellite program. NOAA also provided technical comments, which we have incorporated into our report, as appropriate. In its technical comments, NOAA officials referred to our finding that the satellite availability assessment is not consistent with the estimated life dates in its flyout chart, noting that (1) its flyout charts are not intended to depict a satellite’s estimated life, and (2) our focus on S-NPP’s 50 percent likelihood of functioning in 2020 is inappropriate because JPSS-1 will be the primary operational satellite in 2020. However, the flyout charts show “planned mission life” according to NOAA requirements. It is misleading to show a mission life extending through late 2020 if the agency’s estimate of the satellite’s health puts it at only a 50 percent likelihood of full functionality. 
Furthermore, while JPSS-1 should be the primary satellite and S-NPP should be a secondary satellite in 2020, the status of S-NPP’s health would become paramount if JPSS-1 experienced a failure on launch or on orbit. On March 16, 2016, an audit liaison for NASA provided an e-mail stating that the agency would provide any input it might have to NOAA for inclusion in NOAA’s comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, the Administrator of NASA, the Director of the Office of Management and Budget, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to (1) evaluate the National Oceanic and Atmospheric Administration’s (NOAA) progress on the Joint Polar Satellite System (JPSS) program with respect to schedule, cost, and key risks; (2) assess NOAA’s efforts to plan and implement appropriate information security protections for polar satellite data; (3) evaluate NOAA’s efforts to assess the probability of a near-term gap in polar satellite data, as well as its progress in implementing key activities for mitigating a gap; and (4) assess NOAA’s efforts to plan and implement a follow-on polar satellite program. To evaluate NOAA’s progress on the JPSS satellite program with respect to schedule, cost, and key risks, we compared actual or anticipated completion dates for important flight and ground project milestones against previously anticipated completion dates between July 2013 and December 2015, and explored the root causes of recent delays. 
We also compared cost data for program instruments and other components to previous data for those same components, to determine differences over time. We compared monthly management reports on key program risks to determine the status of major remaining program risks, and to determine which risks had been closed. We also compared risk data to source documents such as risk registers. In addition, we interviewed JPSS program office staff for details on schedule, cost, and risk information. We assessed the reliability of monthly reports on the JPSS program’s schedule, cost, and risk information by comparing these data to other program artifacts and through interviews with knowledgeable officials. We found these data to be sufficiently reliable for our purposes. In order to assess NOAA’s efforts to plan and implement appropriate information security protections for polar satellite data, we compared Commerce and NOAA information security policies and JPSS program information security practices to selected Federal Information Security Modernization Act of 2014 (FISMA) requirements as well as implementing guidance from the Office of Management and Budget and the National Institute of Standards and Technology (NIST). Specifically, we assessed policies and practices in the areas outlined in NIST’s Risk Management Framework: system categorization; selection, implementation, and assessment of security controls; authorization to operate; and ongoing monitoring. We obtained and analyzed key artifacts supporting the JPSS program’s efforts to address these risk management areas, including the program’s system categorization results, the System Security Plan, the System Controls Assessment report, Authorization to Operate documentation, incident reports, and the program’s continuous monitoring plan. 
We interviewed key managers and staff from the JPSS program office and the NOAA Office of the Chief Information Officer to better understand their information security policies and practices. We assessed the reliability of the agency’s information on controls and vulnerabilities by comparing it to supporting documentation and artifacts, and found that the data were sufficiently reliable for our purpose of reporting on shortfalls in agency practices. To evaluate NOAA’s efforts to assess the probability of a near-term gap in polar satellite data, as well as its progress in implementing key activities for mitigating a gap, we analyzed NOAA’s methodology for determining the expected length of a potential gap and compared it against other gap estimates and availability requirements. We reviewed NOAA’s April 2015 polar satellite gap mitigation/contingency plan, and compared it to best practices in contingency planning developed by leading government and industry sources as well as shortfalls we previously identified in NOAA’s October 2012 and February 2014 contingency plans. We evaluated the status of NOAA’s gap mitigation activities. We interviewed officials from the JPSS program, as well as NOAA’s Office of Atmospheric Research, National Weather Service, and NOAA Satellite, Data, and Information Service staff for further information on satellite availability details and gap mitigation activities. We assessed the reliability of NOAA’s assessment of satellite availability by comparing it to underlying analyses, prior assessments, and shortfalls we identified on prior assessments. We found the data to be sufficiently reliable for our purpose of reporting on strengths and weaknesses of the agency’s assessment. In order to assess NOAA’s efforts to plan and implement the JPSS Polar Follow-On (PFO) program, we analyzed program documentation to determine the scope, expected cost, timelines, and key risks affecting the program. 
We compared this information against other NOAA and JPSS program documentation and identified key information that has yet to be completed for the PFO program. We also met with JPSS program staff for further insights on their plans for the PFO program. We conducted our work at NOAA and its component offices—including the offices of the JPSS program—and the facilities of a program contractor. We conducted this performance audit from May 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The guidance is intended to help protect organizational operations, assets, individuals, other organizations, and the nation from a diverse set of threats, including hostile cyber-attacks, natural disasters, structural failures, and human errors. It also includes privacy controls to be used in conjunction with the specified security controls to achieve comprehensive security and privacy protection. In November 2015, the National Oceanic and Atmospheric Administration (NOAA) updated its assessment of the availability of the existing Suomi National Polar-orbiting Partnership (S-NPP) satellite over time. The agency determined that there is an 80 percent likelihood that S-NPP will be able to provide key measurements until data from the next Joint Polar Satellite System satellite (called JPSS-1) are available, if JPSS-1 is launched in March 2017 and available to begin operation in September 2017 (see figure 9). In addition to the contact named above, Colleen Phillips (Assistant Director), Shaun Byrnes (Analyst-in-Charge), Chris Businsky, Kara Lovett Epperson, Torrey Hardee, Franklin Jackson, and Lee McCracken made key contributions to this report.
NOAA established the JPSS program in 2010 to replace aging polar satellites and provide critical environmental data used in forecasting the weather. However, the potential exists for a gap in satellite data if the current satellite fails before the next one is operational. Because of this risk and the potential impact of a gap on the health and safety of the U.S. population and economy, GAO added this issue to its High Risk list in 2013, and it remained on the list in 2015. GAO was asked to review the JPSS program. GAO's objectives were to (1) evaluate progress on the program, (2) assess efforts to implement appropriate information security protections for polar satellite data, (3) evaluate efforts to assess and mitigate a potential near-term gap in polar satellite data, and (4) assess agency plans for a follow-on polar satellite program. To do so, GAO analyzed program status reports, milestone reviews, and risk data; assessed security policies and procedures against agency policy and best practices; examined contingency plans and actions, as well as planning documents for future satellites; and interviewed experts as well as agency and contractor officials. The $11.3 billion Joint Polar Satellite System (JPSS) program has continued to make progress in developing the JPSS-1 satellite for a March 2017 launch. However, the program has experienced recent delays in meeting interim milestones, including a key instrument on the spacecraft that was delivered almost 2 years later than planned. In addition, the program has experienced cost growth ranging from 1 to 16 percent on selected components, and it is working to address selected risks that have the potential to delay the launch date. Although the National Oceanic and Atmospheric Administration (NOAA) established information security policies in key areas recommended by the National Institute of Standards and Technology, the JPSS program has not yet fully implemented them. 
Specifically, the program categorized the JPSS ground system as a high-impact system, and selected and implemented multiple relevant security controls. However, the program has not yet fully implemented almost half of the recommended security controls, did not have all of the information it needed when assessing security controls, and has not addressed key vulnerabilities in a timely manner (see figure). Until NOAA addresses these weaknesses, the JPSS ground system remains at high risk of compromise. NOAA has made progress in assessing and mitigating a near-term satellite data gap. GAO previously reported on weaknesses in NOAA's analysis of the health of its existing satellites and its gap mitigation plan. The agency improved both its assessment and its plan; however, key weaknesses remain. For example, the agency anticipates that it will be able to have selected instruments on the next satellite ready for use in operations 3 months after launch, which may be optimistic given past experience. GAO is continuing to monitor NOAA's progress in addressing prior recommendations. Looking ahead, NOAA has begun planning for new satellites to ensure data continuity. This program would include two new JPSS satellites and a smaller interim satellite. However, uncertainties remain on the expected useful lives of the current satellites, and NOAA has not evaluated the costs and benefits of different launch scenarios based on up-to-date estimates. Until it does so, NOAA may not be making the most efficient use of the nation's sizable investment in the polar satellite program. GAO recommends that NOAA take steps to address deficiencies in its information security program and complete key program planning actions needed to justify and move forward on a follow-on polar satellite program. NOAA concurred with GAO's recommendations and identified steps it is taking to address them.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss the results of our review of the Credit Research Center (the Center) report on personal bankruptcy debtors’ ability to pay their debts and share with you our observations on the February 1998 Ernst & Young report that also examines debtors’ ability to pay. Both reports represent a useful first step in addressing a major public policy issue—whether some proportion of those debtors who file for personal bankruptcy under chapter 7 of the bankruptcy code have sufficient income, after expenses, to pay a “substantial” portion of their outstanding debts. Specifically, you requested that we evaluate each report’s research methodology and formula for estimating the income that debtors have available to pay debts. On February 9, 1998, we reported the results of our more extensive review of the Center report and selected data to you and the Ranking Minority Member of this Subcommittee. Debtors who file for personal bankruptcy usually file under chapter 7 or chapter 13 of the bankruptcy code. Generally, debtors who file under chapter 7 of the bankruptcy code seek a discharge of all their eligible dischargeable debts. Debtors who file under chapter 13 submit a repayment plan, which must be confirmed by the bankruptcy court, for paying all or a portion of their debts over a 3-year period, unless for cause the court approves a longer period not to exceed 5 years. One report concluded, however, that no one explanation is likely to capture the variety of reasons that families fail financially and file for bankruptcy. Nor is there agreement on (1) the number of debtors who seek relief through the bankruptcy process who have the ability to pay at least some of their debts and (2) the amount of debt such debtors could repay.
One reason for the lack of agreement is that there is little reliable data on which to assess such important questions as the extent to which debtors have an ability to pay their eligible dischargeable debts; the amount and types of debts that debtors have voluntarily repaid under chapters 7 and 13; the characteristics of chapter 13 repayment plans that were and were not successfully completed; and the reasons for the variations among bankruptcy districts in such measures as the percentage of chapter 13 repayment plans that were successfully completed. Several bills have been introduced in Congress that would implement some form of “needs-based” bankruptcy. These include S.1301, H.R. 2500, and H.R. 3150. All of these bills include provisions for determining when a debtor could be required to file under chapter 13, rather than chapter 7. Currently, the debtor generally determines whether to file under chapter 7 or chapter 13. Each bill would generally establish a “needs-based” test, whose specific provisions vary among the bills, that would require a debtor to file under chapter 13 if the debtor’s net income after allowable expenses would be sufficient to pay about 20 percent of the debtor’s unsecured nonpriority debt over a 5-year period. If the debtor were determined to be unable to pay at least 20 percent of his or her unsecured nonpriority debt over 5 years, the debtor could file under chapter 7 and have his or her eligible debts discharged. Another bill, H.R. 3146, focuses largely on changes to the existing “substantial abuse” provisions under section 707(b) of the bankruptcy code as the means of identifying debtors who should be required to file under chapter 13 rather than chapter 7. The Center report and Ernst & Young reports attempted to estimate (1) how many debtors who filed for chapter 7 may have had sufficient income, after expenses, to repay “a substantial portion” of their debts and (2) what proportion of their debts could potentially be repaid.
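The generic needs-based screen described above reduces to simple arithmetic. The sketch below illustrates it; the function name and the sample figures are hypothetical, the 20 percent threshold and 60-month horizon follow the bills’ general approach described above, and each bill’s actual provisions differ in detail:

```python
# Illustrative sketch of the generic "needs-based" test described above:
# a debtor would be required to file under chapter 13 if projected net
# income over a 5-year (60-month) period could cover about 20 percent of
# unsecured nonpriority debt. A simplification, not the text of any bill.

def requires_chapter_13(monthly_income: float,
                        monthly_allowable_expenses: float,
                        unsecured_nonpriority_debt: float,
                        threshold: float = 0.20,
                        months: int = 60) -> bool:
    """Return True if net income over the period repays at least the threshold share."""
    net_monthly = monthly_income - monthly_allowable_expenses
    if net_monthly <= 0:
        return False  # no income available for repayment
    return net_monthly * months >= threshold * unsecured_nonpriority_debt

# Hypothetical debtor: $200/month net income and $50,000 unsecured
# nonpriority debt; $200 x 60 = $12,000, or 24 percent of the debt.
print(requires_chapter_13(2800, 2600, 50_000))  # True
```

Under a screen of this kind, small changes in reported income or expenses can move a debtor across the threshold, which is one reason the accuracy of the underlying schedules matters so much.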
The Center report was based on data from 3,798 personal bankruptcy petitions filed principally in May and June 1996 in 13 of the more than 180 bankruptcy court locations. The petitions included 2,441 chapter 7 and 1,357 chapter 13 petitions. On the basis of the Center report’s assumptions and the formula used to determine income available for repayment of nonpriority, nonhousing debt, the report estimated that 5 percent of the chapter 7 debtors in the 13 locations combined could, after expenses, repay all of their nonpriority, nonhousing debt over 5 years; 10 percent could repay at least 78 percent; and 25 percent could repay at least 30 percent. The Center report also estimated that about 11 percent of chapter 13 debtors and about 56 percent of chapter 7 debtors were expected to have no income available to repay nonhousing debts. Ernst & Young’s report was based on a sample of 5,722 chapter 7 petitions in four cities—Los Angeles, Chicago, Boston, and Nashville—that were filed mainly in 1992 and 1993. Ernst & Young concluded that, under the needs-based provisions of H.R. 3150, from 8 to 14 percent (average 12 percent) of the chapter 7 filers in these four cities would have been required to file under chapter 13 rather than chapter 7, and could have repaid 63 to 85 percent (average 74 percent) of their unsecured nonpriority debts over a 5-year repayment period. The report concluded that its findings corroborated the Center report’s findings that “a sizeable minority of chapter 7 debtors could make a significant contribution toward repayment of their non-housing debt over a 5-year period.” We discussed our observations about the report with the Ernst & Young study author. It is important to note that the findings of both the Center report and Ernst & Young report rest on fundamental assumptions that have not been validated.
Both studies share two fundamental assumptions: (1) that the information found on debtors’ initial schedules of estimated income, estimated expenses, and debts was accurate; and (2) that this information could be used to satisfactorily forecast debtors’ income and expenses for a 5-year period. These assumptions have been the subject of considerable debate, and the researchers did not test their validity. With regard to the first assumption, the accuracy of the data in bankruptcy petitioners’ initial schedules of estimated income, estimated expenses, and debts is unknown. Both reports assumed that the data in these schedules are accurate. However, both reports also stated that to the extent the data in the schedules were not accurate, the data would probably understate the income debtors have available for debt repayment. This reflected the researchers’ shared belief that debtors have an incentive in the bankruptcy process to understate income, overstate expenses, and thereby understate their net income available for debt repayment. However, there have been no studies to validate this belief. It is plausible that, to the extent there are errors in the schedules, debtors could report information that would have the effect of either overstating or understating their capacity to repay their debts, with a net unknown bias in the aggregate data reported by all debtors. One cause of such errors could be that the schedules are not easily interpreted by debtors who proceed without legal assistance. In Los Angeles, a location whose data contributed significantly to the findings of both reports, Center data showed that about one-third of debtors reported they had not used a lawyer. With regard to the second assumption, there is no empirical basis for assuming that debtors’ income and expenses, as stated in their initial schedules, would remain stable for the 5-year period following the filing of their bankruptcy petitions. Neither report allowed for situations in which the debtor’s income decreases or expenses increase during the 5-year period. Past experience suggests that not all future chapter 13 debtors will successfully complete their repayment plans.
To the extent this occurs, it would reduce the amount of debt that future debtors repay under required chapter 13 repayment plans. A 1994 report by the Administrative Office of the U.S. Courts found that only about 36 percent of the 953,180 chapter 13 cases terminated during a 10-year period ending September 30, 1993, had been successfully completed. The remaining 64 percent were either dismissed or converted to chapter 7 liquidation, in which all eligible debts were discharged. The reasons for this low completion rate are unknown, but it illustrates the potentially large discrepancy between the amount that debtors could repay, based on the data and assumptions used in the two reports, and what has actually occurred over a 10-year period. Another assumption made in both reports is that 100 percent of debtors’ income available for debt repayment will be used to repay debt for a 5-year period. This assumption does not reflect actual bankruptcy practice. Chapter 13 repayment plans require greater administrative oversight than chapter 7 cases, and thus cost more; this oversight includes periodic review of the debtor’s progress in implementing the plan and review of debtors’ or creditors’ requests to alter the plan. In fiscal year 1996, for example, creditors received about 86 percent of chapter 13 debtor payments. The remaining 14 percent of chapter 13 debtor payments were used to pay administrative costs, such as statutory trustee fees and debtor attorneys’ fees. Neither study addressed the additional costs for judges and administrative support requirements that would be borne by the government should more debtors file under chapter 13. In addition, neither report’s sample was designed to be representative of the nation as a whole or of each location for the year from which the samples were drawn. Therefore, the data on which the reports were based may not reflect all bankruptcy filings nationally or in each of the 15 locations for the years from which the petitions were drawn.
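The two historical figures cited above, the roughly 36 percent plan-completion rate and the roughly 86 percent share of payments that reached creditors, suggest how far actual recoveries might fall below the reports’ gross estimates. The multiplicative combination sketched below is our own illustrative simplification, not a calculation from either report (completion is an all-or-nothing outcome in any single case, so the rates are treated here as expected values across many cases):

```python
# Hypothetical back-of-the-envelope estimate: discount a debtor's potential
# repayment by the historical chapter 13 completion rate and by the share
# of payments that actually reaches creditors. Both rates are treated as
# averages across many cases, which is a deliberate simplification.

def expected_creditor_recovery(potential_repayment: float,
                               completion_rate: float = 0.36,  # FY1984-93 figure cited above
                               creditor_share: float = 0.86) -> float:  # FY1996 figure cited above
    return potential_repayment * completion_rate * creditor_share

# A debtor the reports estimate could repay $10,000 over 5 years:
print(round(expected_creditor_recovery(10_000), 2))  # 3096.0
```

On these assumptions, creditors would expect to receive only about 31 cents of every dollar the reports count as repayable, which is why the studies’ gross estimates should be read as upper bounds.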
One difference between the two reports involves the calculation of debtor expenses. The Center’s estimates of debtor repayment capacity are based on the data reported in debtors’ initial schedules of estimated income, estimated expenses, and debts; the Center report calculated debtor expenses using the data reported on debtors’ estimated income and estimated expense schedules. The Ernst & Young report, whose purpose was to estimate the effect of implementing the provisions of H.R. 3150, adjusted debtors’ expenses using the provisions of H.R. 3150. Following these provisions, Ernst & Young used the expenses debtors reported on their schedules of estimated expenses for alimony payments, mortgage debt payments, charitable expenses, child care, and medical expenses. For all other expenses, including transportation and rent, Ernst & Young used Internal Revenue Service (IRS) standard expense allowances, based on both family size and geographic location. The impact of these adjustments on debtors’ reported expenses was not discussed in the report. However, to the extent these adjustments lowered debtors’ expenses, they would have increased the report’s estimates of debtors’ repayment capacity when compared to the methodology used in the Center report. To the extent the adjustments increased debtors’ reported expenses, they would have decreased the report’s estimates of debtor repayment capacity. Also, to the extent that these adjustments reduced debtors’ reported expenses, the adjustments would have corrected, at least in part, for what the report assumed was debtors’ probable overstatement of expenses on their schedules of estimated expenses. A second difference between the two reports involves the calculation of mortgage debt and family size. To the extent that actual family size was larger than the averages used in applying the IRS allowances, the report understated allowable expenses, and thus overstated debtors’ ability to pay. Conversely, to the extent that actual family size was smaller than these averages, the report overstated allowable expenses, and thus understated the debtors’ ability to pay. A third difference between the reports involves assumptions about repayment of secured, nonhousing debt.
The Center report assumed that debtors would continue payments on their mortgage debt and pay their unsecured priority debt. Unlike the Center report, the Ernst & Young report appears to have assumed that debtors will repay, over a 5-year period, all of their secured nonhousing debt and all of their unsecured priority debt. The purpose of this assumption was to estimate the amount of unsecured nonpriority debt that debtors could potentially repay after paying their secured nonhousing debt and unsecured priority debt. On March 10, 1998, we received an Ernst & Young report that used a national sample of chapter 7 petitions from calendar year 1997 to estimate debtors’ ability to pay. Although we have not had an opportunity to examine this report in detail, the report appears to have addressed many of the sampling issues we raised regarding the Center report and February 1998 Ernst & Young report. However, the March 1998 Ernst & Young report shares the fundamental unvalidated assumptions of the Credit Center report and the February 1998 Ernst & Young report. These assumptions include (1) that the data reported on debtors’ schedules of estimated income, estimated expenses, and debts are accurate; (2) that the data in these schedules can be used to satisfactorily forecast debtors’ income and expenses for a 5-year period; (3) that 100 percent of debtors’ net income after expenses, as determined in the report, will be used for debt repayment over a 5-year repayment period; and (4) that all debtors will satisfactorily complete their 5-year repayment plans. To the extent that these assumptions are not valid, the number of debtors with income available to repay their debts could be more or less than the estimates in these two studies. Similarly, the amount of debt these debtors could potentially repay could also be more or less than the reports estimated.
Finally, although the March 1998 Ernst & Young report is based on what is apparently a nationally representative sample of chapter 7 petitions, to the extent that the report is based on the same basic data (petitioners’ financial schedules) and assumptions as the Center report and the February 1998 Ernst & Young report, it shares the same limitations as these two earlier reports. This concludes my prepared statement, Mr. Chairman. I would be pleased to answer any questions you or other members of the Subcommittee may have.
GAO discussed the results of its review of the Credit Research Center report on personal bankruptcy debtors' ability to pay their debts and observations on the February 1998 Ernst & Young report that also examines debtors' ability to pay. GAO noted that: (1) both studies share two fundamental assumptions that: (a) the information found in debtors' initial schedules of estimated income, estimated expenses, and debts is accurate; and (b) this information could be used to satisfactorily forecast debtors' income and expenses for a 5-year period; (2) these assumptions have been the subject of considerable debate, and the researchers did not test their validity; (3) with regard to the first assumption, the accuracy of the data in bankruptcy petitioners' initial schedules of estimated income, estimated expenses, and debt is unknown; (4) however, both reports also stated that to the extent the data in the schedules were not accurate, the data would probably understate the income debtors have available for debt repayment; (5) with regard to the second assumption, there is also no empirical basis for assuming that debtors' income and expenses, as stated in their initial schedules, would remain stable for a 5-year period following the filing of their bankruptcy petitions; (6) these two assumptions--debtors' income and expenses remain stable and all repayment plans would be successfully completed--could result in a somewhat optimistic estimate of debt repayment; (7) neither report allowed for situations in which the debtor's income decreases or expenses increase during the 5-year period; (8) one difference between the two reports involves the calculation of debtor expenses; (9) a second difference between the two reports involves the calculation of mortgage debt and family size; (10) a third difference between the reports involves assumptions about repayment of secured, nonhousing debt; (11) on March 10, 1998, GAO received an Ernst & Young report that used a national sample of 
Chapter 7 petitions from calendar year 1997 to estimate debtors' ability to pay; (12) the report appears to have addressed many of the sampling issues GAO raised regarding the Center report and February 1998 Ernst & Young report; and (13) however, the March 1998 Ernst & Young report shares the fundamental unvalidated assumptions of the Credit Center report and the February 1998 Ernst & Young report.
The JSF program is DOD’s largest cooperative program. It is structured on a multitiered set of relationships involving both government and industry from the United States and eight allied nations—the United Kingdom, Italy, the Netherlands, Turkey, Denmark, Norway, Canada, and Australia. These relationships are shown in figure 1. The JSF program structure was established through a framework memorandum of understanding (MOU) and individual supplemental MOUs between each of the partner country’s defense department or ministry and DOD, negotiating on behalf of the U.S. government. These agreements identify the roles, responsibilities, and expected benefits for all participants. The current negotiated agreement covers only the system development and demonstration phase, and participation now does not guarantee participation in future phases. The program intends to produce three fighter variants to meet multiservice requirements: conventional flight for the Air Force, short take-off and vertical landing for the Marine Corps, and carrier operations for the Navy. As currently planned, the program will cost about $200 billion to develop and procure about 2,600 aircraft and related support equipment. In October 2001, DOD awarded Lockheed Martin Aeronautics Company a contract for the system development and demonstration phase. Pratt and Whitney and General Electric were awarded contracts to develop the aircraft engines. This phase is estimated to last about 10 years and cost about $33 billion; it will involve large, fixed investments in human capital, facilities, and materials. The next significant knowledge point will be a critical design review, currently planned for July 2005. At that time, the aircraft design should be stable and engineering drawings should be available to confirm that the design performs acceptably and can be considered mature. The United States and its partners expect to realize a variety of benefits from cooperation on the JSF program. 
The United States expects to benefit from partner contributions and potential future aircraft sales; access to partner industrial capabilities; and improved interoperability with partner militaries once the aircraft is fielded. Partner governments expect to benefit financially and obtain an aircraft they could not afford to develop on their own. Partners also expect to benefit from increased access to JSF program data, defined influence over aircraft requirements, and technology transfers to their industries from U.S. aerospace companies. For the partners, industrial return, realized through JSF subcontract awards, is critical for their continued participation in the program. According to DOD and the program office, through its cooperative agreements, the JSF program contributes to armaments cooperation policy in the following four areas: Political/military–expanded foreign relations. Economic–decreased JSF program costs from partner contributions. Technical–increased access to the best technologies of foreign partners. Operational–improved mission capabilities through interoperability with allied systems. DOD and the JSF Program Office expect to benefit financially from direct partner contributions and through aircraft purchased by partners and other international buyers, which reduces overall unit cost. Foreign countries become program partners at one of three participation levels, based on financial contribution, which the United States uses to defray program costs. For the current system development and demonstration phase, partner governments have committed to provide over $4.5 billion to the JSF program and are expected to purchase 722 aircraft once the aircraft enters the production phase. According to DOD, foreign military sales to nonpartner countries could include an additional 1,500 to 3,000 aircraft. Expected partner financial contributions and aircraft purchases are detailed in table 1. Contributions can be financial or nonfinancial. 
For example, Turkey’s system development and demonstration contribution was all cash. Denmark contributed $110 million in cash, and also the use of an F-16 aircraft and related support equipment for future JSF flight tests and the use of North Atlantic Treaty Organization command and control assets for a JSF interoperability study, which were valued at an additional $15 million to the program. In addition, U.S. industry cooperation with aerospace suppliers in partner countries is expected to benefit the JSF program because of the specific advanced design and manufacturing capabilities available from those suppliers. For example, British industry has a significant presence in the program with BAE Systems as a teammate to Lockheed Martin and Rolls Royce as a major engine subcontractor. In addition, Fokker Aerostructures in the Netherlands is under contract to develop composite flight doors for the JSF airframe. Partner governments expect to benefit financially by leveraging significant U.S. resources and inventory requirements to obtain an advanced tactical aircraft they could not afford to develop on their own. From a government perspective, Level I and II partners have been guaranteed waivers of nonrecurring aircraft costs; Level III partners will be considered for a similar waiver. All partners are also eligible to receive potential levies collected on future foreign military sales of aircraft to nonpartner customers. In addition, and in most cases more importantly, partners have identified industrial return to in-country suppliers as vital to their participation in the program. In a recent study assessing the financial impact of the JSF program on international suppliers, DOD reported that partners could potentially earn between $5 and $40 of revenue in return for each dollar contributed to the program. Through government and industrial participation, partner countries also expect to benefit from the technology transferred from U.S. 
to partner industry through JSF contract awards. Partners expect that early participation in the JSF program will improve their defense industrial capability through increased access to design, technical, and manufacturing data and through the ability to perform advanced planning for operation and support of the JSF once it is delivered in their respective countries. Involvement in the early phases of the JSF program has provided partners with information on the development of aircraft requirements, program costs and schedules, and logistics concepts. International partners have access to program and technology information through participation on senior-level management decision-making bodies, representation in the JSF Program Office, and involvement on program integrated product teams. Partner program office personnel, regardless of participation level, have equal access to most information. Partner staff can request information from integrated product teams on which they have no membership, as long as the information is not restricted from being released to their countries. International program participants have significant expectations regarding government and industry return based on their contributions. As such, the JSF Program Office and Lockheed Martin are faced with balancing these expectations against other program goals. Recent actions by Lockheed Martin to address partner concerns could represent a departure from the JSF competitive contracting approach and result in increased program costs. International participation in the program also presents a challenge because the transfer of technologies necessary to achieve DOD’s goals for aircraft commonality is expected to far exceed past transfers of advanced military technology. Further, export authorizations for critical suppliers need timely planning, preparation, and disposition to help avoid schedule delays in the program and ensure partners the opportunity to bid for contracts. 
DOD and the JSF Program Office have said that the use of competitive contracting is central to meeting partner expectations for industrial return and will assist in controlling program costs. JSF officials use the term “best value” to describe this approach, which is a departure from other cooperative development programs that guarantee predetermined levels of work based on contribution. Partner representatives generally agree with the JSF competitive approach to contracting, but some emphasize that their industries’ ability to win JSF contracts whose total value approaches or exceeds their financial contributions for the JSF system development and demonstration phase is important for their continued involvement in the program. The program office and the prime contractor have a great deal of responsibility for providing a level playing field for JSF competitions, including visibility into the subcontracting process and opportunities for partner industries to bid on subcontracts. To that end, Lockheed Martin performed assessments for many of the partners to determine the ability of their industries to compete for JSF contracts. The results of these assessments in some cases showed potential return that far exceeded country contribution levels. In some cases, Lockheed Martin then signed agreements with partner governments and suppliers to document the opportunities they would have to bid for JSF contracts, as well as the potential value of those contracts. DOD and the JSF Program Office have left implementation of the competitive contracting approach to Lockheed Martin, whose decisions will therefore largely determine how partner expectations are balanced against program goals. In at least one case, Lockheed Martin has promised an international contractor predetermined work that satisfies a major portion of that country’s expected return-on-investment. 
While disavowing knowledge of the specific contents of any such agreement, DOD was supportive of their use during partner negotiations. DOD officials conceded that the agreements contained in these documents departed from the competitive approach. However, the agreements were necessary to secure political support in some countries, since the U.S. government does not guarantee that the partners will recoup their investment through industry contracts on the JSF program. In addition, Lockheed Martin has recently developed a plan to use “strategic best value sourcing” to supplement its original competitive approach. According to DOD, this plan will allow for a limited number of work packages to be directly awarded to industry in partner countries where contract awards to date have not met expectations. While there are predetermined cost goals under these strategic awards, there are concerns from some partners that this is a departure from the competitive approach and, in fact, a move toward prescribed work share. Because Lockheed Martin makes the subcontracting decisions, it bears the primary responsibility for managing partner expectations—in addition to duties associated with designing, developing, and producing the aircraft. Lockheed Martin’s actions seem to indicate a response to partner concerns about return-on-investment expectations and a desire to ensure continued partner participation. Most partners have a clause in their agreements that allows for withdrawal from this phase of the program if industrial participation is not satisfactory. If a partner decided to leave the program, DOD would be deprived of the additional development funding expected from that partner. Lockheed Martin could be faced with lower than projected international sales, resulting in fewer units sold. At the same time, directed work share often results in less than optimal program results. 
For example, other coproduction programs such as the F-16 Multinational Fighter, which employ the traditional work share approach, often pay cost premiums in terms of increased manufacturing costs associated with use of foreign suppliers. The United States has committed to design, develop, and qualify aircraft for partners that fulfill the JSF operational requirements document and are as common to the U.S. JSF configuration as possible within National Disclosure Policy. DOD and the JSF Program Office must balance partner expectations for commonality against the transfer of U.S. military technology. Decisions in this area will be critical because the extent of technology transfers necessary to achieve program goals will push the boundaries of U.S. disclosure policy for some of the most sensitive U.S. military technology. To address these issues, Lockheed Martin has a contract requirement to conduct a study to develop a partner JSF specification that fulfills commonality goals. Due to issues related to the disclosure review process, the contractor expects to deliver the study to the program office in August 2003, 5 months later than originally planned. According to DOD, the program has requested exceptions from National Disclosure Policy in some cases to achieve aircraft commonality goals and avoid additional development costs. Some DOD officials told us that technology transfer decisions have been influenced by JSF program goals, rather than adjusting program goals to meet current disclosure policy. DOD, JSF Program Office, and Lockheed Martin officials agreed that technology transfer issues should be resolved as early as possible in order to meet program schedules without placing undue pressure on the release process. 
The program has taken steps to address potential concerns, including chartering a working group to review how past export decisions apply to the JSF program; identify contentious items in advance; and provide workable resolutions that minimize the impact on program cost, schedule, or performance. However, partners have expressed concern about the pace of information sharing and decision making related to the JSF support concept. For example, according to several partners, greater access to technical data is needed so that they can plan for and develop a sovereign support infrastructure as expressed in formal exchanges of letters with the United States. The JSF program is conducting trade studies to further define the concept for how the JSF will be maintained and supported worldwide so that it can start to address these issues. According to program officials, this strategy will identify the best approach for maintaining JSF aircraft, and it may include logistics centers in partner countries. Follow-on trade studies would determine the cost of developing additional maintenance locations. The implementation of the global support solution and the options identified in follow-on trade studies will have to be in full compliance with the National Disclosure Policy, or the program will need to request exceptions. Authorizations for export of JSF information to partners and international suppliers also present challenges for the program. In addition to the U.S. government determining the level of disclosure for partners and technology areas, JSF contractors must receive authorization to transfer data and technology through the export control process. Due to the degree of international participation at both a government and an industry level, a large number of export authorizations are necessary to share project information with governments, solicit bids from partner suppliers, and execute contracts. 
The JSF Program Office and Lockheed Martin told us that there were over 400 export authorizations and amendments granted during the JSF concept demonstration phase, and they expect that the number of export authorizations required for the current phase could exceed 1,000. Lockheed Martin officials told us that an increased level of resources has been required to address licensing and other export concerns for the program. Export authorizations for critical suppliers need to have timely planning, preparation, and disposition to help avoid schedule delays and cost increases in the program. Without proper planning, there could be pressure to expedite reviews and approvals of export authorizations to support program goals and schedules. In addition, advanced identification of potential alternative sources for critical contracts could be an appropriate action to prevent schedule delays in the event of unfavorable approval decisions. Although it is required to do so, Lockheed Martin has not completed a long-term industrial participation plan that provides information on JSF subcontracting. Such a plan could be used to anticipate export authorizations needed for international suppliers and identify potential licensing concerns far enough in advance to avoid program disruption or accelerated licensing reviews. Our work has shown that past cooperative programs have experienced cost and schedule problems as a result of poor planning for licenses. For example, like the JSF, the Army’s Medium Extended Air Defense System program involves several sensitive technologies critical to preserving the U.S. military advantage. That program failed to adequately plan for release requirements related to those technologies and saw dramatic increases in approval times, which affected contractors’ ability to use existing missile technology and pursue the cheapest technical solution. 
Timely disposition of export authorizations is also necessary to avoid excluding partner industries from competitions. While Lockheed Martin has stated that no foreign supplier has been excluded from any of its competitions or denied a contract because of fear of export authorization processing times or the conditions that might be placed on an authorization, the company is concerned this could happen. In fact, one partner told us that export license delays have had a negative effect on the participation of its companies because some U.S. subcontractors have been reluctant to take on the added burden of the license process. The U.S. subcontractors must apply for the export authorization on behalf of the foreign supplier, which can add time and expense to their contracts. Further, we were told that some partner companies have been unable to bid due to the time constraints involved in securing an export license. The JSF program has attempted to address the additional administrative tasks associated with export authorizations by adding resources to help prepare applications and exploring ways to streamline the process. For example, Lockheed Martin received a global project authorization (GPA)—an “umbrella” export authorization that allows Lockheed Martin and other U.S. suppliers on the program to enter into agreements with over 200 partner suppliers to transfer certain technical data—from the Department of State. Approved in October 2002, implementation of the GPA was delayed until March 2003 because of supplier concerns related to liability and compliance requirements. In March 2003, the first GPA implementing agreement between Lockheed Martin and a company in a partner country was submitted and approved in 4 business days. 
JSF partners have expressed dissatisfaction with the time it has taken to finalize the conditions under which the GPA can be used and disappointment that the authorization may not realize their expectations in terms of reducing the licensing burdens of the program. As currently structured, the GPA does not cover the transfer of any classified information or certain unclassified, export-controlled information in sensitive technology areas such as stealth, radar, and propulsion. The Joint Strike Fighter program, and its implications for acquisition reform and cooperative development, is a good test of whether the desire for better outcomes can outweigh traditional management pressures. In our 2001 review of JSF technical maturity, we employed knowledge standards consistent with best practices and DOD acquisition reforms and found that several technologies critical to meeting requirements were not sufficiently mature. The best practice for such a decision is to have a match between technologies and weapon requirements. At its recent preliminary design review, the JSF program uncovered significant problems with regard to various issues, including aircraft weight, design maturity, and weapons integration. Such problems have historically resulted in increased program costs, longer development schedules, or a reduction in system capabilities. While such actions can negatively affect the U.S. military services, the impact may be more substantial for partners because they have less control over program decisions and less ability to adjust to these changes. This may affect partners’ participation in the program in a variety of ways. First, the continued affordability of the development program and the final purchase price are important for partners—both of which could be affected by recent technical problems. There is no guarantee that partners will automatically contribute to cost overruns, especially if the increase is attributable to factors outside their control. 
Therefore, future cost increases in the JSF program may fall almost entirely on the United States because there are no provisions in the negotiated agreements requiring partners to share these increases. Partner representatives indicated that they intend to cooperate with the JSF Program Office and Lockheed Martin in terms of sharing increased program costs when justified. However, some partner officials expressed concern over the tendency of U.S. weapon system requirements to increase over time, which results in greater risk and higher costs. While some partners could fund portions of cost overruns from military budgets if requested, others told us that even if they were willing to support such increases, these decisions would have to be made through their parliamentary process. DOD has not required any of the partners to share program cost increases to date. For example, cost estimates for the system development and demonstration phase have increased on multiple occasions since the program started in 1996. During that time, the expected cost for this phase went from $21.2 billion to $33.1 billion as a result of scope changes and increased knowledge about cost. According to DOD, partners have not been required to share any of these costs because the changes were DOD directed and unrelated to partner actions or requirements. To encourage partners to share costs where appropriate, the United States has said it will consider past cost sharing behavior when negotiating MOUs for future phases of the program. If a partner refuses to share legitimate costs during the system development and demonstration phase, the United States can use future phase negotiations to recoup all or part of those costs. In these instances, the United States could reduce levies from future sales, refuse to waive portions of the nonrecurring cost charges for Level III partners, or in a worst case, choose not to allow further participation in the program. 
However, DOD officials have not committed to using these mechanisms to encourage cost sharing. Therefore, DOD may be forced to choose between accepting the additional cost burden and asking for additional partner contributions—which could jeopardize partner support for the program. The JSF program is not immune to unpredictable cost growth, schedule delays, and other management challenges that have historically plagued DOD’s systems acquisition programs. International participation in the program, while providing benefits, makes managing these challenges more difficult and places additional risk on DOD and the prime contractor. While DOD expects international cooperation in systems acquisition to benefit future military coalition engagements, this may come at the expense of U.S. technological and industrial advantages or the overall affordability of the JSF aircraft. Over the next 2 years, DOD will make decisions that critically affect the cost, schedule, and performance of the program. Because Lockheed Martin bears the responsibility for managing partner industrial expectations, it will be forced to balance its ability to meet program milestones and collect program award fees against meeting these expectations—which could be key to securing future sales of the JSF for the company. In turn, DOD must be prepared to assess and mitigate any risks resulting from these contractor decisions as it fulfills national obligations set forth in agreements with partner governments. While some steps have been taken to position the JSF program for success, given its size and importance, additional attention from DOD and the program office would help decrease the risks associated with implementing the international program. 
In the report we are releasing today, we recommend that DOD ensure that the JSF Program Office and its prime contractors have sufficient information on international supplier planning to fully anticipate and mitigate risk associated with technology transfer and that information concerning the selection and management of suppliers is available, closely monitored, and used to improve program outcomes. Toward this end, DOD and the JSF Program Office need to maintain a significant knowledge base to enable adequate oversight and control over an acquisition strategy that effectively designs, develops, and produces the aircraft while ensuring that the strategy is carried out to the satisfaction of the U.S. services and the international partners. Tools are in place to provide this oversight and management, but they must be fully utilized to achieve program goals. DOD concurred with our report recommendations, agreeing to (1) ensure that Lockheed Martin’s JSF international industrial plans are continually reviewed for technology control, export control, and risk mitigation issues and (2) work with Lockheed Martin to achieve effective program oversight when it comes to partner expectations and program goals. While we commend this proactive response, we note that DOD did not provide any detail as to the criteria to be employed for reviewing industrial plans. In addition, DOD did not specify how it plans to collect and monitor information on suppliers or elaborate on other steps the JSF Program Office would take to identify and resolve potential conflicts between partner expectations and program goals. Through decisions made on the Joint Strike Fighter program today, DOD will also influence other acquisition programs like the Missile Defense Agency’s suite of land, sea, air, and space defense systems and the Army’s Future Combat System. 
These programs will potentially shape budgetary and strategic military policy for the long term, and as such, need to use every tool available for success. Adopting knowledge-based policies and practices with regard to these critical acquisition programs is an important first step to ensuring that success. Mr. Chairman, that concludes my statement. I will be happy to respond to any questions you or other Members of the Subcommittee may have. For future questions regarding this testimony, please contact Katherine Schinasi, (202) 512-4841. Individuals making key contributions to this testimony include Tom Denomme, Brian Mullins, and Ron Schwenn. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Joint Strike Fighter (JSF) is a cooperative program between the Department of Defense (DOD) and U.S. allies for developing and producing next generation fighter aircraft to replace aging inventories. As currently planned, the JSF program is DOD's most expensive aircraft program to date, costing an estimated $200 billion to procure about 2,600 aircraft and related support equipment. Many in DOD consider JSF to be a model for future cooperative programs. To determine the implications of the JSF international program structure, GAO identified JSF program relationships and expected benefits, and assessed how DOD is managing challenges associated with partner expectations, technology transfer, and recent technical concerns. The JSF program is based on a complex set of relationships among governments and industries from the United States and eight partner countries. The program is expected to benefit the United States by reducing its share of program costs, giving it access to foreign industrial capabilities, and improving interoperability with allied militaries. Partner governments expect to benefit financially and technologically through relationships with U.S. aerospace companies and access to JSF program data. Yet international participation also presents a number of challenges. Because of their contributions to the program, partners have significant expectations for financial returns, technology transfer, and information sharing. If these expectations are not met, their support for the program could deteriorate. To realize these financial returns, partners expect their industry to win JSF contracts through competition--a departure from past cooperative programs, which directly link contract awards to financial contributions. However, recent actions by the prime contractor could indicate a departure from this competitive approach and a return to directed work share. Technology transfer also presents challenges. Transfers of sensitive U.S. 
military technologies--which are needed to achieve aircraft commonality and interoperability goals--will push the boundaries of U.S. disclosure policy. In addition, a large number of export authorizations are needed to share project information and execute contracts. These authorizations must be submitted and resolved in a timely manner to maintain program schedules and ensure partner industry has the opportunity to compete for subcontracts. Finally, recent technical challenges threaten program costs and possibly partner participation in the program. While partners can choose to share any future program cost increases, they are not required to do so. Therefore, the burden of any future increases may fall almost entirely on the United States. If efforts to meet any of these partner expectations come into conflict with program cost, schedule, and performance goals, the program office will have to make decisions that balance these potentially competing interests within the JSF program.
State and local allocating agencies are responsible for day-to-day administration of the LIHTC program based on Section 42 and Treasury regulations. More specifically, allocating agencies are responsible for (1) awarding their tax credits to qualifying projects that meet their QAP, (2) determining the value of the tax credits awarded to projects, and (3) monitoring project compliance following the award of credits. Figure 1 provides an overview of the key responsibilities of an allocating agency from application to the end of the compliance period for an LIHTC development. Agencies receive allocations of tax credits and award the credits to specific projects that meet requirements of Section 42. An allocating agency develops the QAP and receives approval of the plan by the governmental unit of which the allocating agency is a part. The agency then evaluates the proposed projects against the approved QAP. The QAP also must be developed in accordance with Section 42 requirements for such plans. Section 42 requires that QAPs give preference to certain projects; specifically, those that serve the lowest-income tenants, are obligated to serve qualified tenants for the longest periods, and are located in qualified census tracts and the development of which contributes to a concerted community revitalization plan. QAPs also must incorporate certain “selection criteria” (but are not limited to these criteria). Specifically, under Section 42, the plans must consider housing needs characteristics; project characteristics (including whether the project uses existing housing as part of a community revitalization plan); tenant populations with special housing needs; public housing waiting lists; tenant populations of individuals with children; projects intended for eventual tenant ownership; energy efficiency of the project; and historic nature of the project. 
Finally, allocating agencies, when awarding tax credits, are responsible for meeting other Section 42 requirements relating to developers, the affordability period of projects, project viability, and written communication with the public. Specifically, allocating agencies must allocate at least 10 percent of the state housing credit ceiling to projects involving qualified nonprofit organizations; execute an extended low-income housing commitment of at least 30 years (of which the first 15 years is the compliance period) before a building can receive credits; require developers to hire an agency-approved third party to conduct a comprehensive market study of the housing needs of low-income individuals in the area to be served by the project before the credit allocation is made; provide a written explanation to the general public if the agency makes an allocation that is not in accordance with established priorities and selection criteria; and notify the chief executive officer (or the equivalent) of the local jurisdiction where the building is located, and provide the official a reasonable opportunity to comment on the project. To select projects for tax credits, allocating agencies receive and evaluate detailed proposals that developers submit to develop new housing or acquire and rehabilitate existing housing. The project owners agree to set aside a certain percentage of the units with rents affordable to qualifying low-income households for at least 30 years. In return, tax credit investors can earn a tax credit over a 15-year period (the compliance period) if they meet the affordability requirements, but can claim the credit over an accelerated time frame (the 10-year credit period), beginning in the year in which the property is placed in service (ready for occupancy) or, if the investor chooses, the succeeding tax year. IRS can recapture some or all of the credits if requirements during the compliance period have not been met. 
The amount of the tax credits awarded to a project generally is based on the eligible basis (total allowable costs associated with depreciable costs in the project). Additionally, the allocating agency is to provide no more credits than it deems necessary to ensure the project’s financial feasibility through the 10-year credit period. To determine financial feasibility, Section 42 requires that allocating agencies consider the reasonableness of developmental and operational costs, any proceeds or receipts expected to be generated through the tax benefit, and the percentage of credit amounts used for project costs other than the cost of intermediaries (such as syndicators). Section 42 also requires an allocating agency to evaluate available private financing and other federal, state, and local subsidies a developer plans to use and adjust the award accordingly. Allocating agencies must review costs to determine the credit amount at three points in time: application (when the proposal is submitted), allocation (when the agency commits to providing credits to a specific project), and placed-in-service (when the project is ready for occupancy under state and local laws). The allocating agency also must report the allocated amount of tax credits available over a 10-year credit period for each building in a project on IRS Form 8609 (credit allocation and certification form). After credits are awarded, Treasury regulations state that allocating agencies must conduct regular site visits to physically inspect units and review tenant files for eligibility information. As shown in figure 1, initial inspections must be conducted by the end of the second calendar year following the year in which the last building of the development was placed in service. Subsequent inspections must take place at least once every 3 years, starting from the initial inspection. During the inspections, allocating agencies must randomly select the units and records to be inspected and reviewed. 
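As a rough illustration, the inspection cadence just described reduces to simple year arithmetic. The sketch below is a simplified illustration only; the function name and the example years are hypothetical, and actual timing turns on the date the last building is placed in service and the inspection dates the agency selects.

```python
def inspection_deadline_years(placed_in_service_year, through_year):
    """Latest-allowed inspection years under the schedule described above:
    an initial inspection by the end of the second calendar year following
    the year the last building was placed in service, then at least once
    every 3 years after the initial inspection."""
    initial = placed_in_service_year + 2
    years = [initial]
    year = initial + 3
    while year <= through_year:
        years.append(year)
        year += 3
    return years

# Hypothetical development whose last building was placed in service in 2010
schedule = inspection_deadline_years(2010, 2025)
# schedule is [2012, 2015, 2018, 2021, 2024]
```

An agency could, of course, inspect earlier than these latest-allowed years; the sketch only shows the outer bound of the required cadence.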
The agencies also have reporting and notification requirements. For example, allocating agencies must notify IRS of any noncompliance found during inspections and ensure that owners of LIHTC properties annually certify that they met certain requirements for the preceding 12-month period. If a property is not in compliance with the provisions of Section 42, allocating agencies must provide written notice to owners and file an IRS Form 8823 (report of noncompliance or building disposition) no later than 45 days after the end of the correction period, whether or not the noncompliance or failure to certify has been corrected. Agencies also must report a summary of compliance monitoring activities annually on IRS Form 8610 (low-income housing credit agencies report). The design of the LIHTC program (such as the roles of investors and syndicators) can result in other entities providing additional types of monitoring of LIHTC projects. Investors and syndicators may provide project oversight to help ensure that they receive the expected tax credits over the designated period. For instance, investors and syndicators may maintain a list of properties (based on identified performance measures) to more closely monitor. IRS administers the LIHTC program primarily within one division, with assistance from other offices and units. The Small Business/Self-Employed Division primarily administers the LIHTC program. One full-time program analyst develops internal protocols, provides technical assistance to allocating agencies, and provides community outreach to industry groups and taxpayers (developers/owners and investors). The Low-Income Housing Credit Compliance Unit in Philadelphia, Pennsylvania, assists in determining if tax returns may warrant an audit and populates IRS’s Low-Income Housing Credit database. 
The database has been used to record information from certain IRS forms that allocating agencies or taxpayers submit (such as Form 8823, which we discuss later in this report). The Office of Chief Counsel provides technical assistance for the LIHTC program and determines the amount of credit available for the national pool. The pool consists of additional credits that qualified states can use in a calendar year—these are credits that were unused in the prior year and thus “carried over” into a new year. Based on our review of 58 QAPs and our site visits, we found the QAPs did not consistently contain, address, or mention preferences and selection criteria required in Section 42, but we found that some allocating agencies incorporated the information into other LIHTC program documents or implemented the requirements in practice. Specifically, 23 of 58 QAPs we analyzed contained references to all required preferences and selection criteria. Of the 35 QAPs that did not contain references to all required preferences and selection criteria, 5 were from the selected agencies that we visited. All five of these agencies provided us with documentation that demonstrated that these requirements were being implemented. For example, the scoring criteria attachment to Michigan’s LIHTC application included several requirements that were not found in its QAP. As another example, although Nevada’s QAP did not include selection criteria related to public housing waiting lists, officials from the agency demonstrated how it met this requirement by including an attachment to its application package that requires the developer to certify that it will notify public housing agencies of the project’s availability for tenants on public housing waiting lists. The remaining 30 agencies (which we did not visit) also may have documented the information elsewhere.
For example, for several plans with missing Section 42 requirements, we were able to find evidence that these required items were listed or referenced in other publicly available sources. Consistent with our previous report, IRS officials stated that they did not regard a regular review of QAPs as part of their responsibilities as outlined in Section 42 and therefore did not regularly review the plans. IRS officials said that allocating agencies have primary responsibility to ensure that the plans meet Section 42 preferences and selection criteria. According to Section 42, allocating agencies must use a QAP that has been approved by the governmental unit of which the agency is a part, but the Code does not specify that the unit must check for all required preferences and selection criteria. IRS officials noted that review of a QAP to determine if the plan incorporated the elements specified in Section 42 could occur if an allocating agency were to be audited. IRS has conducted seven audits of allocating agencies since the inception of the program and found issues related to QAPs, including missing preferences and selection criteria, lack of an updated plan, and incorrect paraphrasing of Section 42 requirements. For these audits, IRS recommended that the agencies update their QAPs to address the identified deficiencies. Given this lack of regular oversight of the allocating agencies, we concluded in July 2015 that IRS is not well positioned to provide such oversight because of its tax compliance mission, and we recommended that Congress consider designating HUD as a joint administrator of the program to better align program responsibilities with each agency’s mission and more efficiently address existing oversight challenges, including the lack of regular review of QAPs. However, to date, no action has been taken to address this recommendation.
While Section 42 specifies some selection criteria (such as project location, tenant populations with special housing needs, and the energy efficiency of the project), it also more broadly requires that a QAP set forth selection criteria “which are appropriate to local conditions.” As a result, allocating agencies have the flexibility to create their own methods and rating systems for evaluating applicants. Fifty-four of the 58 QAPs we reviewed cited the use of points or thresholds (minimum requirements) to weight, evaluate, and score applications against certain criteria and factors (see table 1). Nearly all the QAPs we reviewed referenced scoring criteria for the qualifications of the development team. For example, allocating agencies can award points based on the team’s demonstrated successful experience in developing tax credit projects, as well as the physical and financial condition of other properties they developed. Agencies also commonly used energy efficiency as a criterion. This category encompassed green building practices, including the design of buildings in accordance with green standards, as well as use of energy- and water-efficient fixtures. Additionally, over one-third of the QAPs reviewed cited letters of support from local governments. (We discuss letters of support in more detail in the next section.) Allocating agencies typically ranked applications and reserved credits based on the needs of the state after scoring applications. Several allocating agencies with which we met said they have established allocation pools based on the geographic area of the project or development characteristics to help ensure that affordable housing needs are met in those areas. If applications receive the same score, these allocating agencies have established different kinds of tiebreakers to decide which applicant would receive the tax credits.
For example, one of California’s tiebreakers is a ratio that compares the federal or local government subsidies a developer expects to use to finance the project with total development costs. Allocating agencies also can implement a qualitative evaluation system that uses rankings and recommendations to evaluate applications. For example, the allocating agency from Chicago reviews submitted applications using internal guidelines based on the agency’s underwriting standards and project feasibility criteria, and chooses which developments to recommend for LIHTC awards. Two of the nine agencies we visited that used a qualitative ranking or recommendation-based system in 2013 noted that they were considering a switch to a point-based scoring system (Chicago) or had already made one (Rhode Island). Some allocating agencies we visited evaluate applications with the goal of selecting projects for which to reserve future years’ credits, a practice termed “forward reserving.” While Section 42 and Treasury regulations allow such reserving, credits only can be allocated to projects in the calendar year in which the projects are placed in service. Officials from California noted that forward reserving helped ensure the agency would be eligible for the national pool of tax credits. Other agencies noted that they reserved credits for planning purposes. For example, Chicago’s allocating agency has decided to reserve 5 years’ worth of credits to build a pipeline of projects with which to work. Chicago officials stated that a multiple-year queue allows them to better plan their allocations based on affordable housing needs in their jurisdiction. Because of this practice, Chicago does not hold competitive funding rounds every year. According to Section 42, allocating agencies must notify the chief executive officer (or the equivalent) of the local jurisdiction in which the project is to be located, and provide the official with a reasonable opportunity to comment on the proposed project.
Some agencies also imposed an additional requirement, letters of local support, which has raised fair housing and other concerns. For example, some allocating agencies give points to developers that have letters of local government support as part of their application. These agencies require a signed letter of support (from a chief elected or administrative official of the community in which the project would be sited) that specifically endorses the proposed project. Based on our review of 58 QAPs, we found that 12 agencies noted that their review or approval of applications was contingent on letters of support from local officials. Another 10 agencies awarded points for letters of local support. Six of the nine agencies we visited had selection criteria in their 2013 QAPs that stated that letters of local support would affect the agency’s review of the application or result in point awards or deductions. According to officials from these six agencies, there are various advantages to using this criterion. For example, officials from Massachusetts told us the letters indicate a project will move more quickly through the development process, which includes local zoning and permitting, than a project without local support. However, the officials also said that an applicant could be awarded credits without a letter if all other threshold and scoring requirements were met. Furthermore, officials from Chicago’s allocating agency noted that the letters were evidence of support for the proposed development from the surrounding community and they continued to use the letters as a threshold item upon which tax credit awards were based. Four of the allocating agencies we visited that used letters of support as scoring criteria in 2013 (Nevada, Rhode Island, Virginia, and Washington, D.C.) had concerns with this additional requirement and took steps or were planning to change how the letters were used for LIHTC projects.
For example, officials from Virginia’s allocating agency noted that they stopped awarding points for the letters after being notified that local officials were choosing developments they wanted to support based on personal preferences. As of 2014, Virginia stopped awarding points for local letters of support but began deducting up to 25 points for negative letters if, after further analysis, the state determined the claims of negative effect were valid. Additionally, officials from Nevada said that they changed their requirements because they became aware of the difficulties developers in rural areas faced in receiving letters of support (due to local officials’ fear of losing elections if affordable housing were built in their districts). As of 2015, Nevada no longer required letters of local support; instead, the agency notifies local jurisdictions and provides them with an opportunity for comment. In Texas, concerns also have been raised about the requirement, but its allocating agency continues to require letters of support. Specifically, in 2013, the state’s Sunset Advisory Commission recommended eliminating letters of support from state senators and representatives because the commission believed the letters gave too much power to officials far removed from the process. In 2010, a Texas developer was convicted on corruption charges, which included supplying a below-market-rate apartment to a state representative in exchange for the representative’s support for the developer’s projects. There is also ongoing litigation about the requirement for letters of local support that alleges that Treasury did not issue any regulations to prevent state actions that contribute to perpetuating racial segregation of LIHTC units and that this is a violation of its obligation to affirmatively further fair housing under the Fair Housing Act.
The litigation specifically alleges that in 2013 the Texas legislature enacted two statutes that give substantial control over the location of LIHTC projects to local municipal and county governments, one of which requires the allocating agency to provide a high number of points to developers that receive the explicit approval of the relevant municipal or local government. According to the lawsuit, Section 42 gives Treasury the authority to regulate such local government restrictions, but the agency has not issued regulations or otherwise prevented states from enacting such policies. Officials from Treasury’s Office of Tax Policy said they could not comment on ongoing litigation. Moreover, research conducted by HUD and others has analyzed how scoring criteria (like letters of local support) can influence project location, and HUD officials have expressed fair housing concerns about these letters. Specifically, officials from HUD’s Office of Fair Housing and Equal Opportunity and Office of General Counsel have cited fair housing concerns in relation to any preferences or requirements for local approval or support because of the discriminatory influence these factors could have on where affordable housing is built. In 2013, HUD and other participants in the Rental Policy Working Group—which was established by the White House to better align the operation of federal rental policies across the administration—shared these concerns with Treasury. These HUD officials suggested that eliminating local approval or support requirements or preferences from QAPs should be a top priority for Treasury and IRS, based on fair housing concerns. As of January 2016, neither Treasury nor IRS had issued any guidance about letters of local support, and Treasury’s Priority Guidance Plan does not include any plans to address HUD’s recommendation. Treasury officials said they could not comment or take action on matters related to the ongoing litigation.
In addition, research from HUD’s Office of Policy Development and Research has explored the relationship between tax credit allocation priorities as outlined in QAPs (such as local letters of support or approval) and the location of LIHTC units. For example, one HUD report found that in certain states whose QAPs prioritized local approval, the overall exposure of LIHTC units to poverty increased. Furthermore, a report by the Poverty and Race Research Action Council found that local approval requirements beyond the required Section 42 notification provide municipalities with an opportunity to “opt out” of developing LIHTC projects. Allocating agencies we visited had processes in place to meet other Section 42 requirements relating to awarding credits, long-term affordability of projects, project viability (market studies), and written explanation to the public. Allocating agencies must allocate at least 10 percent of the state housing credit ceiling to projects involving qualified nonprofit organizations. All nine allocating agencies we visited had a set-aside of at least 10 percent of credits to be awarded to projects involving nonprofits. Some agencies choose to reserve more than 10 percent. For example, the allocating agencies from Virginia and Chicago reserve 15 percent and 30 percent of their tax credits for qualified nonprofits, respectively. Officials from Illinois’s allocating agency mentioned that almost every application has a nonprofit partner and therefore the minimum set-asides are fairly easy to meet. Allocating agencies must execute an extended low-income housing commitment of at least 30 years (the first 15 years of which are the compliance period) before a building can receive credits. Allocating agencies with which we met also used various tools when awarding credits to maintain the affordability of LIHTC projects beyond the 30-year extended-use period.
One allocating agency we visited requires developers to sign agreements for longer extended-use periods, while some agencies award points to applications whose developers elect longer periods. For example, California’s allocating agency has a minimum affordability period of 55 years, 25 years longer than the 30-year requirement. Other allocating agencies, including those from Massachusetts, Virginia, Nevada, and California, award extra points to developers that elect affordability periods beyond the 30-year minimum. Nevada’s allocating agency noted that it was challenging to preserve the affordability of LIHTC units due to the qualified contract process outlined in Section 42. Under the process, owners of properties subject to an extended-use restriction may seek to remove the restriction for maintaining affordability after the first 15 years (compliance period) by requesting that the allocating agency find an eligible buyer for the property. The agency has 1 year to find a potential buyer that will maintain the property’s affordability and present an offer in accord with qualified contract provisions. If the allocating agency cannot find a buyer that will offer a qualified contract, then the current owner is entitled to be relieved of LIHTC affordability restrictions (which phase out over 3 years after the 15-year compliance period ends). Officials from Nevada mentioned that their larger projects (more than 200 units) were at risk of losing affordability because of the qualified contract process. Specifically, when the qualified contract price exceeds a development’s market value, it is difficult for the agency to find a buyer for the above-market price. The officials suggested that in such cases, the development should be priced according to the market or fair value price to attract more buyers willing to preserve the affordability of the properties.
One way we observed that allocating agencies can maintain LIHTC properties’ affordability is to restrict owners from using the qualified contract process. For example, in Michigan, the allocating agency has restricted owners from using the qualified contract process by limiting their ability to remove affordability restrictions. Before a credit allocation is made, allocating agencies must receive from the developer a comprehensive market study of the housing needs of low-income individuals in the area to be served by the project. An agency-approved third party must perform the study, and the developer must pay for it. Eight of the nine allocating agencies we visited require the market study to be submitted with a developer’s application to ensure the agency can review the study during its evaluation to award and reserve credits. One agency (Rhode Island) requires the study to be submitted after credits are reserved, but evaluates it before allocation. Officials noted that their agency is familiar with state housing needs because the market is small and a market study is not necessarily needed to make a decision about reserving credits (versus allocation). Two of the nine allocating agencies we visited had agency-specific requirements for procurement of market studies. For example, Michigan chooses a firm on behalf of the applicant and has the developer pay for the study. Agency officials noted that this process increases the independence of the market analysis and lessens any potential conflicts of interest. Rhode Island also commissions the market study (by itself or in partnership with the investor). According to Section 42, allocating agencies must provide a written explanation to the general public if they make an allocation not in accordance with established priorities and selection criteria. The allocating agencies we visited met this requirement in varying ways.
For example, two agencies, including Michigan, chose to release a memorandum to the public describing the specific circumstances of an allocation. The other agency, California, provided us with an example of a public memorandum detailing how the agency used forward reserving—that year’s credits already were allocated for the area in which the proposed development would be located—because the agency saw merit in the proposed development. Virginia made publicly available meeting minutes that discussed decisions not made in accordance with established priorities. The remaining six agencies we visited (Chicago; Illinois; Massachusetts; Nevada; Rhode Island; and Washington, D.C.) had not issued a public notification because officials said their agencies had never allocated credits not in accordance with established priorities and selection criteria. Section 42 states that allocating agencies must consider the reasonableness of costs and their uses for proposed LIHTC projects, allows for agency discretion in making this determination, and also states that credits allocated to a project may not exceed the amount necessary to assure its feasibility and its viability as a low-income housing project. Section 42 does not provide a definition or offer guidance on determining how to calculate these amounts. All nine allocating agencies we visited require applicants to submit detailed cost and funding estimates, an explanation of sources and uses, and expected revenues as part of their applications. These costs are then evaluated to determine a project’s eligible basis (total allowable costs associated with depreciable costs in the project), which in turn determines the qualified basis and ultimately the amount of tax credits to be awarded. More specifically, the agencies we visited used different methods for determining the amount of LIHTCs to award. Six agencies (California, Illinois, Michigan, Nevada, Virginia, and Washington, D.C.)
determined credit amounts explicitly in their application reviews by comparing the award amount calculated from the qualified basis with the amount calculated based on the project’s existing equity gap and awarding the lesser of the two. In other words, agencies reviewed cost information to determine the annual amount of tax credits needed to fill the gap in financing. These six agencies documented their calculations and award amounts in the project application and review files. The other three agencies (Chicago, Massachusetts, and Rhode Island) determined credit amounts similarly by reviewing financial information from developers, but did not explicitly compare the equity gap and qualified basis to determine award credit amounts. Instead, officials told us that underwriters reviewed this information and assessed if the amounts were reasonable based on their internal underwriting criteria to make award decisions. Section 42 also does not provide a definition of reasonableness of costs, giving allocating agencies discretion on how best to determine what costs are appropriate for their respective localities. In addition, Section 42 does not require criteria for assessing costs to be documented in QAPs. To update its best practices in light of the Housing and Economic Recovery Act (HERA) of 2008 and the American Recovery and Reinvestment Act of 2009, NCSHA in 2010 provided allocating agencies with recommended practices for allocating housing credits and underwriting projects, including recommendations on cost limits, credit award amounts, and fees associated with construction. However, allocating agencies have different ways for determining the reasonableness of project costs. More specifically, based on our analysis of 58 QAPs and our site visits, agencies have established various limits against which to evaluate the reasonableness of submitted costs, such as applying limits on development costs, total credit awards, developer fees, and builder’s fees.
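The lesser-of comparison described above can be sketched as a short calculation. This is a hedged illustration only: the function, the 9 percent applicable percentage, the equity price per credit dollar, and the project figures are assumptions made for the example, not program values.

```python
def annual_credit_award(eligible_basis, applicable_fraction, applicable_percentage,
                        total_dev_cost, other_funding, equity_price, credit_years=10):
    """Award the lesser of (1) the credit supported by the qualified basis
    and (2) the credit needed to fill the project's remaining equity gap."""
    # Qualified basis = eligible basis x the low-income (applicable) fraction
    qualified_basis = eligible_basis * applicable_fraction
    basis_credit = qualified_basis * applicable_percentage

    # Equity gap = total development cost minus all other financing; convert to
    # an annual credit using the price investors pay per dollar of 10-year credit
    equity_gap = total_dev_cost - other_funding
    gap_credit = equity_gap / (equity_price * credit_years)

    return min(basis_credit, gap_credit)

# Hypothetical project: $10M eligible basis, 100% low-income units,
# $12M total cost, $8M of other financing, $0.90 paid per credit dollar
award = annual_credit_award(10_000_000, 1.0, 0.09, 12_000_000, 8_000_000, 0.90)
# The $4M gap needs roughly $444,000 in annual credits, less than the
# roughly $900,000 the qualified basis would support, so the gap amount governs
```

In this sketch the gap-based amount is smaller, so it would be awarded; if the project had less outside financing, the qualified-basis amount would cap the award instead.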
Limits to development costs. NCSHA recommends that each allocating agency develop a per-unit cost limit standard based on total development costs. Fourteen of the 58 QAPs we reviewed stated that total development costs, development costs per unit, or development costs per square foot were assessed against limits the agencies established for these cost categories. Of the nine agencies we visited, four noted that their limits for development costs were benchmarks determined by costs of similar projects, historical pricing, and other factors. For instance, the Massachusetts QAP contains recommended per unit costs using cost information from the agency’s portfolio. The Illinois QAP contains per square foot and per unit cost limits, set on the basis of historical data and adjusted for inflation annually. Limits to total credit award. Similarly, agencies placed limits on the tax credit award amounts that taxpayers can claim per project. While NCSHA recommends that credit awards be limited to the amount needed to fill any financing gap for the project, several agencies had specific limits in their QAPs. According to our QAP analysis, 39 of the 58 QAPs noted such limits either as a specific dollar amount or as a percentage of the total amount of credits available for a given year. Officials from one agency told us they do not mention the award limit in the QAP because they do not want to encourage applicants to seek the maximum award amount. However, agency officials stated that they evaluate applications against a general maximum award amount that they do not publicize. At the nine agencies we visited, the maximum amount taxpayers can claim over the 10-year credit period ranged from $1 million to $2.5 million per project. Limits to fees for developers. The developer fee—payment made to the developer for its services—is included in the eligible basis.
Because the developer fee is included in the eligible basis from which the credit award is ultimately calculated, limits on the fee can help maintain reasonable costs. NCSHA guidance states that the fee should not exceed 15 percent of total development costs, except for developments meeting specified criteria (for size, characteristics, or location) that could cause fees to be higher. Based on our analysis of 2013 QAPs, 40 of 58 agencies specified limits on the value and calculation of developer fees. Some allocating agencies cited limits as the lesser of a specific dollar value or a percentage based on the number of units in a development. For example, the Michigan QAP notes that developer fees can be no higher than the lesser of 15 percent of total development costs or $2.5 million for buildings with 50 or more units; higher limits (20 percent) may be used for buildings with 49 units or fewer to create incentives for developers. Other agencies calculate the fee limit differently, using a percentage of total development cost minus costs such as acquisition, reserves, or syndication. Three of the agencies we visited had no developer fee limits in their QAPs, but two had limits in supplemental documentation that is publicly available. Limits to fees for builders. Agencies also may elect to place limits on builder’s fees. A builder’s fee is a payment made to the builder and is included in eligible basis from which the credit award is ultimately calculated. Similar to the limits on the developer fees, limits on builder’s fees can help maintain reasonable costs. Builder’s profit, builder’s overhead, or general requirements are common components of builder’s fees. NCSHA recommends that builder’s profit not exceed 6 percent of construction costs, builder’s overhead not exceed 2 percent of construction costs, and general requirements not exceed 6 percent of construction costs.
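The NCSHA fee ceilings just described lend themselves to a simple screening check. The sketch below is a hedged illustration; the function and the project figures are hypothetical, and the thresholds are only the NCSHA recommendations cited above, not regulatory limits.

```python
def check_fee_ceilings(total_dev_cost, construction_cost, developer_fee,
                       builder_profit, builder_overhead, general_requirements):
    """Flag whether each fee falls within the NCSHA-recommended ceilings:
    developer fee <= 15% of total development cost; builder's profit <= 6%,
    overhead <= 2%, and general requirements <= 6% of construction cost."""
    return {
        "developer_fee": developer_fee <= 0.15 * total_dev_cost,
        "builder_profit": builder_profit <= 0.06 * construction_cost,
        "builder_overhead": builder_overhead <= 0.02 * construction_cost,
        "general_requirements": general_requirements <= 0.06 * construction_cost,
    }

# Hypothetical project: $12M total development cost, $8M construction cost
result = check_fee_ceilings(12_000_000, 8_000_000,
                            developer_fee=1_700_000,       # under the $1.8M ceiling
                            builder_profit=500_000,        # over the $480K ceiling
                            builder_overhead=150_000,      # under the $160K ceiling
                            general_requirements=450_000)  # under the $480K ceiling
```

A flagged fee would not be automatically disallowed; as the text notes, NCSHA allows the ceilings to be exceeded for developments with characteristics that justify higher fees.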
NCSHA notes that the limits should not be exceeded except for developments with characteristics that may justify higher fees (such as small size or location in difficult development areas). Based on our QAP analysis, we found that 34 of 58 QAPs noted limits on builder’s fees, but the value and calculations varied. Some agencies elected to aggregate the fee components into one fee limit and others set limits for each component of the fees. We also found that few QAPs (4 of 58) cited specific circumstances under which developments could exceed cost or credit award limits, such as the developer demonstrating need. However, we found that eight of the nine allocating agencies we visited had policies where applicants could exceed limits that were specified in their QAPs or internal documents.

Section 42 requires allocating agencies to review cost information and determine the credit amount at three points in time: application, allocation, and placed-in-service. The agencies we visited had different practices for meeting Treasury requirements at each stage. With regard to reviewing costs at the time of application, as we previously discussed, all nine agencies we visited require applicants to submit detailed cost and funding estimates, an explanation of sources and uses, and expected revenues as part of their applications. The allocating agencies then evaluate the submitted cost estimates based on their established limits and benchmarks for reasonableness, and the total tax credit award amount is calculated. “Allocation” occurs when a project is selected for a tax credit award and credits are set aside for that specific developer as work on the project begins. Based on our site visits and project file reviews, the nine agencies we visited told us that they would respond in different ways if costs previously reported in a developer’s application increased.
Five agencies explicitly stated that award amounts would not increase beyond the amount determined at application, although awards could decrease if costs were lower than initially estimated. Four others stated that award amounts could rise after application due to cost increases. The “placed-in-service” date is when the first unit of the building is ready and available for occupancy under state and local laws. Section 42 states that a project must be placed in service by the end of the calendar year in which the tax credits were allocated. A few allocating agencies require in their QAPs that developers submit periodic progress reports to better ensure that the development will be placed in service on time. According to our QAP analysis, 7 of 58 plans required developers or owners to submit reports at regular intervals during construction to monitor progress. Five agencies we visited stated that they monitored construction progress, and one explicitly described these requirements in its QAP. In addition to progress reports, the others cited practices such as scheduled meetings with construction staff and visits to project sites as ways to monitor construction progress and ensure that placed-in-service deadlines would be met. If the project cannot be placed in service by that deadline, developers can apply for a “carryover allocation,” which, if approved, extends the placed-in-service deadline. Specifically, the project then must be placed in service no later than the end of the second calendar year after the agency approves the carryover request. Section 42 requires proof that at least 10 percent of reasonably expected basis in the project was spent in the 12 months after the execution of a carryover allocation.
Treasury regulations state that allocating agencies may verify this in several ways, including requiring that projects requesting a carryover allocation submit an independent report on the progress of construction spending to the allocating agency. The procedures we observed at all nine agencies we visited were consistent with the requirements, and all required a report to document the expenditures. However, we observed that three agencies required report submission in fewer than 12 months following allocation, a more stringent time frame than Section 42 currently requires. Two of these agencies said their deadlines were more stringent in order to give them enough time to review costs and provide developers an incentive to start construction earlier.

Section 42 allows an allocating agency to award an increase, or “boost,” of the eligible basis to up to 130 percent for a housing development in a qualified census tract or difficult development area. Although the boost is applied to the total eligible basis (as opposed to the total credit amount), the credit amount awarded increases as a result. In addition, HERA amended Section 42 in 2008 and gave allocating agencies the discretion to designate any building, regardless of location, as eligible for a boost of up to 130 percent of the eligible basis. Section 42 requires allocating agencies to find that “discretionary basis boosts” are necessary for buildings to be financially feasible before granting them to developers. Section 42 does not require allocating agencies to document their analysis of financial feasibility (with or without the basis boost). However, HERA’s legislative history included expectations that allocating agencies would set standards in their QAPs for which projects would be allocated additional credits, communicate the reasons for designating such criteria, and publicly express the basis for allocating additional credits to a project.
In addition, NCSHA recommends that allocating agencies set standards in their QAPs to determine eligibility requirements for discretionary basis boosts (those outside of qualified census tracts and difficult development areas) and make the determinations available to the public. According to our QAP analysis, 44 of 58 plans we reviewed included criteria for awarding discretionary basis boosts, with 16 plans explicitly specifying the use of basis boosts for projects that need them for financial or economic feasibility. Additionally, of the 53 project files we reviewed for cost information during our site visits, 7 received a discretionary basis boost. The discretionary boosts were applied to different types of projects (for example, historic preservation projects, projects in high-foreclosure areas, or projects with enhanced environmental standards) and on different scales (for example, statewide or citywide). In some cases, discretionary boosts were applied more broadly. For example, during our file review in Virginia, we found one development that received a boost to the eligible basis for having received certain green building certifications, although the applicant did not demonstrate financial need or request the boost. The allocating agency told us that all projects that earned the specified green building certifications received the boost automatically, as laid out in its QAP. As mentioned previously, Virginia compares (1) the award amount calculated from the qualified basis with (2) the amount calculated based on the project’s existing equity gap, and subsequently awards the lesser of the two. In this case, because the application showed that the project’s equity gap was still less than the credit amount with the basis boost, the allocating agency awarded a credit amount equal to the equity gap.
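The mechanics described above, a lesser-of developer fee cap, a basis boost, and a Virginia-style comparison of the basis-derived credit against the project's equity gap, can be sketched in a few lines. This is an illustrative simplification only: the dollar figures, the 9 percent applicable percentage, and the assumption that qualified basis equals the (boosted) eligible basis are hypothetical for illustration, not program rules.

```python
# Illustrative sketch only: hypothetical figures and simplified rules,
# not an official statement of Section 42 or any agency's QAP.

def developer_fee_cap(total_dev_cost, units):
    """Michigan-style cap: lesser of 15% of total development cost or
    $2.5 million for buildings with 50 or more units; 20% of total
    development cost for buildings with 49 units or fewer."""
    if units >= 50:
        return min(0.15 * total_dev_cost, 2_500_000)
    return 0.20 * total_dev_cost

def annual_credit_award(eligible_basis, equity_gap, boost=False,
                        applicable_pct=0.09):
    """Virginia-style sizing: award the lesser of (1) the credit computed
    from the (possibly boosted) basis and (2) the project's financing gap.
    Simplification: treats qualified basis as equal to eligible basis,
    and assumes a 9% applicable percentage (an assumption, not a rule)."""
    basis = eligible_basis * (1.30 if boost else 1.00)  # up to 130% boost
    credit_from_basis = basis * applicable_pct
    return min(credit_from_basis, equity_gap)

fee = developer_fee_cap(total_dev_cost=20_000_000, units=60)
award = annual_credit_award(eligible_basis=10_000_000,
                            equity_gap=1_100_000, boost=True)
print(fee)    # 2500000 (the $2.5 million ceiling binds; 15% would be $3.0M)
print(award)  # 1100000 (the equity gap is below the boosted-basis credit)
```

In this hypothetical, the dollar ceiling binds the developer fee, and the equity gap, being smaller than the boosted-basis credit of roughly $1.17 million, determines the award, mirroring the Virginia outcome described above.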
In response to our findings during the file review, officials from Virginia’s allocating agency said that the agency has since changed its practices to prevent automatic basis boosts from being applied and now requires additional checks for financial need before granting boosts. Furthermore, one 2013 QAP we reviewed (Arizona) described an automatic 130 percent statewide boost for all LIHTC developments. Agency officials told us they first applied the boost in 2009, when credit pricing was low. According to the officials, the automatic statewide basis boost remains in effect because officials have determined that nearly all projects will need it for financial feasibility due to limited gap financing resources. More specifically, resources decreased when the state legislature diverted part of the housing trust fund to other purposes. The agency’s 2015 QAP outlines goals for providing low-income housing in areas with high market demand where the land is frequently more expensive. All the projects in the most recent competitive funding round (2015) are expected to receive the 130 percent boost. Consistent with our previous report, IRS does not review the criteria allocating agencies use to award the boosts (most of which are found in their QAPs). IRS also has not provided guidance to agencies on how to determine the need for the additional basis to make a project financially feasible. IRS officials stated that Section 42 gives allocating agencies the discretion to determine if projects receive a basis boost and does not require documentation of financial feasibility. Additionally, IRS officials explained that because the overall amount of subsidies allocated to a state is limited, the inherent structure of the program discourages states from oversubsidizing projects, since doing so would reduce the amount of the remaining allocable subsidies and yield fewer LIHTC projects overall within a state.
However, we observed a range of practices for awarding discretionary basis boosts, including a blanket basis boost that could result in fewer projects being subsidized and provide more credits than are necessary for financial feasibility. In addition, because IRS does not regularly review QAPs, many of which list criteria for discretionary basis boosts, IRS is unable to determine the extent to which agency policies could result in oversubsidizing of projects. In our previous report, we concluded that IRS’s oversight of allocating agencies and the program was minimal and recommended that Congress consider designating HUD as joint administrator of the program based partly on its experience in administering other affordable housing programs. We continue to believe that if the program were jointly administered, HUD would be in a better position (given its housing mission) to provide guidance on discretionary basis boosts and regularly review allocating agencies’ criteria for awarding them. Allocating agencies are responsible for monitoring the compliance of LIHTC properties and agencies we visited had processes consistent with Section 42 and Treasury regulation requirements. However, agencies we visited had varying practices for submitting noncompliance information to IRS using the Form 8823 (report of noncompliance or building disposition). Furthermore, when IRS receives forms, it records little of this information into its database. IRS also does not review forms with certain noncompliance issues for audit potential. HUD, through the Rental Policy Working Group, has started to collect physical inspection results of LIHTC properties electronically, but the division within IRS responsible for the LIHTC program was unaware of this effort. Allocating agencies we visited had processes for and conducted compliance monitoring of projects consistent with Section 42 and Treasury regulation requirements. 
Treasury regulations require allocating agencies to conduct on-site physical inspections for at least 20 percent of the project’s low-income units and file reviews for the tenants in these units at least once every 3 years. In addition, allocating agencies must annually review owner certifications that affirm that properties continue to meet LIHTC program requirements. Allocating agencies we visited followed regulatory requirements on when to conduct physical inspections and tenant file reviews. Based on our site visits, five of the nine agencies conducted inspections and file reviews once every 3 years. The remaining four agencies (Chicago, Michigan, Nevada, and Rhode Island) conducted inspections and file reviews more frequently than required. Officials from Nevada noted that inspecting properties annually helped them detect possible issues earlier. In addition, officials from Chicago, Michigan, and Rhode Island said they inspect properties more frequently due to monitoring requirements associated with other public subsidies that funded the development. For example, projects funded by HUD’s HOME Investment Partnerships Program (HOME) can require inspections every 1, 2, or 3 years, depending on the size of the project. Because HOME is often used as another financing source within an LIHTC development, these agencies said they chose to inspect projects every year to satisfy both HOME and LIHTC requirements. Treasury regulations also allow agencies to delegate compliance monitoring functions to a private contractor as long as the allocating agency retains the responsibility for notifying IRS about noncompliance. Two agencies, Michigan and Massachusetts, contracted monitoring to third-party firms, citing a preference for using contractors as well as resource constraints. In addition, Treasury regulations require that the allocating agency ensure that its authorized delegate (third-party contractor) properly performs the delegated functions.
Both agencies’ contracts with the third parties outlined responsibilities, time frames, and performance reports to the allocating agency. For instance, Massachusetts receives quarterly and annual performance reports for all inspections, and Michigan has contractors upload inspection findings to an electronic database for review. Agencies we visited generally used electronic databases to track the frequency of inspections, file reviews, and certifications, although most agencies documented these reviews (such as inspection checklists and file review worksheets) on paper. Based on our review, we found that seven of the nine agencies maintained databases that compliance staff used to record inspections and file reviews, follow up on findings, and track deadlines for owners to correct noncompliance issues. The remaining two agencies kept and updated spreadsheets that included similar information. In addition, agencies we visited generally had processes to help ensure and improve the reliability, accuracy, and completeness of database information. For example, officials from Virginia noted that they have started combining databases that contain compliance information with databases that contain application information to make their datasets more complete. All agencies we visited had inspection and review processes in place to monitor projects following the 15-year compliance period, as required under Section 42. As we previously mentioned, allocating agencies must execute an extended low-income housing commitment, under which the project remains affordable for a minimum of 30 years, before a tax credit project can receive credits. After the compliance period is over, the obligation for allocating agencies to report to IRS on compliance issues ends and investors are no longer at risk for tax credit recapture. Four agencies (California; Michigan; Nevada; and Washington, D.C.)
also chose to reduce various requirements for compliance monitoring in this time frame, such as the percentage of units sampled or the frequency of review. For example, during the extended-use period, Michigan officials stated that they will conduct physical inspections once every 5 years rather than once every 3 years. Although investors are not at risk for tax credit recapture after the 15-year compliance period, agencies we visited have implemented policies to encourage compliance during the extended-use period. Specifically, all nine agencies established criteria that deduct points from or affect a developer’s future application if prior LIHTC developments had noncompliance issues during and beyond the 15-year compliance period. The agencies noted that this practice was a useful tool for promoting compliance as long as developers were interested in future projects. Treasury regulations require allocating agencies to use IRS Form 8823 (report of noncompliance or building disposition) to notify IRS of noncompliance with LIHTC provisions or any building disposition. Treasury regulations also state that agencies must report any noncompliance issues of which they become aware, including through physical inspections and tenant file reviews. The regulations also require that an allocating agency submit a form regardless of whether the owner remedied the noncompliance. That is, allocating agencies must send IRS forms with information on both uncorrected and corrected noncompliance issues. As of April 2016, IRS had received approximately 214,000 Form 8823s since calendar year 2009 (an average of nearly 27,000 forms a year). As shown in figure 2, the form includes information on the number of LIHTC units in the building, dates of noncompliance, and a list of categories to describe the type of noncompliance. 
The form also includes checkboxes to indicate if the noncompliance was corrected by the end of the correction period (the time given to the owner to correct the noncompliance issue) or remained uncorrected. IRS developed guidelines for allocating agencies to use when completing the Form 8823, the “fundamental purpose” of which was identified as providing standardized operational definitions for the noncompliance categories listed on the form. The IRS guide adds that it is important that noncompliance be consistently identified, categorized, and reported. The guide notes that the benefits of consistency include enhanced program administration by IRS. In addition, according to Standards for Internal Control in the Federal Government, information should be recorded and communicated to management and others who need it in a form that enables them to carry out internal control and other responsibilities. Management also should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. However, agencies we visited had various practices for submitting Form 8823 to IRS, including different timing of submissions and amounts of additional detail provided. For example, California, Virginia, and Rhode Island will not send a Form 8823 for minor violations of the Uniform Physical Condition Standards (UPCS)—such as peeling paint or missing lightbulbs—if the violations were corrected during the inspection. Officials from these agencies stated they chose not to send forms for such minor findings because of the administrative burden this creates for the agency, developers, and IRS. In contrast, Michigan, Nevada, and Washington, D.C., will send a form (following notification to the owner) for all instances of reportable noncompliance, whether or not the issue was resolved during the inspection or the correction period.
Partly because of these different practices, the number of forms the nine agencies told us they sent to IRS in 2013 varied from 1 to more than 1,700 (see table 2). Agencies we visited also submitted different amounts of information to accompany the Form 8823s. According to the IRS guide, agencies do not have to describe the noncompliance, but if they submit information with the form, IRS suggests that it is helpful to identify the unit number, the date out of compliance and the date corrected, and summarize the problems with a brief description. A majority of the agencies we visited send attachments when submitting Form 8823. For instance, Virginia submits the form with an attachment that includes inspection dates, types of credits, units reviewed, annual amount of allocation, and explanation of noncompliance. In contrast, Michigan sends its forms with an attachment that specifies the unit number but not the specific noncompliance issue, and Washington, D.C. does not send attachments. The timing of actual submission of forms to IRS also varied among agencies we visited. Treasury regulations require agencies to file a form no later than 45 days after the end of the correction period. Six agencies (Virginia, Illinois, Michigan, Massachusetts, Rhode Island, and Nevada) followed this time frame and sent forms to IRS on a rolling basis. The remainder waited until certain points in time to submit the forms. For example, California, Chicago, and Washington, D.C., sent forms on a monthly, annual, and biannual basis, respectively. For one of our selected agencies (Illinois), the timing of submissions to IRS was affected by staff turnover and the implementation of a new software program. Because of these changes, officials from this agency noted they had a backlog of tenant file reviews from 2013 and 2014 to assess for noncompliance and estimated that they would send Form 8823s to IRS for any previously identified issues by June 2016. 
Once the allocating agencies submit noncompliance information on Form 8823 to IRS, this federal tax information is protected by law. IRS cannot share the outcomes of the reported issues with the allocating agencies or any federal agency without taxpayer consent. All allocating agencies with which we met confirmed that IRS does not provide them with information about recapture or resolution of issues after a Form 8823 has been submitted. Factors that contributed to the variety of agency submission practices include conflicting guidance, different interpretations of the guidance, and lack of IRS feedback about agency submissions. For example, although Treasury regulations require allocating agencies to submit a form for any violation and regardless of whether the owner remedied the noncompliance, the IRS Guide for Completing Form 8823 notes that professional judgment should be used by allocating agency officials to identify “significant noncompliance issues.” IRS officials told us they are not communicating with agencies regarding form submission practices or the application of the IRS guide. Moreover, IRS officials were aware that agencies might interpret the guidance differently, but were not aware of the varying interpretations and submission rates among agencies because, as we describe in more detail in the following section, IRS uses and analyzes little of the information collected on the Form 8823. Without IRS clarification of when to send in the Form 8823, allocating agencies will continue to submit inconsistent noncompliance data to IRS, which will make it difficult for IRS to efficiently distinguish between minor violations and severe noncompliance, such as properties with health and safety issues. Furthermore, collaboration with the allocating agencies and Treasury would help IRS to obtain stakeholder perspectives about noncompliance reporting and ensure that any new guidance is consistent with Treasury regulations. 
IRS has assessed little of the noncompliance information collected on the Form 8823 or routinely used it to determine trends in noncompliance. Once the allocating agency decides to submit a Form 8823, it must be mailed to the IRS Low-Income Housing Credit Compliance Unit in Philadelphia, where tax examiners determine if the form should be recorded in IRS’s database as well as forwarded for audit potential review (which we discuss in the following section). IRS’s Compliance Unit captures little information from the Form 8823 submissions in its database and has not tracked the resolution of noncompliance issues or analyzed trends in noncompliance. Consistent with our previous report, during our visit to the Compliance Unit, we observed that the tax examiners focused on forms indicating a change in building disposition, such as the foreclosure of the project, and only entered information from these forms into the Low-Income Housing Credit database. As of April 2016, the database included information from about 4,200 of the nearly 214,000 Form 8823s IRS received since 2009 (less than 2 percent of forms received). Because little information is captured in the Low-Income Housing Credit database, IRS was unable to provide us with program-wide information on the most common types of noncompliance. Of the sample of files we reviewed from the agencies we visited, a majority of project files with Form 8823s filed in 2013 were submitted because of violations of the UPCS or local standards, a noncompliance category that tax examiners do not record in IRS’s database. All nine agencies we visited confirmed that physical inspection findings were the most common noncompliance issues found during their compliance reviews and recorded on the Form 8823. Furthermore, IRS tax examiners noted that there is no system to track the number and status of “uncorrected” forms (noncompliance not resolved in a specified correction period) that they receive.
That is, IRS has no method to determine if issues reported as uncorrected have been resolved or if properties have recurring noncompliance issues. In addition, tax examiners noted that the different timing of submissions among agencies further affects their review of these forms. For instance, agencies that submit forms on a rolling basis require examiners to reconcile the “uncorrected” forms with the “corrected” forms (noncompliance was resolved in the correction period and the “noncompliance corrected” box was checked). Tax examiners noted that they may receive an uncorrected form for review in the morning mail and the corrected form for the same building in the afternoon mail; in the interim, a warning letter would have been mailed to the tax credit investor, although the issue was ultimately resolved. Tax examiners with whom we spoke noted that they have observed inconsistencies in submissions from the allocating agencies. However, consistent with their role in processing Form 8823s, the tax examiners said that their primary responsibility was data entry and initial review of the forms rather than influencing policies or guidance to allocating agencies regarding form submission. In our July 2015 report, we found that IRS had not comprehensively captured information on credit allocation and certification in its Low-Income Housing Credit database and recommended that IRS address weaknesses identified in data entry and programming controls to ensure reliable data are collected. In response to our recommendation, IRS officials said that they are exploring possibilities to improve the database, which not only houses allocation and certification information but also data from submitted Form 8823s. Specifically, IRS has been considering moving the database to a new and updated server, which would give program managers the ability to more easily review noncompliance issues. However, this recommendation remains open.
Because forms are not completely entered into the database or submitted electronically, IRS still cannot analyze noncompliance information over time, by state, by property, or by developer. IRS tax examiners are responsible for forwarding forms to be considered for audit. If IRS tax examiner staff determine that the identified noncompliance on the Form 8823 may warrant consideration of an audit of the taxpayer (that is, the project owner claiming the tax credit), they send the form and supplemental information—known as a “classification package”—to the one full-time analyst in the Small Business/Self-Employed Division for further review. The analyst then determines the audit potential. If an audit were needed, the analyst would forward the package to the relevant IRS audit examination division. However, some information from the submitted forms is not being forwarded to the analyst, and such information could help identify serious noncompliance issues in the program. Since 2006, the Philadelphia Compliance Unit has reviewed Form 8823s, and certain issues are to receive a “mandatory consideration of audit potential.” Tax examiners told us that they forward forms with noncompliance findings subject to mandatory consideration to the analyst in the Small Business/Self-Employed Division for review. However, two noncompliance categories that are among the most directly related to the LIHTC program’s principal purpose of providing affordable housing to low-income tenants are not forwarded to the Small Business/Self-Employed Division to be considered for audit potential. Furthermore, if these types of noncompliance findings on the Form 8823 were forwarded to the analyst, they could lead to the recapture of credits. Although the Form 8823 is not the only way IRS can identify and initiate audits of taxpayers who claim LIHTCs, according to IRS officials, the majority of LIHTC-related audits of taxpayers that IRS conducted stemmed from these forms.
Standards for Internal Control in the Federal Government state that information should be recorded and communicated to management and others within the entity who need it, in a form and within a time frame that enables them to carry out their responsibilities. While IRS officials were aware that they were not reviewing forms with certain noncompliance issues for audit potential, they noted that IRS lacks the resources for the one Small Business/Self-Employed Division analyst to review each form it receives; therefore, decisions were made about which noncompliance issues to focus on when determining audit potential and which forms to forward to the analyst. IRS does not plan on updating the categories of noncompliance that must be forwarded to the analyst in the Small Business/Self-Employed Division, but officials stated that IRS continuously evaluates how to most effectively apply its resources and staff to evaluating forms. However, due to inconsistencies in form submission by allocating agencies, as previously discussed, IRS practices for forwarding certain forms, and a lack of database entries for certain categories of findings, the reviews to determine audit potential are based on incomplete information. Without a better process to gather consistent noncompliance information from agencies and regularly review compliance trends, there is a significant risk that ongoing noncompliance issues in LIHTC properties may not be detected and that appropriate actions, including recapture of tax credits, will not be taken. While IRS is limited in its ability to identify continuing noncompliance issues and potential recapture events because it captures and analyzes little of the information it collects, HUD is building data on affordable housing that includes information about LIHTC projects.
HUD’s Real Estate Assessment Center (REAC) already maintains a series of databases with information about the condition of its affordable housing portfolio, including a database of physical inspection results and a system to verify tenant incomes to accurately calculate rents. REAC collects standardized sets of information from state and local housing agencies responsible for administering HUD programs and evaluates the data collected to develop objective performance scores. HUD also analyzes the information for various purposes. Because the information is collected electronically, HUD can sort the data by state, inspection score, and property to conduct trend analyses. HUD can also disseminate information to HUD program staff and others involved in preserving affordable housing. HUD officials noted that they use these analyses to provide feedback to states about the condition of their properties. In addition, HUD officials noted that they use REAC database information when estimating future funding needs for affordable housing programs. REAC scores are published quarterly online, increasing the transparency of information about the condition of HUD’s housing portfolio. HUD officials noted that inspection findings such as health and safety deficiencies also are made available through REAC’s online portal, which state and local agencies and property owners can access. In addition to physical inspection information, HUD has experience maintaining databases to address tenant income and rent issues. Specifically, REAC maintains other databases that contain tenant income information and information on the financial condition of multifamily housing projects. In addition, HUD officials noted that REAC regularly shares data with HUD’s Office of Policy Development and Research, which conducts research on housing needs, market conditions, and outcomes of current HUD programs.
According to HUD, intended results from using REAC databases include increasing the efficiency and accuracy of income and rent determinations, removing barriers to verifying tenant-reported information, and reducing the number of incidents of underreported and unreported household income. HUD’s involvement in collecting LIHTC program information likely will increase due to the use of the REAC physical inspection database in the Rental Policy Working Group’s inspection alignment initiative. Although the Rental Policy Working Group is working to address 10 areas for improving collaboration and aligning federal rental policy, the physical inspection alignment initiative has been one of the most active efforts. Because properties that have multiple federal funding sources may be subject to several physical inspections with different standards, the working group has an initiative to align inspection standards, reporting of results, frequency, and sample size of units to reduce the number of duplicative federal physical inspections for these properties. In particular, the initiative focuses on reducing the number of duplicative inspections for HUD, the Department of Agriculture (USDA), and the LIHTC program properties. In 2011, the working group launched a pilot program for aligning inspections of such properties, including those subsidized with LIHTCs. As of April 2016, HUD noted that 31 states were participating in the physical inspection pilot and that the REAC physical inspection database has been used to capture the inspection information from these states. Further, HUD officials expect participation in the pilot to eventually include all states. To bolster its data collection effort, HUD officials also said they plan to collect physical inspection data from the pilot states for properties solely subsidized by LIHTC. 
HUD officials noted several advantages of adding LIHTC inspection data to the REAC database, including the ability for HUD to determine regional trends in new construction or rehabilitated projects, analyze information about the types of tenants being served by the program, assess the location of LIHTC properties, and track physical inspection noncompliance trends within the program. HUD officials said this initiative will be completed in phases to address technology and data quality concerns. HUD officials noted that most allocating agencies do not have electronically generated inspection reports and HUD has been working to determine the best method for incorporating this information in the REAC database. HUD completed testing of the electronic collection of inspection reports of properties solely subsidized by LIHTC in March 2016 and plans to expand the collection of LIHTC inspection information throughout 2016. HUD officials told us that if asked, they would provide IRS with access to the database. IRS is responsible for administering the LIHTC program, but its primary division overseeing the program currently is not involved in interagency efforts to modernize, standardize, and improve compliance monitoring of properties. IRS officials from the Small Business/Self-Employed Division told us that they were not aware of HUD REAC’s databases, capabilities, or ongoing efforts to collect LIHTC inspection information through the Rental Policy Working Group. While they stated that the previous analyst was involved in the group’s early planning efforts, the Small Business/Self-Employed Division has not participated since that analyst retired and has no plans to participate in any new working group initiatives because statutory restrictions prevent them from sharing data collected on the LIHTC program with other federal agencies. 
Furthermore, although Treasury has been involved with the inspection alignment, officials noted that IRS’s primary role has been for the Chief Counsel to provide legal authority for LIHTC property inspections to be done using REAC inspection standards. Standards for Internal Control in the Federal Government state that management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant effect on the agency achieving its goals and that effective information technology management is critical to achieving useful, reliable, and continuous recording and communication of information. The Rental Policy Working Group aims to provide a forum for agencies to collaborate and achieve alignment of federal inspections of rental properties, but the lack of participation by the Small Business/Self-Employed Division has meant that IRS has been unable to leverage the progress made by the working group. As we previously mentioned, IRS cannot easily discern noncompliance in the LIHTC program due to the small amount of information entered into its database, and officials noted that IRS is considering moving the database that houses Form 8823s, among other information, to a new and updated server. However, in conjunction with the working group, HUD, USDA, and the participating allocating agencies have been working to produce and compile consistent, electronic LIHTC inspection information. Having the Small Business/Self-Employed Division participate in the working group would provide IRS with opportunities to use compliance data from HUD’s database. 
This information and further collaboration with HUD would help IRS better understand the prevalence of noncompliance in LIHTC properties and reevaluate how the Form 8823 can be modified to better capture the most significant information from allocating agencies as well as how IRS determines which types of noncompliance issues should be considered for audit potential. Although allocating agencies play a key role in allocating tax credits, determining the reasonableness of development costs, and monitoring project compliance, IRS is the federal entity responsible for monitoring the agencies and enforcing taxpayer compliance. IRS oversight of allocating agencies continues to be minimal, particularly in reviewing QAPs and allocating agencies’ practices for awarding discretionary basis boosts. As we concluded in our July 2015 report, although LIHTC is the largest federal program for increasing the supply of affordable rental housing, it is a peripheral program for IRS in terms of resources and mission. Without regular monitoring of allocating agencies, IRS cannot determine the extent to which agencies comply with program requirements. As a result, we continue to believe, as we suggested in 2015, that Congress should consider designating HUD as a joint administrator of the program responsible for oversight due to its experience and expertise as an agency with a housing mission. For example, applying HUD’s experience in administering affordable housing programs to address areas such as QAP review, federal fair housing goals, and tenant income and rent issues would provide information, analysis, and potentially guidance on issues that apply across all allocating agencies. Our work for this review highlights the need for clarification to guidelines on submitting noncompliance information as well as further collaboration with HUD and other federal agencies to help IRS improve functions related to tax enforcement. 
The reasons for inconsistent reporting of noncompliance on Form 8823 include conflicting guidance, different interpretations of the guidance, and lack of IRS feedback about agency submissions. Clarifying what to submit and when—in collaboration with the allocating agencies and Treasury—could help IRS improve the quality of the noncompliance information it receives and help ensure that any new guidance is consistent with Treasury regulations. In addition, IRS has not taken advantage of the important progress HUD has made through the Rental Policy Working Group to augment its databases with LIHTC property inspection data. This data collection effort has created opportunities for HUD to share inspection data with IRS that could improve the effectiveness of reviews for LIHTC noncompliance. However, the IRS division managing the LIHTC program is not involved in the Rental Policy Working Group. Such involvement would allow IRS to leverage existing resources, augment its information on noncompliance, and better understand the prevalence of noncompliance. Specifically, IRS is missing an opportunity to identify pertinent information on LIHTC properties in REAC databases that could help IRS reevaluate how the Form 8823 can be revised to better capture the most significant information from allocating agencies. The information also could help IRS reevaluate which categories of noncompliance should be further reviewed for audit potential. GAO is making the following three recommendations: To receive more consistent information on LIHTC noncompliance, the IRS Commissioner should collaborate with the allocating agencies to clarify when allocating agencies should report such information on the Form 8823 (report of noncompliance or building disposition). The IRS Commissioner should collaborate with the Department of the Treasury in drafting such clarifications to help ensure that any new guidance is consistent with Treasury regulations. 
To improve IRS’s understanding of the prevalence of noncompliance in the program and to leverage existing resources, the IRS Commissioner should ensure that staff from the Small Business/Self-Employed Division participate in the physical inspection alignment initiative of the Rental Policy Working Group. To improve IRS’s processes for identifying the most significant noncompliance issues, the IRS Commissioner should evaluate how IRS could use HUD’s REAC databases, including how the information might be used to reassess reporting categories on the Form 8823 and to reassess which categories of noncompliance information have to be reviewed for audit potential. We provided a draft of this report to IRS, HUD, and Treasury for their review and comment. IRS and HUD provided written comments that are reprinted in appendixes II and III. Treasury did not provide any comments on the findings or recommendations. All three agencies provided technical comments that were incorporated, as appropriate. We also provided a draft to the National Council of State Housing Agencies (NCSHA), a nonprofit organization that represents the allocating agencies, for its review and comment. NCSHA provided written comments that are reprinted in appendix IV. IRS agreed that it should improve noncompliance reporting and data collection, but added that it would have to consider whether it has the resources to implement the recommendations. For example, IRS wrote that it would commit staff time to attend a few of the Rental Policy Working Group meetings to ascertain whether participation would be cost-effective. IRS noted that the working group was established to address fair housing concerns and cannot address tax matters. The Rental Policy Working Group is addressing 10 areas of concern, including fair housing compliance, for improving collaboration and aligning federal rental policy. 
However, the pilot to reduce the costs and increase the efficiency of physical inspections has been one of the most active efforts undertaken by the Rental Policy Working Group to date. Moreover, as noted in this report, the physical condition of projects is a component of program compliance, which affects taxpayers’ eligibility to claim the tax credit. IRS also stated that the REAC information is limited because not all the states are involved with the data collection effort and the REAC database contains properties that are not LIHTC properties. Although not all states are involved in the pilot to align physical inspections, the number of participating states has grown from 6 in 2011 to 31 in 2016. HUD officials expect participation in the physical inspection pilot to further expand and eventually include all states. HUD also plans to expand the electronic collection of inspection reports of properties solely subsidized by LIHTC. As we state in the report, IRS could have a better understanding of the prevalence of noncompliance by using REAC’s computerized data on and analysis of the physical condition of properties—a capability that IRS does not currently have. It could also help IRS evaluate how the Form 8823 can be revised to better capture noncompliance information from allocating agencies and help IRS determine which categories of noncompliance should be further reviewed for audit potential. While we understand that IRS has limited resources, leveraging HUD’s work with the Rental Policy Working Group pilot and accessing REAC’s computerized system could result in cost savings. IRS noted that it provides extensive information to allocating agencies through its audit technique guide, but, as we noted in the report, allocating agencies have been interpreting the guide differently, which results in the agencies inconsistently reporting the data to IRS. 
Additionally, each year allocating agencies send thousands of Form 8823s to IRS’s Low-Income Housing Credit Compliance Unit in Philadelphia that are not entered into a database or considered for audit. Instead, as we note in our report, many of these files are held for 3 years at the Compliance Unit and then moved to the Federal Records Center for another 15 years before being destroyed. Using REAC’s database with assistance from HUD could allow IRS to analyze noncompliance information over time, by state, by property, or by developer, which are capabilities currently unavailable to IRS. In HUD’s comments, it stated that with regard to our July 2015 recommendation calling for enhanced interagency coordination, it remains supportive of mechanisms to use its expertise and experience administering housing programs to enhance the effectiveness of LIHTC. HUD stated that it will continue its work in areas such as fair housing and physical inspection protocols in order to help the LIHTC program perform more effectively. As our report noted, applying HUD’s experience in administering affordable housing programs to address areas such as QAP review, federal fair housing goals, and tenant income and rent issues could provide information, analysis, and potentially guidance on issues that apply across all allocating agencies. In its comments, NCSHA reiterated its disagreement with our previous recommendation to Congress, noting that introducing HUD as a co-administrator would reduce program effectiveness or potentially result in HUD micromanaging allocating agency decisions. We disagree because the findings from this report highlight specific areas in which HUD would enhance the administrative support and oversight of the program from a federal level. 
For example, this report shows that HUD could apply its experience in administering affordable housing programs—including collecting physical inspection data, analyzing noncompliance trends, and reviewing fair housing issues—that could result in guidance to support the work done at the allocating agencies. IRS would still retain all responsibilities related to tax law enforcement. Further, while we did not make recommendations directly to the allocating agencies, our recommendations to IRS reflect concerns about some state practices that we observed, including the missing QAP items and the use of an automatic basis boost. NCSHA also noted that it encourages GAO and others to view the QAPs broadly as a collection of documents that also include other related publicly available documents and allocation practices by the agencies. We recognize that the details of each required preference and selection criterion may be described in more detail in other documents. However, the QAP is the sole document required by Section 42, and we maintain that the plans should be consistent in meeting federal requirements and transparent about an allocating agency’s practices for awarding credits to projects. Additionally, we note in our report that IRS does not regularly review QAPs, but in the few audits it has conducted of allocating agencies, IRS has identified findings related to the QAPs, such as missing preferences and selection criteria. For those audits, IRS recommended that the QAP document—not auxiliary documents—should be updated to address the identified deficiencies. Leveraging HUD in the oversight process could help ensure that QAPs are reviewed regularly and meet minimal federal requirements. Finally, NCSHA states that GAO seems to confuse the financial feasibility analysis with standards states may set for eligibility for the discretionary basis boost. 
We acknowledge in our report that allocating agencies conduct financial feasibility and other analyses to determine the appropriate amount of LIHTCs to award and describe the different methods we observed in the nine selected agencies. However, as noted in the report, we observed a range of practices for awarding discretionary basis boosts, including an automatic basis boost that is applicable to all LIHTC projects and could lead to fewer projects being subsidized. Further, because IRS does not regularly review QAPs that list criteria for discretionary basis boosts, IRS is unable to determine the prevalence of these types of policies among allocating agencies that could result in oversubsidizing projects. Furthermore, continuance of such policies could establish a precedent for other states to adopt. NCSHA wrote that nothing in Section 42 directs IRS to provide guidance about discretionary basis boosts. Although not explicit in Section 42, we maintain that federal agencies and state allocating agencies—acting as stewards of federal resources—have the responsibility to efficiently use such resources to the best of their ability, particularly in what NCSHA has accurately noted as a resource-constrained environment. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Housing and Urban Development, and Treasury; the Commissioner of Internal Revenue; the appropriate congressional committees; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix V. This report discusses how state and local allocating agencies administer the Low-Income Housing Tax Credit (LIHTC) program and any oversight issues the allocating agencies or the Internal Revenue Service (IRS) face in implementing the program. More specifically, this report describes how allocating agencies (1) award LIHTCs, (2) assess the reasonableness of development costs and financial feasibility of LIHTC properties, and (3) monitor LIHTC properties’ compliance with program requirements. For all three objectives, we conducted a structured analysis of 2013 Qualified Allocation Plans (QAP) to gather information about the practices of allocating agencies for awarding credits, assessing costs, and monitoring. The QAPs we reviewed were from all 50 states, the District of Columbia, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, the U.S. Virgin Islands, and the cities of Chicago and New York City, for a total of 58 QAPs. For our analysis, which primarily focused on information in the QAPs themselves, we developed a Data Collection Instrument (DCI). To help determine what questions to include, we reviewed a small sample of plans to ascertain what types of information were available in QAPs and interviewed housing groups, academics, the National Council of State Housing Agencies (NCSHA), and officials from IRS and the Department of Housing and Urban Development (HUD). The DCI did not capture information from the agencies’ supplemental LIHTC materials, such as applications, manuals, and other documents. However, in an effort to present the most recent information available on certain practices, we also reviewed 2015 QAPs and other LIHTC documents at nine allocating agencies we visited (we discuss agency selection and the site visits below). 
The results of the DCI analysis provide insights into what information these plans include in relation to awarding credits, assessing costs, and monitoring compliance. In addition, we visited nine allocating agencies to observe the processes used to award tax credits, assess the reasonableness of development costs, and monitor compliance of properties. The nine agencies were in California; Chicago, Illinois; Illinois; Massachusetts; Michigan; Nevada; Rhode Island; Virginia; and Washington, D.C. We primarily considered the following four factors to select this nonprobability nongeneralizable sample: 2014 state population, which is used to determine the amount of LIHTCs available to each state annually; findings from HUD’s Office of the Inspector General and state auditors on LIHTC-related audits; selected information from our analysis of 2013 QAPs, such as types of scoring criteria used, limits to total development costs, and references to separate compliance monitoring guidelines; and selected information from NCSHA’s 2012 Factbook, such as the amount of credits requested and allocated in 2012 and whether the allocating agency contracted out compliance monitoring activities. We also considered variation in geographic location, information about program administration in press releases or media articles, perspectives from interviews with industry experts, and the presence of suballocating agencies within a state. While the results of the site visits cannot be generalized to all allocating agencies, they provided insight into the ways in which agencies implemented various LIHTC requirements. During our visits, we conducted a file review of a nongeneralizable set of projects at each allocating agency to collect information about agency practices as well as compliance with program requirements. 
We used a random sample method to select files based on the full list of applicants that were awarded tax credits in 2013, the full list of projects placed-in-service in 2013, and the full list of projects that were inspected in 2013 and any noncompliance issues found. We assessed the reliability of the databases that contained the information at each allocating agency by reviewing documentation (such as data dictionaries and database manuals) and interviewing the relevant officials responsible for administering and overseeing the databases. We determined the data were reliable for the purpose of selecting files for our review. For the file review, we also used a checklist to help ensure that we were capturing consistent and pertinent information from each file. For example, in developing the checklist, we reviewed Section 42 of the Internal Revenue Code (Code) as well as Department of the Treasury (Treasury) regulations to help ensure we could document relevant information that evidenced agency compliance with federal requirements. To describe how allocating agencies award LIHTCs, we reviewed the Code, Treasury regulations, and guidance. During our site visits, we interviewed agency officials for information on how the agencies develop and apply selection criteria in reviewing applications and awarding tax credits to developers. We also conducted file reviews at each of the selected agencies—for a total of six approved applications (or all approved applications, if less than six were selected in the 2013 allocation round)—to determine what information and documentation developers submitted with their applications, and how allocating agencies reviewed and scored the applications. Using the checklist, we reviewed how agencies met Code requirements for market studies, extended use agreements, and local government notifications. 
To identify any issues the IRS faces in overseeing allocating agencies awarding LIHTCs, we interviewed officials from IRS and Treasury to discuss agencies’ practices and any guidance issued. We also reviewed federal internal control standards to identify key activities that help ensure that compliance with applicable laws and regulations is achieved. We also interviewed officials from HUD’s Office of Fair Housing and Equal Opportunity to gain their perspective on how allocating agencies, through their QAPs and practice of awarding LIHTCs, can affect fair housing. To describe how allocating agencies assess the reasonableness of development costs and financial feasibility of LIHTC properties, we reviewed the Code, Treasury regulations and guidance, and best practices from NCSHA. We conducted interviews at the nine agencies to obtain perspectives on how the agencies assess the reasonableness of development costs and financial feasibility, including the types of cost limits that were established, how required cost certifications were documented, and how cost overruns were handled. We also conducted a file review at each of the agencies for three approved applications from 2013 and three developments that were placed-in-service in that year to determine how allocating agencies analyzed project feasibility and viability. Using the checklist, we reviewed the agencies’ determinations of credit amounts as well as how agencies met the Code requirement to determine credit amounts at three points in time (at application, allocation, and placed-in-service). To identify any issues the IRS faces in overseeing allocating agencies assessing the reasonableness of costs, we interviewed officials from IRS and Treasury about agencies’ practices for assessing the reasonableness of development costs and financial feasibility of LIHTC properties. 
We also reviewed federal internal control standards to identify key activities that help program managers achieve desired results through effective stewardship of public resources. We also interviewed HUD officials from the Office of Fair Housing and Equal Opportunity and from the Office of Multifamily Housing Programs to gain perspectives on development cost limits and the use of basis boosts in the LIHTC program. To evaluate how allocating agencies monitor LIHTC properties’ compliance with program requirements, we reviewed the Code, Treasury regulations, and IRS guidance that describe federal requirements for such monitoring. We also reviewed IRS documentation on its roles and responsibilities in overseeing allocating agencies and taxpayers. We conducted interviews at the nine agencies to obtain perspectives on how the agencies met Code requirements for physical inspections and file reviews, communicated inspection findings to property owners, and transmitted noncompliance findings to IRS using Form 8823. We conducted a file review at each of the agencies for six developments that were inspected in 2013 and reviewed any prior inspections and annual certifications the developments had on file. Using the checklist, we identified and reviewed the frequency of inspections, any noncompliance findings, and how they were resolved, as detailed in the files. To identify any issues the IRS faces in overseeing allocating agencies’ compliance monitoring of LIHTC properties, we reviewed IRS’s processes for identifying and conducting audits on taxpayers claiming LIHTCs and conducted a site visit to the IRS Low-Income Housing Credit Compliance Unit in Philadelphia, Pennsylvania, to observe how submitted forms were processed. We interviewed officials from IRS and Treasury about agencies’ practices for submitting Form 8823 and how IRS records information in its Low-Income Housing Credit database. 
We also reviewed federal internal control standards to identify key activities that help ensure that compliance with applicable laws and regulations is achieved. We interviewed HUD officials from the Real Estate Assessment Center (REAC) to discuss the databases they manage and their efforts to collect information on LIHTC properties, as well as officials from the Office of Policy Development and Research about how HUD uses the data it collects. Lastly, we interviewed HUD officials involved in the Rental Policy Working Group to obtain updates on the interagency effort to consolidate required physical inspections of subsidized rental housing. We conducted this performance audit from February 2014 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above Andy Finkel (Assistant Director), Christine Ramos (Analyst-in-Charge), Jordan Anderson, Jessica Artis, William R. Chatlos, Max Glikman, Anar Jessani, Elizabeth Jimenez, Stuart Kaufmann, John McGrail, Marc Molino, Ruben Montes de Oca, Anna Maria Ortiz, Nadine Garrick Raidbard, Barbara Roesmann, and MaryLynn Sergent made major contributions to this report.
LIHTC encourages private-equity investment in low-income housing through tax credits. The program is administered by IRS and allocating agencies, which are typically state or local housing finance agencies established to meet affordable housing needs of their jurisdictions. Allocating agency responsibilities (in Section 42 of the Internal Revenue Code and regulations of the Department of the Treasury) encompass awarding credits, assessing reasonableness of project costs, and monitoring projects. GAO was asked to review allocating agencies' oversight of LIHTC. This report reviews how allocating agencies administer the LIHTC program and identifies any oversight issues. GAO reviewed regulations and guidance for allocating agencies; analyzed 58 allocation plans (from 50 states, the District of Columbia, U.S. territories, New York City, and Chicago); performed site visits and file reviews at nine selected allocating agencies; and interviewed IRS and HUD officials. This is a public version of a sensitive report that GAO issued in May 2016 and does not include details that IRS deemed tax law enforcement sensitive. Allocating agencies that administer the Low-Income Housing Tax Credit (LIHTC) program have certain flexibilities for implementing program requirements and the agencies have done so in various ways. Although GAO found that allocating agencies generally have processes to meet requirements for allocating credits, reviewing costs, and monitoring projects, some of these practices raised concerns: More than half of the qualified allocation plans (developed by 58 allocating agencies) that GAO analyzed did not explicitly mention all selection criteria and preferences that Section 42 of the Internal Revenue Code requires. Allocating agencies notified local governments about proposed projects as required, but some also required letters of support from local governments. 
The Department of Housing and Urban Development (HUD) has raised fair housing concerns about this practice, saying that local support requirements (such as letters) could have a discriminatory influence on the location of affordable housing. Allocating agencies can increase (boost) the eligible basis used to determine allocation amounts for certain buildings at their discretion. However, they are not required to document the justification for the increases. The criteria used to award boosts varied, with some allocating agencies allowing boosts for specific types of projects and one allowing boosts for all projects in its state. In a July 2015 report, GAO found that Internal Revenue Service (IRS) oversight of allocating agencies was minimal and recommended joint administration with HUD to more efficiently address oversight challenges. GAO's work for this review continues to show that IRS oversight remains minimal (particularly in reviewing allocation plans and practices for awarding discretionary basis boosts) and that action is still warranted to address GAO's prior recommendation. In this report, GAO also identified the following issues related to managing noncompliance information from allocating agencies: IRS provides discretion to allocating agencies for reporting noncompliance data, and has not provided feedback about data submissions. Consequently, allocating agencies have been inconsistently reporting these data to IRS. IRS has not used the information that it receives from allocating agencies to identify trends in noncompliance. GAO's analysis shows that IRS had recorded only about 2 percent of the noncompliance information it received since 2009 in its database. IRS has not used key information when determining whether to initiate an audit, potentially missing opportunities to initiate LIHTC-related audits. 
In contrast, HUD collects and analyzes housing data, and through a Rental Policy Working Group initiative, now adds LIHTC inspection results to its database. The IRS division responsible for LIHTC was unaware of this effort and is not involved with the working group. By participating in the working group, IRS could leverage HUD data to better understand the prevalence of noncompliance in LIHTC properties and determine whether to initiate audits. GAO recommends that IRS clarify when agencies should report noncompliance and participate in the Rental Policy Working Group to assess the use of HUD's database to strengthen IRS oversight. IRS agreed it should improve its noncompliance data, but also stated that it had to consider resource constraints. HUD supported using its expertise and experience administering housing programs to improve LIHTC.
AGOA is a trade preference program that provides eligible sub-Saharan African countries duty-free access to U.S. markets for more than 6,000 dutiable items in the U.S. import tariff schedules. AGOA also includes goals related to U.S. government technical assistance in sub-Saharan Africa. Countries must meet certain eligibility criteria to take advantage of AGOA preferences, and the program had 41 such eligible countries as of December 1, 2014. AGOA legislation directs the President to target technical assistance to serve specific TCB-related goals that promote economic reform and development, and to develop and implement certain policies aimed at encouraging investment in sub-Saharan Africa. With regard to technical assistance, AGOA directs the President to focus such assistance on the following goals: 1. Develop relationships between U.S. and sub-Saharan African firms through business associations and networks. 2. Provide assistance to the governments of sub-Saharan African countries in liberalizing trade and promoting exports, bringing their legal regimes into compliance with the standards of the World Trade Organization (WTO), making financial and fiscal reforms, and promoting greater agribusiness linkages. 3. Address critical agriculture policy issues such as market liberalization, agriculture export development, and agribusiness investment in processing and transporting agriculture commodities. 4. Increase the number of reverse trade missions to growth-oriented countries in sub-Saharan Africa. 5. Increase trade in services. 6. Encourage greater sub-Saharan African participation in future WTO negotiations on services and further commitments to encourage the removal of tariff and nontariff barriers. Trade in services refers to the buying and selling of intangible products and activities; examples of trade-in-services sectors include tourism, financial services, and telecommunications. See GAO, Sub-Saharan Africa: Trends in U.S. 
and Chinese Economic Engagement, GAO-13-199 (Washington, D.C.: Feb. 7, 2013). Literature on AGOA and TCB has identified constraints in AGOA countries’ manufacturing sectors, including problems with cost and quality of inputs, access to finance, trade logistics such as the high cost of transporting goods, and inadequate workforce skills. The International Finance Corporation, a member of the World Bank Group, has reported that less than a quarter of adults in sub-Saharan Africa have access to formal financial services, and that lack of access to finance is a constraint to economic growth overall and to the growth of small and medium-sized enterprises in the region. Another study found that AGOA apparel production is concentrated in low-skill tasks with little knowledge transfer to local workers, and that the global competitiveness of AGOA exporters still depends on the preferences they receive under AGOA. Many AGOA countries lack the capacity to produce and export goods in the necessary quantity and at the quality U.S. markets require. This same challenge may also affect potential investors’ decisions about engaging in Africa. Literature on AGOA and TCB has also shown that poor infrastructure conditions in sub-Saharan Africa remain a key challenge that undermines export competitiveness. In 2014, USITC reported that weak transportation infrastructure, including poor rural roads, inefficient port facilities, and burdensome customs procedures, is among the impediments to export growth and competitiveness for sub-Saharan Africa. The report noted that a number of factors directly affect the cost and timeliness of delivery of goods to the U.S. market, including distance to market, perishability of products, freight rates, and reliability of trade linkages. Since at least 2001, the United States has provided TCB assistance to developing countries to help them participate in and benefit from global trade. 
U.S. agencies generally define TCB broadly to include all types of development assistance that enhance a country’s ability to secure benefits from international trade. Among other things, such assistance can address (1) the regulatory environment for business, trade, and investment; (2) constraints such as low capacity for production and entrepreneurship; and (3) inadequate physical infrastructure, such as poor transport and storage facilities. USAID collects data to identify and quantify the U.S. government’s TCB activities in developing countries through an annual survey of U.S. agencies and maintains the survey results in the U.S. government’s publicly available online TCB database. This database of TCB funding defines 14 categories of TCB assistance provided by the U.S. government (see app. II for a detailed list of TCB category definitions and examples of related activities). The majority of U.S. TCB funding for AGOA countries from 2001 through 2013 was provided for three categories of activities: trade-related infrastructure, trade-related agriculture, and trade facilitation (see fig. 1). Total U.S. government funding for TCB assistance for AGOA countries from 2001 through 2013 was approximately $5 billion. In that time period, U.S. government TCB assistance for AGOA countries peaked in 2008 and declined sharply in 2012 (see fig. 2). The U.S. government provided funding for TCB assistance from 2001 through 2013 for all 41 AGOA countries. Sixty-eight percent of all U.S. government TCB funding obligated for AGOA countries from 2001 through 2013 was for 10 of these countries (see table 1). Although the President affirmed the U.S. government’s commitment to providing TCB assistance for AGOA countries in August 2014, no single agency is responsible for this assistance. According to our analysis of the U.S. 
government’s TCB database, MCC and USAID are the agencies that reported providing the most funding for AGOA countries, and accounted for 90 percent of all TCB assistance to these countries from 2001 through 2013 (see fig. 3). While USAID funds activities that have clear and direct links to TCB, MCC funds activities that may be more indirectly related to international trade. MCC conducts TCB-related activities that support its broader strategic and agency goals. In contrast, one of USAID’s core development objectives is to promote sustainable, broad-based economic growth by helping developing countries increase their exports through trade capacity building. USAID aims to achieve its TCB goal by supporting participation in trade negotiations, implementation of trade agreements, and economic responsiveness to trade opportunities. MCC’s TCB-related activities in sub-Saharan Africa are supportive of AGOA. MCC identifies a relationship between AGOA and the agency’s role in improving economic growth, including through its trade-related infrastructure activities in selected sub-Saharan African countries. According to agency officials, MCC’s focus on economic growth and encouraging private sector investment is in line with the goals of AGOA. Furthermore, agency officials said that MCC infrastructure-related investments have included a number of projects that support global trade in sub-Saharan Africa. From 2005 through 2013, MCC funded TCB activities in 15 of the 41 AGOA countries (see table 2). MCC’s TCB funding for AGOA countries has supported a range of TCB activities, largely focused on trade-related infrastructure. 
MCC’s TCB assistance in AGOA countries has covered 10 of the 14 TCB categories, with the majority of funding, over 75 percent, concentrated on trade-related infrastructure (see fig. 4). MCC’s trade-related infrastructure projects in AGOA countries cover a range of activities including building roads, improving ports, and expanding access to electricity. For example, MCC compacts in Mozambique and Malawi include large infrastructure components, as described below: Mozambique. MCC signed a compact with Mozambique in 2007 for about $506.9 million, of which about $222 million was obligated for TCB-related activities, mostly concentrated on trade-related infrastructure. This compact included $176 million in trade-related infrastructure assistance for a roads project rehabilitating 491 kilometers of key segments of the country’s transportation network. The project aimed to improve access to markets, resources, and services; reduce transport costs for the private sector; and expand connectivity across the region. Malawi. MCC signed a compact with Malawi in 2011 for $350.7 million, and data show that the entire amount was obligated for trade-related infrastructure activities. Specifically, the compact is a single-sector power revitalization project that aims to increase the capacity and stability of the national electricity grid and bolster the efficiency and sustainability of hydropower generation. Officials we spoke to in Ghana and Ethiopia, the two AGOA countries where we conducted fieldwork, highlighted a range of ongoing infrastructure improvements and challenges. Business representatives in Ghana, where MCC funded $240 million in TCB-related assistance, noted that U.S. TCB activities had helped to reduce problems with land transportation. In Ethiopia, a local business representative noted that infrastructure challenges had been diminished through improvements in transportation, which had reduced costs for importing and exporting goods. 
Officials and local business representatives in both Ethiopia and Ghana also cited a range of ongoing infrastructure challenges that acted as an impediment to conducting business. For example, in Ethiopia, officials cited infrastructure issues, among others, as an impediment to conducting business in the country, and representatives of local businesses noted that further investment was needed in services such as power, roads, and telecommunications. In addition, officials in Ghana stated that port congestion caused delays, and the manufacturing sector was diminished partly because of a lack of access to reliable power. A partnership among the U.S. government, African governments, the private sector, and others, Power Africa aims to expand access to electricity to households and businesses and increase Africa’s global competitiveness. A related initiative, Trade Africa, is a partnership that works with African governments to increase internal and regional trade within Africa, and expand trade and economic ties among Africa, the United States, and other global markets. USAID’s TCB funding has supported a range of TCB activities for AGOA countries, with trade-related agriculture and trade facilitation being the two largest categories. USAID has funded TCB assistance activities in 39 of 41 AGOA countries; see table 3 for AGOA countries with the highest USAID TCB funding. USAID’s TCB assistance activities in AGOA countries cover all 14 TCB categories, with the majority of funding, over 75 percent, concentrated on trade-related agriculture, trade facilitation, and trade-related infrastructure (see fig. 5). From 2002 to 2004, USAID established three regional trade hubs in sub-Saharan Africa that serve as primary implementers of U.S. TCB assistance for sub-Saharan African countries (see fig. 6). These USAID-funded trade hubs are staffed with regional advisers who provide a range of services to U.S. agencies, African governments, and the private sector, noted as follows: East Africa trade hub, established in Nairobi, Kenya, in 2002. 
This hub aims to increase food security and economic growth in the following 9 East or Central African countries: Burundi, Ethiopia, Kenya, Madagascar, Mauritius, Rwanda, South Sudan, Tanzania, and Uganda. West Africa trade hub, established in Accra, Ghana, in 2003. This hub focuses on addressing critical issues that hamper export competitiveness such as high transport and telecommunications costs, limited access to finance, and inconsistent implementation of regional trade policies in 20 West African countries: Benin, Burkina Faso, Cameroon, Cape Verde, Chad, Côte d’Ivoire, Gabon, The Gambia, Ghana, Guinea, Guinea-Bissau, Liberia, Mali, Mauritania, Niger, Nigeria, São Tomé and Príncipe, Senegal, Sierra Leone, and Togo. Southern Africa trade hub, established in Gaborone, Botswana, in 2004. This hub’s primary goals are to increase international competitiveness, as well as intraregional trade and food security, by promoting greater competitiveness in agriculture value chains, increasing investment and export opportunities in the textile and apparel sector, and supporting a better business-enabling environment in 8 Southern African countries: Botswana, Lesotho, Malawi, Mozambique, Namibia, South Africa, Swaziland, and Zambia. Along with implementing activities to support U.S. initiatives in areas such as food security, USAID-funded trade hubs seek to support trade facilitation and market linkages and to raise awareness of AGOA among AGOA-exporting firms and countries. For example, from 2007 through 2012, USAID provided funding for activities implemented through the West Africa trade hub to address economy-wide constraints such as the transport and trade barriers affecting the region’s ports, corridors, and borders. The trade hub established an advocacy campaign to address such trade barriers and help decrease the costs associated with trading. 
The trade hub also worked with governments in the region to establish border information centers that help stakeholders coordinate, and provide information and assistance to traders at borders to ease transport bottlenecks. The trade hub in East Africa has helped subsidize the cost to exporters of attending trade shows to gain exposure to U.S. markets in sectors including leather goods and apparel, and has facilitated U.S. buyers going to sub-Saharan Africa. Among its trade-related agriculture activities, the Southern Africa trade hub has provided training to medium- and large-scale commodity buyers and storage operators trading in maize and soybeans to help reduce postharvest loss and improve procurement practices. Officials we spoke to in Ethiopia and Ghana cited some improvement in areas where USAID has provided TCB assistance while highlighting other ongoing challenges related to facilitating exports under AGOA. Although the West Africa trade hub began efforts in 2009 to help facilitate financial services for local companies, local business representatives from the cashew and shea industries in Ghana said lack of access to finance and the business community’s lack of awareness on how to use AGOA remain challenges to utilizing AGOA. A representative of the horticulture industry in Ethiopia cited inefficient customs processes and lack of access to finance in the country as challenges to more fully utilizing AGOA. He also said that while certain logistical challenges had been addressed in terms of direct airline routes to the United States, increasing awareness of the Ethiopian flower industry would help improve access to the U.S. market. The owner of a textile goods company who had exported products under AGOA said he was unable to obtain certain inputs for his products in Ethiopia, a fact that affected decisions on what to produce. 
Furthermore, he said local businesses were rudimentary when AGOA was signed, and are only now building export capacity and an understanding of the U.S. market. A business representative from the apparel industry said that logistics remain a challenge to exports because of high transportation costs that may discourage potential buyers. He noted the high cost of moving shipments from Ethiopia to the port in Djibouti, and also that lengthy transport schedules create longer lead times to fill orders. Like other members of the private sector we spoke to, he said that local companies have limited access to capital, and that obtaining financing requires a number of bureaucratic steps. USAID works with some host governments to develop strategic approaches to increasing AGOA utilization. As previously noted, one of USAID’s core development objectives is to promote sustainable, broad-based economic growth by helping developing countries increase their exports through trade capacity building. AGOA legislation also directs the President, in part, to target assistance to sub-Saharan African governments. USAID has identified trade hubs as primary implementers of TCB assistance to African governments and organizations, among others. USAID, partly through the trade hubs, has supported AGOA utilization by collaborating with African governments to develop AGOA-specific or national export strategies. In the strategy documents, host governments may identify high-priority trade and investment sectors, constraints related to AGOA utilization, and specific steps to increase exports under AGOA. For example, the East Africa trade hub participated in a 2013 workshop with officials from the Mauritian government, and helped the host government develop and publish its AGOA-specific national strategy, which aims to support the ability of Mauritian firms to sell to the U.S. market and leverage opportunities that AGOA provides. 
Data from USAID also indicate that trade hubs provided input toward strategies that the Gambia and Senegal have developed. We previously identified the importance of strategic planning efforts in results-oriented management. Specifically, we found that such strategic planning efforts are the starting point and foundation for defining what the organization seeks to accomplish and for identifying the strategies it will use to achieve desired results. Furthermore, developing a strategic plan can help clarify organizational priorities and unify staff in the pursuit of shared goals. If done well, strategic planning fosters informed communication between the organization and its stakeholders. In the case of AGOA utilization, this may include collaboration between U.S. and host governments, and the private sector. Literature and trade hub reports have noted the potentially positive effects such strategies can have on countries’ utilization of AGOA. USAID, through its trade hubs, has stated that identifying strategic needs and priorities through national strategies can bolster AGOA utilization. For example, in a 2013 report prepared for USAID, the West Africa trade hub noted the importance of a strategy as part of leveraging trade preferences, and the role that USAID and other U.S. agencies can play in encouraging strategy development. The report cited countries, including Burkina Faso and Sierra Leone, that have implemented strategies as tools to better utilize AGOA. Similarly, the East Africa trade hub reported that national strategies reflect host governments’ strategic needs in approaching the U.S. market and outline ways governments can utilize AGOA. According to contractors who implement activities at one of the trade hubs, export strategies allow governments to target specific sectors and work with the private sector toward a unified approach. CARANA Corporation, West Africa Trade Hub Final Report, a report prepared at the request of USAID, August 2013. 
African leaders have also articulated the importance of strategic approaches to enhancing AGOA utilization. At the 2011 AGOA Forum held in Zambia, an African trade minister underscored the importance of clear AGOA national strategies because they help ensure that countries assess export promotion challenges in a coordinated manner, and U.S. agency officials said that African leaders had committed to developing more AGOA-related strategies at the August 2014 Africa Leaders Summit. Furthermore, in a January 2014 testimony to USITC, a senior African official said AGOA countries have recognized the need to address various supply-side constraints that have hindered AGOA utilization, including poor infrastructure, by developing a coordinated, strategic response at the national level. This official also noted that this strategic exercise would enable AGOA countries to identify supply-side constraints and potential responses, and may ultimately enable the U.S. government to better support African countries. For example, the Ethiopian government has drafted a national strategy that identifies high-priority industries that align with AGOA trade preferences. While this document is still in draft form, the Ethiopian trade ministry notes that its AGOA national strategy is an important part of the country’s overall growth plan, given that AGOA is a useful market opportunity to achieve Ethiopia’s larger economic growth objectives. According to officials, the government also plans to establish an AGOA center to oversee implementation of the strategy. Though USAID has made efforts to work with host governments on developing strategic approaches to AGOA utilization, only 14 of the 41 current AGOA countries have such strategies in place, according to data from USAID (see fig. 7). According to a white paper from the United Nations Economic Commission for Africa and the African Union, the lack of a strategic approach on AGOA is a significant reason for gaps in AGOA utilization. 
A 2011 Brookings Institution report identified the lack of an AGOA national strategy as one factor inhibiting Ghana from fully benefitting from AGOA. According to officials and information from trade hubs, AGOA countries may lack these strategies because such efforts have not been prioritized in work plans, and because of an absence of political will among host governments. Specifically, in its work plans for all three trade hub contracts, USAID has noted the importance of coordinating with bilateral USAID missions, regional entities in sub-Saharan Africa, and host governments, among others. However, USAID only included the development of national strategies as a high-priority task for the East Africa trade hub, and not for the West and Southern Africa trade hubs. Furthermore, a lack of host government interest could influence the effectiveness of such efforts. A West Africa trade hub report noted that political will is needed to sustain strategy development efforts in those AGOA countries that lack such strategies. USAID officials also said that host governments must request and initiate the process of developing these strategies, and the lack of political will to motivate these efforts may be one reason some AGOA countries do not have such a strategic approach. For example, according to literature, some USAID TCB assistance programs in sub-Saharan Africa have faced challenges in gaining buy-in from regional participating governments and in ensuring agreement on the direction and pace of adoption of relevant processes and procedures. USAID officials acknowledged they could do more to work with host governments on strategy development to enhance AGOA utilization, and officials said they are starting to work with regional entities to develop strategic approaches to export promotion. The U.S. government has acknowledged the importance of providing TCB assistance in support of AGOA, and U.S. 
agencies have obligated approximately $5 billion in TCB assistance for AGOA countries over a 13-year period. As Congress deliberates reauthorization of the AGOA program, policymakers have expressed interest in enhancing eligible countries’ ability to utilize the program and ensuring that TCB assistance is aligned with the program’s objectives. A strategic approach to AGOA utilization can help eligible countries leverage U.S. TCB efforts and trade preferences under AGOA, while a lack of a strategic approach to AGOA can result in gaps in program utilization. Although USAID has worked with some host governments from AGOA countries to develop strategic approaches to program utilization, only about a third of the 41 AGOA countries currently have strategies that reflect AGOA priorities. USAID has not prioritized the development of these strategies for all three of its regional trade hubs, which play a significant role in implementing TCB in AGOA countries and working with host governments. A lack of political will among host governments may also pose challenges to developing and sustaining strategic approaches related to AGOA. In developing these approaches, eligible countries can identify trade barriers that inhibit AGOA utilization and articulate a commitment to addressing these barriers. Such strategies could also assist U.S. agencies in ensuring that TCB assistance is aligned with host government priorities and is addressing gaps in AGOA utilization. To enhance eligible countries’ ability to utilize the AGOA program and ensure that TCB assistance is aligned with program objectives, we recommend that the Administrator of USAID work with more host governments to develop strategic approaches to promoting exports under AGOA. We received written comments on a draft of this report from USAID, which are reprinted in appendix III. USAID stated that it agreed with the report’s overall findings, conclusions, and recommendations. 
USAID also made a number of observations and comments related to the findings and recommendation in the report. USAID commented that our report does not provide sufficient data to demonstrate the linkage between host government strategic approaches and AGOA utilization. However, as we point out in our report, such strategies can have potentially positive effects on countries’ utilization of AGOA. We cite prior GAO work that notes the importance of strategic planning efforts in results-oriented management; and literature, trade hub reports, and statements from African leaders that also emphasize the importance of strategic approaches to enhancing AGOA utilization. USAID stated that our report does not include the point that the productivity of African businesses is negatively impacted by a lack of access to reliable electricity. However, our report does in fact note observations from our field work in Ghana and Ethiopia regarding challenges resulting from lack of access to power. Finally, USAID explained that its trade hubs are designed as regional programs and therefore often prioritize regional efforts over bilateral strategy development. In our report we acknowledge the regional focus of USAID-funded trade hubs and also note that USAID is starting to work with regional entities to develop strategic approaches to export promotion. Commerce, State, the Treasury, MCC, USITC, and USTR also received a draft copy of the report but did not provide formal comments. USAID, USITC, and USTR provided technical comments, which we have incorporated in the report, as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, State, and the Treasury; the Chief Executive Officer of MCC; the Administrator of USAID; the Chairman of USITC; the U.S. Trade Representative; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-8612 or GianopoulosK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to examine (1) U.S. government trade capacity building (TCB) assistance in support of the African Growth and Opportunity Act (AGOA), and (2) the extent to which the U.S. Agency for International Development (USAID) has made efforts to develop strategic approaches to AGOA utilization. To address both objectives, we interviewed officials from the Departments of Commerce, State, and the Treasury; the Millennium Challenge Corporation (MCC); and USAID, but focused on MCC and USAID for the purposes of this report because these agencies obligated the highest amounts of TCB funding from fiscal years 2001 through 2013. We also interviewed officials from the Office of the U.S. Trade Representative and the U.S. International Trade Commission (USITC), agencies that do not provide funding for U.S. TCB assistance but provided additional contextual information on AGOA and TCB. We reviewed documents including literature on AGOA and TCB; statements of work, evaluations, and annual reports for the three USAID-funded trade hubs; program documents for MCC activities in sub-Saharan Africa; and examples of AGOA-specific and national export strategies. We also conducted fieldwork in Ethiopia and Ghana, countries we selected because they represented a cross section of U.S. TCB assistance and are in different regions within sub-Saharan Africa, thereby also providing insight on two out of the three trade hubs. In each country, we interviewed U.S. agency officials, host government officials, representatives from the private sector who had insights on U.S. TCB assistance, and contractors implementing TCB activities. 
Our findings from these countries are not generalizable to the universe of all U.S. TCB activities. To examine U.S. government TCB assistance in support of AGOA, we reviewed documents from relevant U.S. agencies, including program descriptions and evaluations, and analyzed data on U.S. TCB funding to AGOA countries. We focused our analysis on the U.S. agencies that provided the highest amounts of TCB funding for AGOA countries from fiscal years 2001 through 2013. We analyzed data USAID provided on annual U.S. TCB obligations for activities in all AGOA countries from fiscal years 2001 through 2013 by year, agency, country, and TCB category. These data are reported in the U.S. government TCB database, but we requested data directly from USAID to facilitate our analysis of the data for the purposes of this report. We also relied on the data and information from the TCB database, such as TCB activity descriptions. In our analysis of TCB funding data, we built upon information collected for prior GAO reports on TCB that used data from the TCB database. Data from the TCB database were deemed reliable for our prior reports on TCB. For this report, we determined that the data were sufficiently reliable to identify TCB funding by agency, country, category, and year. Furthermore, in assessing the data, we interviewed key USAID officials responsible for administering the database and reviewed supporting documentation. To examine the extent to which USAID has made efforts to develop strategic approaches to AGOA utilization, we reviewed documents from relevant U.S. agencies, including program descriptions and evaluations, and information on AGOA-specific and national export strategies from U.S. agencies and host governments. In addition, we discussed the development of these strategic approaches with U.S. and foreign government officials. We conducted this performance audit from March 2014 to January 2015 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As we noted in 2011, the U.S. Agency for International Development (USAID) collects data to identify and quantify the U.S. government’s trade capacity building (TCB) activities in developing countries through an annual survey of U.S. agencies. The U.S. government TCB database defines the categories as follows: World Trade Organization (WTO) Accession and Compliance: support for countries to benefit from membership in the WTO, or to understand fully the benefits of membership. Also assistance to help countries in the WTO accession process meet the requirements of accession. This category includes assistance to meet the obligations of the specific WTO agreements, except for Agreements on Sanitary and Phyto-Sanitary Measures (SPS), Technical Barriers to Trade (TBT), Intellectual Property Rights (IPR), and Trade-related Procurement. Those four agreements benefit from TCB in their own categories. Sanitary and Phyto-Sanitary Measures: support for countries to meet SPS standards for trade and to comply with the WTO Agreement on SPS. Technical Barriers to Trade: support for countries to reduce technical barriers to trade and to comply with the WTO Agreement on TBT. Intellectual Property Rights: support for countries to observe international standards on intellectual property rights protection and to comply with the WTO Agreement on IPR. Trade-Related Procurement: support for increased trade related to government procurement and to comply with the WTO Agreement on Government Procurement. Trade Facilitation: generally defined as assistance in lowering the costs of engaging in, or eliminating obstacles to, international trade flows. 
Trade facilitation (for 2011) is a sum of the following four subcategories: Customs Operations: includes assistance to help countries modernize and improve their customs offices. Trade Promotion: includes assistance to increase market opportunities for developing country and transition economy producers. Enterprise Development: includes support to improve the associations and networks in the business sector, as well as to enhance the skills of business people engaged in trade. Also includes assistance to help countries acquire and use information technology to promote trade by creating business networks and disseminating market information. Free Trade Agreements (FTA) and Trade Integration: includes assistance to an FTA, a regional trade agreement (RTA), or an individual country that increases the ability of the RTA to facilitate trade. It can also include assistance to a potential member of an RTA that improves the analytical capacity of the country’s government with respect to RTA issues. Trade-Related Labor: assistance to support the enforcement of labor standards and worker rights, development of trade unions and dispute resolution mechanisms, strategies for workforce development and worker training, and the elimination of child labor. Financial Sector: support for financial sector work, monetary and fiscal policy, exchange rates, commodity markets, and capital markets. Trade-Related Infrastructure: assistance to establish trade-related telecoms, transport, ports, airports, power, water, and industrial zones. Environmental Sector Trade and Standards: assistance to establish environmental standards or to promote environmental technology. Competition Policy, Business Environment, and Governance: support for the design and implementation of antitrust laws, as well as of laws and regulations related to investment and investor protections. 
Includes support for legal and institutional reform to improve governance and make policies more transparent, and assistance to help the different agencies of a host country government function more effectively in the trade policy arena. Trade-Related Agriculture: support for trade-related aspects of the agriculture and agribusiness sectors. Trade-Related Services: includes support to help developing countries and transition economies increase their flows of trade in services. Services Trade Development is a sum of two subcategories: Trade-Related Services (excluding tourism): assistance to help countries develop trade in services in all sectors other than tourism, including financial services, energy, transportation, and education. Trade-Related Tourism: assistance to help countries expand their international tourism sectors, including eco-tourism. Other Trade Capacity Building: A small number of TCB activities did not fit in any of the above categories, including some activities of a crosscutting nature. These were categorized as “Other Trade Capacity Building.” In addition to the contact listed above, Juan Gobel (Assistant Director), Diana Blumenfeld, Farhanaz Kermalli, Farahnaaz Khakoo-Mausel, and Ben Sclafani made key contributions to this report. Godwin Agbara, Debbie Chung, Qahira El’Amin, Etana Finkler, Ernie Jackson, and Jill Lacey provided additional assistance. Foreign Assistance: USAID Should Update Its Trade Capacity Building Strategy. GAO-14-602. Washington, D.C.: Sept. 10, 2014. African Growth and Opportunity Act: Observations on Competitiveness and Diversification of U.S. Imports from Beneficiary Countries. GAO-14-722R. Washington, D.C.: July 21, 2014. Sub-Saharan Africa: Trends in U.S. and Chinese Economic Engagement. GAO-13-199. Washington, D.C.: Feb. 7, 2013. Foreign Assistance: The United States Provides Wide-ranging Trade Capacity Building Assistance, but Better Reporting and Evaluation Are Needed. GAO-11-727. Washington, D.C.: July 29, 2011. 
U.S.-Africa Trade: Options for Congressional Consideration to Improve Textile and Apparel Sector Competitiveness under the African Growth and Opportunity Act. GAO-09-916. Washington, D.C.: Aug. 12, 2009. International Trade: U.S. Trade Preference Programs: An Overview of Use by Beneficiaries and U.S. Administrative Reviews. GAO-07-1209. Washington, D.C.: Sept. 27, 2007. Foreign Assistance: U.S. Trade Capacity Building Extensive, but Its Effectiveness Has Yet to Be Evaluated. GAO-05-150. Washington, D.C.: Feb. 11, 2005.
Signed in 2000, AGOA directs the President to provide TCB assistance to sub-Saharan African governments and firms to promote exports and develop infrastructure, among other things. AGOA provides duty-free treatment for qualifying U.S. imports from eligible sub-Saharan African countries, a total of 41 countries as of December 1, 2014. From 2001 through 2013, U.S. agencies funded about $5 billion in TCB assistance to AGOA countries. GAO was asked to review various issues related to the ability of AGOA countries to utilize AGOA prior to its expiration on September 30, 2015. In this report, GAO examines (1) U.S. government TCB assistance in support of AGOA, and (2) the extent to which USAID has made efforts to develop strategic approaches to AGOA utilization. GAO focused on MCC and USAID because these two agencies accounted for nearly 90 percent of funding for TCB activities in AGOA countries from 2001 through 2013. GAO analyzed data on U.S. TCB assistance to AGOA countries in this period, reviewed agencies' funding and program documents, conducted interviews with officials who implement U.S. TCB assistance, and met with U.S. and foreign government officials and private sector representatives in Ethiopia and Ghana. Among U.S. agencies, the Millennium Challenge Corporation (MCC) and the U.S. Agency for International Development (USAID) have funded the majority of trade capacity building (TCB) assistance in support of the African Growth and Opportunity Act (AGOA) (see figure). MCC obligated nearly $3 billion in funding for TCB activities in 15 of the 41 countries eligible for AGOA (AGOA countries), with the majority of funds provided for trade-related infrastructure projects. For example, MCC obligated $176 million for a roads project in Mozambique that aimed to improve the transportation network, including access to markets and reduction of transport costs. 
USAID obligated approximately $1.6 billion in funding for TCB activities in 39 of the 41 AGOA countries, with the majority of funds provided for trade-related agriculture and infrastructure, and trade facilitation. For example, USAID funded activities to help exporters in East Africa build business linkages with U.S. markets through trade shows. Note: Funding amounts or percentages may not sum to totals because of rounding. USAID has worked with some host governments to develop strategic approaches to AGOA utilization; however, most host governments have not established such approaches. USAID-funded regional trade hubs in sub-Saharan Africa have supported AGOA utilization by, among other things, collaborating with some host governments to develop AGOA-specific or broader national export strategies. Trade hub evaluations and statements from host government officials show that identifying strategic needs and priorities through strategic approaches can bolster AGOA utilization and help assess challenges to expanding exports. In strategy documents, host governments may identify high-priority trade and investment sectors, constraints related to AGOA utilization, and specific steps to increase exports under AGOA. Lack of a strategic approach has been identified as a significant reason for gaps in AGOA utilization. As of December 2014, 14 of the 41 AGOA countries had strategies reflecting AGOA priorities. According to USAID officials, host governments must initiate the process of developing a strategy, and a lack of political will may pose challenges to such efforts. GAO recommends that the Administrator of USAID work with more host governments to develop strategic approaches to promoting exports under AGOA. USAID agreed with the recommendation.
Traditionally, DOD’s combat aircraft have used on-board electronic warfare devices called jammers for self-protection against radar-controlled weapons, including missiles and anti-aircraft artillery. These jammers emit electronic signals from the aircraft to try to impede or deny the threat radar’s ability to locate the aircraft. DOD’s existing self-protection jamming systems for its tactical aircraft have limitations against certain threats, and these threats are expected to become more capable. DOD has modified existing systems, such as the Air Force’s ALQ-131 used on the F-16 and the ALQ-135 on the F-15, and has developed a newer system, the Navy’s Airborne Self-Protection Jammer (ASPJ), which is being used on some F-14D and F/A-18C/D aircraft. As we have previously reported, however, testing after deployment has shown that the modified jammer systems have had problems, while operational testing of ASPJ and other jammers showed they were unable to meet effectiveness criteria against certain classified threats. In an attempt to overcome the limitations of the on-board jammers, the services are acquiring two new towed decoy systems, the ALE-50 and the RFCM, to enhance survivability against the radar-controlled threats. The ALE-50 towed decoy system is in production, while the future RFCM system is in development. The ALE-50’s towed decoy component generates and emits its own signals that are intended to lure an incoming radar-guided weapon away from the aircraft by presenting a more attractive target. To provide further improvement for selected Air Force and Navy aircraft, the RFCM is to provide more sophisticated techniques than the ALE-50. A jamming device called the techniques generator, carried on board the aircraft, produces jamming signals that are relayed by fiber-optic cable to the RFCM decoy for transmission. Both decoys are single-use systems. Once deployed from the aircraft, the decoy’s tow line is severed prior to return to base. 
Each aircraft is to carry multiple decoys, so if one is destroyed by enemy fire or malfunctions, another can be deployed. Therefore, substantial inventories of decoys are required to sustain potential combat operations. The services expect that these decoys will improve the survivability of their aircraft against radar-controlled threats compared to the current technique of emitting the jamming signals directly from the aircraft. Classified test results show that the ALE-50 towed decoy offers improved effectiveness against radar-controlled threats, including some threat systems against which self-protection jammers have shown little to no effectiveness. Moreover, the future RFCM decoy system is expected to further improve survivability due to its more sophisticated jamming techniques. Recognizing the potential offered by these towed decoy systems to overcome the limitations of using just on-board jammers, such as the ASPJ, the Air Force is actively pursuing the use of towed decoys for its current aircraft. It has made the necessary modifications to add the ALE-50 to the F-16, an aircraft slightly smaller than the Navy’s F/A-18C/D, and to the B-1, a much larger aircraft. The Air Force is also considering use of the RFCM decoy system on the F-15, which would use its existing on-board jammer instead of the techniques generator, and on the B-1, as well as several other aircraft. The Navy plans to equip only its future F/A-18E/F aircraft with a decoy system. The ALE-50 decoy system is to be used by the Air Force on 437 F-16 and 95 B-1 aircraft. In addition to the ALE-50 components such as the launcher and controller installed on the aircraft, the Air Force plans to procure 17,306 ALE-50 decoys to meet operational requirements. The Navy plans to buy 466 ALE-50 decoys. These will be used for F/A-18E/F testing and contingencies after the aircraft’s deployment until the RFCM decoy is available. The ALE-50 program cost is estimated at about $1.2 billion. 
The Navy’s estimated RFCM cost for its F/A-18E/F aircraft is about $2.6 billion. The Navy’s plan is to procure enough RFCM systems and spares to equip and support 600 of its planned buy of 1,000 F/A-18E/F aircraft. For 600 F/A-18E/F aircraft, the number of decoys to be procured to meet operational needs is 18,000. (These estimates predate the May 1997 decision of the Quadrennial Defense Review (QDR) to recommend a reduction in the number of F/A-18E/Fs.) The future RFCM decoy system is also being considered by the Air Force for its B-1 aircraft, part of its F-15 fleet, and several other Air Force manned and unmanned aircraft. If the Air Force buys the RFCM system for the B-1 and the F-15, which would use its existing onboard jammer instead of the RFCM techniques generator, the estimated cost, including 9,107 decoys, is about $574 million. In contrast with the Air Force, which intends to use decoys to improve the survivability of its current aircraft, current Navy combat aircraft will be at a comparative survivability disadvantage since they will not be provided with a decoy system. In particular, because F/A-18E/Fs will not be replacing all of the C/D models in the Navy/Marine Corps inventory in the foreseeable future, adding a towed decoy system to the F/A-18C/D potentially offers the opportunity to save additional aircraft and aircrew’s lives in the event of hostilities. In the year 2010, more than 600 of the Navy’s tactical fighter inventory objective of 1,263 aircraft will still be current generation fighters such as the F/A-18C/D. This will be true even if F/A-18E/Fs are procured at the Navy’s desired rates of as high as 60 per year. At the post-QDR suggested rate of 48 per year, almost 50 percent of the current generation aircraft will still be in the fleet in the year 2012. DOD and the Navy have done studies to determine whether towed decoys could improve the survivability of the F/A-18C/D. 
DOD’s Joint Tactical Electronic Warfare Study and an analysis conducted by the Center for Naval Analyses concluded that the addition of a towed decoy system to the F/A-18C/D would provide a greater increase in survivability for that aircraft than any jammer, including the ASPJ. In limited flight testing on the F/A-18C/D, the Navy demonstrated that the ALE-50 decoy could be deployed from either a wing station or the centerline station of the aircraft. While the Navy acknowledges that towed decoys can enhance aircraft survivability, it does not consider these flight tests to have been successful because of the following suitability concerns. According to the Navy, (1) the tow line can come too close to the horizontal tail or the trailing edge flap when deployed from a wing station, making it unsafe, or (2) the tow line can be burned off by the engine exhaust or separated by abrasion if deployed from the centerline station. The Navy’s report on the wing station testing stated that tow line oscillation led to lines breaking on several flights, but did not state that the decoy system was a flight safety risk nor that there was any contact with the horizontal tail or flaps. Concerning the centerline station tests, several tow lines were burned off or otherwise separated from the aircraft by abrasion during maneuvering flights. A reinforced tow line later solved these problems, and the Navy is continuing testing on the F/A-18C/D from the centerline station. Based on these test results, the Navy now intends to deploy the ALE-50 decoy from the centerline of the fuselage of the F/A-18E/F. The Navy also maintains that even if the decoy could be successfully deployed from the F/A-18C/D wing or centerline station, for actual operations, it could not afford to trade a weapon or fuel tank on a wing or centerline station for a towed decoy system. 
Further, the Navy considers modification of the C/D model’s fuselage for internal carriage of the decoy to be unaffordable due to volume, weight, power, and cooling constraints that would have to be addressed. The Air Force has modified a wing pylon to successfully deploy towed decoys from the F-16’s wing while avoiding major aircraft modifications and without sacrificing a weapons station or a fuel tank. The Navy, however, has not done the technical engineering analyses to determine the specific modifications necessary to accommodate a towed decoy on the F/A-18C/D either from the wing or the centerline without affecting the carriage capability unacceptably. Congress has expressed concerns regarding F/A-18C/D survivability. The Report of the Senate Appropriations Committee on the National Defense Appropriations Act for Fiscal Year 1997 directed the Navy to report on the advantages and disadvantages of using various electronic warfare systems to improve F/A-18C/D survivability. In addition, Congress provided $47.9 million in fiscal year 1997 funding not requested by DOD to buy 36 additional ASPJs for 3 carrier-deployed squadrons to meet contingency needs. The Navy could have addressed the congressional concern for C/D survivability in the required report by including analysis of the improvement offered by incorporating the ALE-50 and RFCM towed decoy systems. In completing the required report, however, the Navy did not include any analysis of survivability benefits from using towed decoys because it maintains, as described above, that there are unacceptable impacts associated with towed decoys on the F/A-18C/D. In commenting on a draft of this report, DOD agreed that towed decoy systems could enhance aircraft survivability, but stated the Navy had conducted an engineering analysis that concluded any installation option of a towed decoy on the F/A-18C/D has unacceptable operational and/or safety of flight impacts. 
In response to our request for this analysis, the Navy provided us with a paper discussing the feasibility of installing a towed system on the F/A-18C/D. This paper concluded that the options considered had risks or created operational concerns but did not conclude that these options were unacceptable. Furthermore, the paper did not consider all possible options. With regard to the safety of flight issue, the Navy stated that the decoy or towline might contact aircraft control surfaces such as the flaps or the horizontal stabilizers if deployed from a wing station. The Navy’s summary of wing station test results, however, does not show any evidence of such contact. The Navy has expressed no concern about a safety of flight issue when deploying the decoy along the aircraft’s centerline and continues to fly test missions with the towed decoy, deploying it from a pod on the centerline of an F/A-18D aircraft. Furthermore, the Navy intends to install the system in the fuselage and deploy towed decoys from the centerline of the E/F model aircraft. In addition, the Air Force incorporated the ALE-50 onto the F-16 without loss of a weapon station or fuel tank and without having to undertake major aircraft modifications, demonstrating that it is possible to adapt a towed decoy system to an existing aircraft without creating unacceptable tactical impacts. DOD did not concur with the recommendations that were set forth in a draft of this report. In the draft, we had suggested that (1) in preparing its congressionally required report, DOD consider F/A-18C/D aircraft upgraded with RFCM and ALE-50 towed decoy systems and (2) the Navy do the necessary engineering analyses of the modifications needed to integrate towed decoys into the F/A-18C/D and other current Navy aircraft. DOD completed the congressionally required report without implementing our first draft recommendation. 
We continue to believe, however, that the Navy needs to explore ways to improve the survivability of its current aircraft and, therefore, should perform a detailed engineering analysis of the modifications needed to adapt the towed decoy to the F/A-18C/D. DOD’s comments are reprinted as appendix I in this report. We recommend that the Secretary of Defense direct the Secretary of the Navy to make a detailed engineering analysis of the modifications needed to adapt the towed decoy to the F/A-18C/D. In light of the demonstrated improvement in survivability that analyses and test results indicate towed decoy systems can provide, and recognizing that in the year 2010 almost 50 percent of the Navy’s tactical fighter inventory will still be current generation fighter aircraft such as the F/A-18C/D, Congress may wish to direct the Navy to find, as it has done for its F/A-18E/F and the Air Force has done for the F-16, cost-effective ways to improve the survivability of its current aircraft. To accomplish our objective of determining whether towed decoys could improve survivability of Air Force and Navy aircraft, we examined DOD and contractor analyses of adding towed decoy systems and reviewed Air Force and Navy ALE-50 test results from testing on a variety of aircraft. We interviewed officials from the Office of the Secretary of Defense, the Navy, and the Air Force involved in the acquisition and testing processes of towed decoy systems. We also interviewed contractor personnel involved in the development, integration, and/or production of towed decoy systems. We performed our work at the Offices of the Secretaries of Defense, the Navy, and the Air Force; the F-15, F-16, and B-1 System Program Offices at the Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; the F/A-18 and Tactical Air Electronic Warfare Program Offices at the Program Executive Office for Naval Tactical Aviation, Naval Air Systems Command, Washington, D.C.; the 53rd Wing and Air Force Operational Test and Evaluation Detachment, Eglin Air Force Base, Florida; and selected contractor locations, including McDonnell-Douglas Aircraft, Lockheed-Martin, and Rockwell International. We performed our review from February 1996 to July 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Navy, and the Air Force; the Director, Office of Management and Budget; and other congressional committees. We will make copies available to others upon request. Please contact me at (202) 512-2841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II. Following are our comments on the Department of Defense’s (DOD) letter dated May 5, 1997. 1. Our draft report included references to the comparability of F/A-18E/F and C/D survivability, and it was provided to DOD for comment prior to the decision to produce the F/A-18E/F. As DOD states, this decision has now been made. Consequently, we have deleted references to the comparability of the F/A-18E/F and C/D models. The issue of F/A-18C/D survivability remains important, however, because E/F models will not replace all of the current C/D models in the inventory in the foreseeable future. 2. Test results for towed decoys on the F/A-18C/D and other information provided by DOD and the Navy do not support DOD’s statements. The safety of flight issue, according to the Navy, arises from the concern that the decoy or towline might contact aircraft control surfaces such as the flaps or the horizontal stabilizers if deployed from a wing station. The Navy’s summary of wing station test results does not show any evidence of such contact. 
According to the test report, the Navy did find that aircraft vortices behind the wing created aerodynamic instability in the towline, but the report does not conclude that this potentially jeopardized aircraft flight safety. Additionally, the Navy has expressed no concern about a safety of flight issue when deploying the decoy along the aircraft’s centerline, and use of a reinforced towline appears to have eliminated the burnoff/abrasion problem. Thus, the Navy continues to fly test missions with the towed decoy, deploying it from a pod on the centerline of an F/A-18D aircraft, and intends to install the system in the fuselage and deploy towed decoys from the centerline of the E/F model aircraft. This evidence indicates that the difficulties the Navy cites in installing towed decoys, including severe volume, weight, power, cooling, and aircraft aerodynamics issues, may not be insurmountable. As for unacceptable tactical impacts associated with towed decoy installation, the Air Force has overcome this problem on the F-16, and we presume that the Navy may also be able to find an integration solution for the F/A-18C/D that avoids unacceptable tactical impacts if it continues to pursue alternatives. The Navy did not abandon towed decoy installation for the F/A-18E/F because of early problems with abrasion and heat breaking the towline. Instead, it pursued alternatives. The solutions for the F-16 and F/A-18E/F do not have to be the only alternatives considered for the F/A-18C/D. 3. The Navy and DOD did provide us with additional information intended to bolster their broad assertion of unsuitability. 
However, the information provided was not an “engineering analysis” (implying a technical document of some depth), but was instead a rather superficial “installation feasibility study” that, while identifying risk areas associated with installing the towed decoy on the F/A-18C/D, does not conclude that all installation options have unacceptable operational and/or safety of flight impacts. 4. According to the Navy’s feasibility study, 220 pounds is the weight of the towed decoy system mounted in a pod. According to the same study, if the system’s launch controller is mounted in the aircraft’s fuselage, the bring-back weight is reduced by only 140 pounds. In any case, since studies and test results indicate the ALE-50 system can provide significant improvements in survivability, the Navy needs to determine whether loss of a relatively small amount of bring-back weight is worth the increased risk of losing aircraft to radar-guided missiles. Michael Aiken, Terrell Bishop, Paul Latta, Terry Parker, Charles Ward
GAO reviewed the Department of Defense's (DOD) acquisition plans for the ALE-50 towed decoy system and the Radio Frequency Countermeasures System (RFCM), which includes a more advanced towed decoy, focusing on whether towed decoys could improve the survivability of certain Navy and Air Force aircraft. GAO noted that: (1) DOD's effort to improve the survivability of its aircraft through the use of towed decoys has demonstrated positive results; (2) according to test reports and test officials, the ALE-50 has done very well in effectiveness testing and the future RFCM decoy system is expected to be even more capable; (3) the Air Force is actively engaged in efforts to field towed decoy systems on a number of its current aircraft, including the F-15, F-16, and B-1, while the Navy is planning towed decoys only for its future F/A-18E/F; (4) in the year 2010, almost 50 percent of the Navy's tactical fighter inventory will still be current generation fighter aircraft such as the F/A-18C/D, even if new F/A-18E/Fs are procured at the rates desired by the Navy between now and then; and (5) improving the survivability of the F/A-18C/D, as well as other current Navy and Marine Corps aircraft, potentially offers the opportunity to save additional aircraft and aircrew's lives in the event of future hostilities and also addresses congressional concerns expressed for F/A-18C/D survivability.
Although DHS reported many efforts under way and planned to improve the cyber content of sector-specific plans, sector-specific agencies have yet to update their respective sector-specific plans to fully address key DHS cyber security criteria. For example, of the 17 sector-specific plans, only 9 have been updated. Of these 9 updates, just 3 addressed missing cyber criteria, and those 3 involved only a relatively small number (3 or fewer) of the criteria in question. Sector-specific agencies did not fully address missing cyber criteria in their plans in large part for the following reasons:

They were focused more on the physical than on the cyber security aspects of the criteria in preparing their plans.

They were unaware of the cyber criteria shortfalls identified in 2007.

DHS’s guidance on updating sector plans did not specifically request that the agencies update the cyber security aspects of their plans.

The continuing lack of plans that fully address key cyber criteria has reduced the effectiveness of the existing sector planning approach and thus increases the risk that the nation’s cyber assets have not been adequately identified, prioritized, and protected. Most sector-specific agencies developed and identified in their 2007 sector plans those actions (referred to by DHS as implementation actions) essential to carrying out the plans; however, since then, most agencies have not updated the actions and reported progress in implementing them as called for by DHS guidance. Specifically, in response to 2006 guidance that called for agencies, in developing implementation actions, to address three key elements (action descriptions, completion milestones, and responsible parties), most sectors initially developed implementation actions that fully addressed the key elements. 
However, while 2008 guidance called for implementation actions to be updated and for sector reports to include progress reporting against implementation action milestone commitments, only five sectors updated their plans and reported on implementation progress. DHS attributed this in part to the department not following up and working to ensure that all sector plans are fully developed and implemented in accordance with department guidance. The lack of complete updates and progress reports is further evidence that the sector planning process has not been effective and thus leaves the nation in the position of not knowing precisely where we stand in securing cyber-critical infrastructures. Although DHS reported many efforts under way and planned to improve the cyber content of sector-specific plans, sector-specific agencies have made limited progress in updating their sector-specific plans to fully address key cyber elements. Further, although the agencies produced narratives on sector activities, they have not developed effective implementation actions and reported on whether progress is being made in implementing their sector plans. This means that, as a nation, we do not know precisely where we are in implementing sector plans and associated protective measures designed to secure and protect the nation’s cyber and other critical infrastructure, despite having invested many years in this effort. This condition is due in part to DHS not making sector planning a priority and, as such, not managing it in a way that fully meets DHS guidance. These conclusions, taken as a whole, further raise fundamental questions about whether the current approach to sector planning is worthwhile and whether there are options that would provide better results. 
Consequently, it is essential that federal cyber security leaders (including DHS and the to-be-appointed Cybersecurity Coordinator) exert their leadership roles in this area by, among other things, determining whether it is worthwhile to continue with the current approach as implemented or whether proposed options would provide more effective results. To do less means the nation’s critical infrastructure sectors will continue to be at risk of not being able to adequately protect their cyber and other critical assets or be prepared to identify and respond to cyber threats and vulnerabilities. We recommend that the Secretary of Homeland Security, consistent with any direction from the Office of the Cybersecurity Coordinator, assess whether the existing sector-specific planning process should continue to be the nation’s approach to securing cyber and other critical infrastructure and, in doing so, consider whether proposed and other options would provide more effective results. If the existing approach is deemed to be the national approach, we also recommend that the Secretary make it, including the cyber aspects, an agency priority and manage it accordingly. This should include collaborating closely with other sector-specific agencies to develop sector-specific plans that fully address cyber-related criteria in the next release of the plans, and sector annual reports that (1) include updated implementation actions and associated milestones and (2) report progress against plan commitments and timelines. DHS concurred with our recommendations but took exception to certain report facts and conclusions that it said formed the basis for our recommendations. Specifically, in an email accompanying its written response (which was signed by the Director, Departmental GAO/OIG Liaison Office and is reprinted in appendix II), DHS said it concurred with our recommendation. 
In its written response, DHS added that it supported continually assessing the effectiveness of the sector approach and identifying and implementing improvements as appropriate. The department also stated in its written response that alternative options can be explored and implemented along with the current sector approach, rather than treating the decision as a binary choice between continuing the existing sector-specific planning approach and pursuing other options. We agree such efforts can be pursued in parallel and that doing them in this manner would be consistent with our recommendations. The department also commented that the report does not give due consideration to many of the ongoing sector and cross-sector cyber security activities identified in the annual reports and briefed to us. We recognize that DHS has multiple ongoing efforts to improve critical infrastructure protection (CIP) planning and implementation, and our report conclusions state this point. While our report, for the sake of brevity, does not include all of DHS’s efforts, it does include illustrative examples throughout as part of giving a fair and balanced view of DHS’s efforts in this area. Notwithstanding the concurrence discussed above, DHS in its written response took exception to our report’s facts and conclusions in nine areas—referred to by DHS as general items. Each of these general items, along with our response, is summarized below. General item 1: With regard to our report section that states that the sector-specific agencies have yet to update their respective plans to fully address key cyber security criteria as called for by DHS, the department commented that it established a risk management framework (as part of the 2006 National Infrastructure Protection Plan, or NIPP) which called for cyber and other elements (i.e., human, physical) to be addressed.
DHS added that its 2006 SSP guidance did not call for these elements to be addressed separately in the plans and at that time GAO had not identified the 30 cyber criteria in DHS’s guidance; therefore, when the 2007 SSPs were issued they did not fully address the 30 cyber criteria (which is consistent with our October 2007 report findings). To address this situation, DHS said it revised the NIPP in early 2009 to, among other things, provide for more robust coverage of cyber security using as a basis the 30 cyber criteria identified by GAO. In addition, in its guidance to the sector agencies in developing their 2010 SSPs, DHS directed the agencies to update their plans using the revised NIPP and in doing so, to fully address the 30 GAO-identified cyber criteria. GAO response: It is a positive development that DHS has issued guidance directing the sector agencies to fully address missing cyber criteria as part of having the sectors rewrite their SSPs in 2010. In addition, while we agree with DHS that its 2006 guidance did not call for cyber to be addressed separately in each SSP section, it is important to point out that DHS’s 2006 guidance nonetheless called for the sectors to address in the SSPs how they planned to secure the cyber aspects of their critical infrastructures. Consequently, the 2007 SSPs were to have addressed cyber in order to be in compliance with DHS’s guidance. In 2007, we initiated a review to assess the extent to which these plans addressed cyber. As part of that review, we analyzed the 2006 guidance and identified 30 cyber-related criteria that the critical infrastructure sectors were to address in their SSPs. Our analysis of the plans found them to be lacking in the cyber area and we subsequently recommended that DHS request that by September 2008, the sector agencies update their SSPs to address missing cyber-related criteria. DHS agreed with this recommendation, and stated that the department had initiated efforts to implement it. 
However, in following up on this recommendation and analyzing the cyber content of the sectors’ 2008 SSP updates (which was the first objective of this report), we found that only 3 of the 17 sectors had updated their plans to address missing criteria. General item 2: Regarding the section of our report stating that sector-specific agencies did not fully address missing cyber criteria in their plans in part because they were unaware of the cyber criteria shortfalls identified in our 2007 report, DHS described several initiatives it had taken to inform the agencies of their planning shortfalls. GAO response: We recognize that DHS has taken actions to inform the agencies of the shortfalls identified in our 2007 report. Accordingly, we cited illustrative examples of such actions throughout our report. Nonetheless, when we interviewed sector agency officials, several stated that they were unaware of the GAO-identified shortfalls, which raises questions about the effectiveness of DHS’s efforts. General item 3: DHS stated that while the SSPs have not been fully updated to include ongoing and planned cyber security activities, it does not mean there is a lack of cyber security planning in the sectors or that the planning to date has been ineffective. DHS also reiterated its earlier point that our report does not take into account many of its ongoing activities in the sector related to cyber security. In addition, the department commented that all the sectors reported on their progress in the 2008 annual reports. GAO response: We recognize that DHS has had many ongoing efforts related to improving the cyber content of SSPs, and illustrative examples are provided throughout our report. However, the sector-specific agencies’ limited progress in addressing missing cyber content in their SSPs indicates a lack of effectiveness of planning. Specifically, of the 17 sector-specific plans, only 9 have been updated.
Of these 9 updates, just 3 addressed missing cyber criteria, and those 3 only involved a relatively small number (3 or fewer) of the criteria in question. In our view, this continuing lack of plans that fully address key cyber criteria has reduced the effectiveness of the existing sector planning approach and thus increased the risk that the nation’s cyber assets have not been adequately identified, prioritized, and protected. Further, while we agree with DHS that the sectors reported aspects of progress in the 2008 annual reports, only five sectors updated and reported on the extent of progress in carrying out their implementation actions as called for by DHS guidance, while the other 12 did not. This level of reporting is not sufficient for evaluating sector-wide progress and raises concerns about the effectiveness of these annual reports as a tool to measure progress. General item 4: DHS commented that (1) we expanded the scope of this engagement beyond the initial focus on coverage of cyber security in the SSPs to encompass the entire sector planning approach and that DHS was not asked to provide a broader update on the public-private partnership, and (2) our draft report did not include information on DHS’s numerous ongoing activities with the agencies and sectors related to cyber security. GAO response: With regard to the first comment, the focus of our engagement was on the cyber security aspects of the sector-specific plans and progress reporting, which are an important part of the sector planning approach. Consequently, even when taking into consideration DHS’s ongoing activities with the agencies and sectors related to cyber security, the planning and reporting shortfalls we identified indicate a lack of effectiveness with the current sector approach. Regarding DHS’s second comment, we recognize that DHS has multiple ongoing efforts to improve CIP planning and implementation, and our report includes illustrative examples of DHS’s efforts to do so.
As a case in point, on July 27, 2009, we briefed DHS using the presentation slides in this report and updated the slides to incorporate examples (in addition to the ones we had already included in the briefing) that DHS described to us during that meeting. Although DHS has many ongoing efforts related to improving the cyber content of SSPs, our analysis showed that there had been limited progress in addressing missing cyber content in the SSPs since our 2007 recommendation; this indicates to us that the planning process lacks effectiveness, which is why we recommended that DHS assess whether improvements are needed to the current process. General item 5: In regard to our report stating that DHS guidance calls for the sector agencies to annually review and update as appropriate their sector plans, which serve as a means to provide an interim snapshot of where agencies stand in addressing their gaps and is why we used it as a basis to assess progress, DHS said the SSPs are intended to be strategic, three-year plans and are not meant to provide a snapshot of where agencies stand in addressing their gaps and should not be used as a basis to assess progress in CIP protection. GAO response: Our report acknowledges that the SSPs are high-level strategic plans and the sector annual reports serve as the primary means of assessing progress in improving CIP protection. Specifically, as stated in our report, the annual reports are used to, among other things, capture changes in sector programs and assess progress made against goals set in the SSPs. However, it should be noted that annual updates to the SSPs also include information on progress being made against SSP goals and as such serve as a source of evidence on where agencies stand in addressing their gaps and provide a basis to assess progress in CIP protection. 
Specifically, the 2008 updates we reviewed and analyzed included key information on what sector agencies had (or had not) done to address missing cyber security content that we identified in their 2007 SSPs. General item 6: In response to our reporting that most agencies had not updated their implementation actions and reported progress in implementing them as called for by DHS guidance, DHS commented that many of the implementation actions were one-time actions that were completed in 2007 or 2008, and that others are of an ongoing, continuous nature. The department added that since the vast majority of these items were completed, DHS made adjustments in 2009 to the reporting process to more accurately capture the progress of CIP efforts, and that DHS is now working with the sectors toward the development of outcome-based metrics designed to measure the beneficial value of activities in mitigating CIP risks. GAO response: We recognize that many of the implementation actions were one-time or ongoing actions, but DHS’s guidance nonetheless called for the sectors to update the actions and report on the extent of progress in achieving the actions. Further, we agree that DHS has made recent positive changes to its reporting processes to more accurately capture progress. However, as noted in our report, most sectors had not reported in their 2008 sector annual reports that their implementation actions were completed, which showed that the existing progress reporting process was not fully effective. General item 7: In response to our reporting that DHS’s lack of follow-up to address SSP planning shortfalls showed it was not making sector planning a priority, the department stated that it (1) is actively engaged with the agencies and sectors, (2) assists the sectors with planning and reporting on an ongoing basis, and (3) continually evaluates and improves these processes with input from the sectors.
GAO response: We recognize that DHS has multiple ongoing efforts to improve CIP planning and implementation, and our report includes illustrative examples of DHS’s efforts. Despite these efforts, DHS’s limited progress in addressing missing cyber content in the SSPs since our 2007 recommendation and the lack of updated implementation actions and progress reporting—coupled with the department’s limited follow-up to correct these conditions—led us to conclude that DHS is not making sector planning a priority. General item 8: DHS stated that although our report cited the work and studies of an expert commission and the President’s cybersecurity working group, including the issues they raised with the current sector planning approach, we did not discuss the reports with the department. GAO response: On July 27, 2009, we briefed DHS on our findings, conclusions, and recommendations, which included descriptions of the work performed by these two groups. Specifically, in advance of our meeting, we provided the department with a draft of our briefing presentation slides for review and then met to discuss each slide of our presentation, including those addressing the work of these two expert groups. General item 9: In citing our recommendation that calls for DHS to collaborate closely with the sector-specific agencies to develop SSPs that fully address cyber-related criteria, the department stated this collaboration has already begun as part of the department’s current effort to have the sector agencies update their SSPs for issuance in 2010. GAO response: This effort to collaborate with the agencies is consistent with our recommendations. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies of this report to interested congressional committees, the Secretary of Homeland Security, and other interested parties.
We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact Dave Powner at 202-512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The nation's critical infrastructure relies extensively on computerized information technology (IT) systems and electronic data. The security of those systems and information is essential to the nation’s security, economy, and public health and safety. To help address critical infrastructure protection, federal policy established a framework for public and private sector partnerships and identified 18 critical infrastructure sectors (e.g., Banking and Finance; Information Technology; Telecommunications; Energy; Agriculture and Food; and Commercial Facilities). The Department of Homeland Security (DHS) is a key player in these partnerships and is responsible for issuing guidance to direct the sectors to develop plans addressing how key IT systems and data are to be secured, commonly referred to as cyber security. In June 2006, DHS issued the National Infrastructure Protection Plan (NIPP) as a road map for how DHS and other relevant stakeholders are to enhance the protection of critical infrastructure and how they should use risk management principles to prioritize protection activities within and across the sectors in an integrated, coordinated fashion. Lead federal agencies—referred to as sector-specific agencies—are responsible for coordinating critical infrastructure protection efforts with public and private stakeholders within each sector. 
For example, the Department of the Treasury is responsible for the banking and finance sector while the Department of Energy is responsible for the energy sector. Further, the NIPP called for the lead federal agencies to develop sector-specific plans and sector annual reports to address how the sectors would implement the national plan, including how the security of cyber and other (physical) assets and functions was to be improved. More specifically, it stated that the sector plans were to, among other things, describe how the sector will identify and prioritize its critical cyber and other assets and define approaches to be taken to assess risks and develop programs to protect these assets; and sector annual reports were to provide status and progress on each sector’s efforts to carry out the sector plans. In response, the sector-specific agencies developed and issued plans for their sectors in May 2007. Subsequently, in examining these initial plans to determine the extent to which they addressed cyber security, we reported in October 2007 that none of the plans fully addressed all 30 cyber security-related criteria we identified in DHS guidance (in performing that work, we (1) analyzed DHS guidance provided to the critical infrastructure sectors that stated how the sectors should address cyber topics in their sector-specific plans, (2) identified 30 cyber-related criteria, and (3) shared them with responsible DHS officials who largely agreed that these were the correct criteria to use), and recommended that DHS request that by September 2008 the sector-specific agencies’ plans address the cyber-related criteria that were only partially addressed or not addressed at all. Since then, an expert commission—led by two congressmen and industry officials—studied and reported in late 2008 on the public-private partnership approach, including sector planning and other aspects of U.S. cyber security policy.
More recently, the President established a White House cyber security working group that conducted and completed a “60-day” review of U.S. cyber policy, including public-private partnerships and sector planning. The review (1) found that while sector and other groups involved in the partnership performed valuable work, there were alternative approaches for how the federal government could work with the private sector, and recommended that these options be explored; and (2) recommended, among other things, establishing a Cybersecurity Coordinator position within the White House to develop a new U.S. cyber policy and to coordinate cyber security efforts across the federal government. Center for Strategic and International Studies, Securing Cyberspace for the 44th Presidency, A Report of the CSIS Commission on Cybersecurity for the 44th Presidency (Washington, D.C., December 2008); and The White House, Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure (Washington, D.C., May 29, 2009). As agreed, our objectives were to (1) determine the extent to which sector plans have been updated to fully address cyber security criteria and (2) assess whether these plans and related reports provide for effective implementation. For the first objective, we met with the sector-specific agencies to obtain updates to the May 2007 initial plans issued for the 17 critical infrastructure sectors. We then analyzed any updated plans using the 30 cyber criteria we identified in DHS guidance on how such plans were to be developed. Attachment I shows the 30 criteria (organized by eight major reporting sections called for in the DHS guidance). In particular, we focused on assessing the cyber criteria not fully addressed in the May 2007 plans. Currently, there are 18 sectors; however, the Critical Manufacturing sector was established in 2008 and has not yet completed a sector-specific plan.
Objectives, Scope, and Methodology

In analyzing the updated plans against the 30 criteria, we categorized the extent to which the plans addressed criteria using the following:

fully addressed: the plan specifically addressed the cyber-related criteria;
partially addressed: the plan addressed parts of the criteria or did not clearly address the criteria; and
not addressed: the plan did not specifically address the cyber-related criteria.

Further, we also interviewed responsible sector-specific agency officials to, among other things, verify our understanding of their updated sector plans and to validate the accuracy of our analyses of the extent to which additional cyber-related criteria had been addressed in them. For the second objective, we identified requirements in DHS guidance that specified how the sectors were to update and report on their progress in carrying out planned actions—referred to by the department as implementation actions—and compared these requirements to what the sectors had reported in their 2008 annual reports. We focused on the implementation actions because they are important for reporting and assessing the progress and effectiveness of the sector-specific plans. Where gaps existed, we collaborated with the sector officials to obtain any additional information that would fulfill the requirements and to determine the cause and impact of any remaining gaps. The Critical Manufacturing sector did not have an annual report. Although DHS reported many efforts under way and planned to improve the cyber content of sector-specific plans, sector-specific agencies have yet to update their respective sector-specific plans to fully address key DHS cyber security criteria. For example, of the 17 sector-specific plans, only 9 have been updated. Of these 9 updates, just 3 addressed missing cyber criteria, and those 3 involved only a relatively small number (3 or fewer) of the criteria in question.
Sector-specific agencies did not fully address missing cyber criteria in their plans in large part due to the following: They were focused more on the physical rather than the cyber security aspects of the criteria in preparing their plans. They were unaware of the cyber criteria shortfalls identified in 2007, and DHS’s guidance on updating sector plans did not specifically request the agencies to update the cyber security aspects of their plans. Most sector-specific agencies developed and identified in their 2006 sector plans those actions—referred to by DHS as implementation actions—essential to carrying out the plans; however, since then, most agencies have not updated the actions and reported progress in implementing them as called for by DHS guidance. Specifically, in response to 2006 guidance that called for agencies in developing implementation actions to address three key elements (e.g., action descriptions, completion milestones), most sectors initially developed implementation actions that fully addressed the key elements; however, while 2008 guidance called for implementation actions to be updated and for sector reports to include progress reporting against implementation action milestone commitments, only five sectors updated their plans and reported on progress against implementation actions. DHS attributed this in part to the department not following up and working to ensure that all sector plans are fully developed and implemented in accordance with department guidance. The lack of complete updates and progress reports is further evidence that the sector planning process has not been effective and thus leaves the nation in the position of not knowing precisely where it stands in securing its cyber and other critical infrastructure. Not following up to address these conditions also shows DHS is not making sector planning a priority. 
Further, the recent studies by the President’s working group and expert commission also identified shortfalls in the effectiveness of the current public-private partnership approach and related sector planning and offered options for improving the process. Given this, it is essential that DHS determine whether the current process should continue to be the national approach and thus worthy of further investment. Accordingly, we are making recommendations to the Secretary of Homeland Security, consistent with any direction from the Office of the Cybersecurity Coordinator, to assess whether the existing sector-specific planning processes should continue to be the nation’s approach to securing cyber and other critical infrastructure. If the existing approach is deemed to be the national approach, we also recommend that the Secretary make it an agency priority and manage it accordingly, including collaborating closely with other sector-specific agencies to develop (1) sector plans that fully address cyber-related criteria and (2) sector annual reports that include implementation actions and milestones and progress reporting against plan commitments and timelines. In oral and written comments on a draft of this briefing, DHS officials, including the Director of Infrastructure Protection’s Partnership and Outreach Division, which is responsible for sector-specific planning, commented on two areas. Specifically, they stated that the sector agencies had made more progress in implementing cyber-related criteria than reported in our briefing due to other ongoing DHS and sector efforts outside the sector plans and sector annual reports (implementation actions), which were the focus of the briefing. For example, DHS officials said its cyber division works regularly with many sectors on cyber assessments, exercises, and information sharing.
While on the surface these may appear to improve cyber security, the officials did not show how these activities helped the agencies address missing cyber-related criteria or effectively implement their plans. The officials also said that focusing on the agencies’ efforts the year after they issued their sector plans is premature as the agencies have until 2010 to rewrite and reissue their next sector plans. This notwithstanding, DHS’s guidance calls for the sector agencies to annually review and update as appropriate their sector plans, which is a means to provide an interim snapshot of where agencies stand in addressing their gaps and is why we used it as a basis to assess progress. Consistent with the Homeland Security Act of 2002, Homeland Security Presidential Directive-7 identified DHS as the principal federal agency to lead, integrate, and coordinate implementation of efforts to protect critical infrastructure and key resources; and lead federal agencies, referred to as sector-specific agencies, as responsible for coordinating critical infrastructure protection efforts with the public and private stakeholders in their respective sectors. It also required DHS to develop a plan that outlines national goals, objectives, milestones, and key initiatives necessary for fulfilling its responsibilities for physical and cyber critical infrastructure protection. In 2006, DHS issued the plan—commonly referred to as the NIPP—which, in addition to addressing the above, is to serve as a road map for how DHS and other relevant stakeholders are to use risk management principles to prioritize protection activities within and across sectors in an integrated, coordinated fashion. Further, the NIPP required the lead agencies of the 17 critical infrastructure sectors to develop a sector-specific plan (SSP) to address how the sector’s stakeholders would implement the national plan and how each sector would improve the security of its assets, systems, networks, and functions.
In addition, as required by the NIPP, the sector-specific agencies are to provide updates on sector progress with their SSPs, including efforts to identify, prioritize, and coordinate the protection of the sector’s critical infrastructure, to DHS on an annual basis. DHS is responsible for incorporating these reports into an overall critical infrastructure/key resources report, called the National Critical Infrastructure/Key Resources Protection Annual Report, which is due to the Executive Office of the President by September of each year. Sector-specific agencies are to work in coordination with relevant government and private-sector representatives to develop and update the SSPs. Table 1 shows the designated agency for each sector. The sector-specific plans are to describe how the sector will identify and prioritize its critical assets, including cyber assets such as networks; identify the approaches the sector will take to assess risks and develop programs to manage and mitigate risk; define the security roles and responsibilities of members of the sector; and establish the methods that members will use to interact and share information related to the protection of critical infrastructure. In response, the sector-specific agencies developed and issued SSPs for their sectors in May 2007. Subsequently, we examined these plans to determine the extent to which they addressed cyber security and reported in October 2007 on the extent to which the sectors addressed aspects of cyber security in their plans. Specifically, we reported that the results varied in that none of the plans fully addressed all 30 cyber security-related criteria. We also reported that several plans—including the information technology and telecommunications sectors—fully addressed many of the criteria and others—such as agriculture and food and commercial facilities—were less comprehensive. 
Further, we recommended that DHS request that by September 2008 the sector-specific agencies’ plans address the cyber-related criteria that were only partially addressed or not addressed at all. In its October 2007 response to our report, DHS agreed with our recommendation and stated it had initiated actions to implement it. Since our 2007 report, an expert commission (led by two congressmen and industry officials) and a White House working group (established by the President) studied and reported on the public-private partnership approach and related issues such as sector planning as well as other aspects of U.S. cyber security policy. Specifically, in August 2007, a commission—commonly referred to as the Commission on Cybersecurity for the 44th Presidency—was established to (1) examine the adequacy of U.S. cyber strategy, including public-private partnerships and the sector approach, and (2) identify areas for improvement. In December 2008, the commission reported, among other things, that the current public-private partnership and sector planning approach had serious shortcomings such as overlapping roles and responsibilities and duplication of effort. The commission made 25 recommendations aimed at addressing these and other shortfalls with the strategy and its implementation. In February 2009, the President directed the National Security Council and the Homeland Security Council to conduct a comprehensive “60-day review” of all U.S. cyber policies and structures.
With regard to public-private partnerships, which include sector planning, the councils reported in May 2009 that the sector and other groups involved in this area performed valuable work but that there was a proliferation of plans and recommendations that resulted in government and private sector personnel and resources being spread across a multitude of organizations engaged in sometimes duplicative or inconsistent efforts. The review concluded that there are alternative approaches for how the federal government can work with the sectors and recommended that these options be explored. At this time, the President also created the office of Cybersecurity Coordinator—who is to be part of the White House’s National Security Staff and National Economic Council—to, among other things, assist in developing a new U.S. cyber policy. The Cybersecurity Coordinator position has not yet been filled.

Sector-Specific Agencies Have Yet to Update Their Respective Sector-Specific Plans to Fully Address Key Cyber Security Criteria as Called for by DHS Guidance

In response to our recommendation and as part of ongoing DHS efforts, the department initiated multiple efforts to improve the cyber content of the sectors’ SSPs. Examples include the following:

In February 2008, DHS invited all sectors (and nine accepted) to meet with cyber experts within DHS’s National Cyber Security Division to support the development of increased cyber content in SSPs.
In April 2008, DHS issued guidance to agencies on how to report on the progress of annual reviews of the SSPs.
In March 2009, DHS released guidance that specifically requested that agencies, as a part of their 2010 SSP rewrites, fully address all cyber-related weaknesses, including those identified in our October 2007 report.
In addition, DHS has had personnel from its Software Assurance Program work with public and private sector partners to develop a process for identifying exploitable software before security breaches occur.
However, despite these steps, only 9 of the 17 SSPs have been updated, while 8 have not. In addition, of the 9, only 3 have been revised to address missing cyber-related criteria, and those changes only involved addressing a relatively small number (3 or fewer) of missing criteria. Specifically: In developing the original Chemical sector SSP, DHS had fully or partially addressed 29 criteria but did not address 1. The current version of the SSP fully addressed 1 of the criteria previously assessed as partial. In developing the original Commercial Facilities sector SSP, DHS had fully or partially addressed 20 criteria and did not address 10. The current version of the SSP fully addressed 1 cyber-related criterion that was previously not addressed and partially addressed 1 cyber-related criterion that was previously not addressed. (These totals include only 17 of the 18 sectors, as the Critical Manufacturing sector was established in 2008 and has not yet finished its sector-specific plan. While DHS guidance requires SSPs to be revised and reissued every three years, it also calls for the sector-specific agencies to annually review and update as appropriate their SSPs to reflect progress on actions planned and under way. The guidance allows agencies the option to report progress via an updated plan, a list of updates, or, in the case there is no progress to report, a memorandum of no action. The 8 SSPs not updated were memoranda of no action.) In developing the original Water sector SSP, the Environmental Protection Agency had fully or partially addressed 29 criteria and did not address 1. The current version of the SSP fully addressed 1 cyber-related criterion that was not previously addressed and fully addressed 2 cyber-related criteria that were previously partially addressed. Figure 1 summarizes the extent to which each SSP update addresses the 30 criteria.
The sector-specific agencies did not fully address missing cyber-related criteria in their SSP updates in large part for the following reasons: Agency officials said that in developing their plans, they were focused more on specific (physical) threats to the sector than on cyber security aspects. While DHS began efforts to improve the cyber content of SSPs, sector agency officials stated that DHS did not make them aware of the specific cyber criteria shortfalls we identified and reported on in 2007. While DHS issued SSP (formatting) guidance in 2008, this guidance did not specifically request updates to cyber security aspects of the plans or provide other substantive direction. As previously stated, DHS issued guidance in March 2009 that specifically requested that the sectors address cyber criteria shortfalls in their 2010 sector-specific plan revisions. However, until these plans are issued, it is not clear whether they will fully address cyber requirements. Notwithstanding this, the continued lack of SSPs that fully address key cyber elements has reduced the effectiveness of the existing sector planning approach and thus increases the risk that the nation’s critical cyber assets have not been adequately identified, prioritized, and protected. Sector Plans and Related Reports Do Not Fully Provide for Effective Implementation To provide for effective sector plan implementation, DHS issued guidance that called for the sector-specific agencies to provide for such activities in their SSPs and sector annual reports. Specifically, with regard to the SSPs, the department issued March 2006 guidance directing the sector-specific agencies to develop and incorporate in their SSPs actions and activities—referred to as implementation actions—essential to carrying out the plans and achieving the goal of securing the sectors’ cyber and other assets.
According to the guidance, implementation actions are to include (1) a description of the actions necessary to implement the plan, (2) milestones for when the actions are to be accomplished, and (3) the parties responsible for managing and overseeing action execution. (In its guidance, DHS refers to these actions as an implementation matrix.) Developing and updating implementation actions, including milestones and responsible parties, is important for reporting and assessing the progress and effectiveness of the sector-specific plans. With regard to sector annual reports, the department issued guidance in March 2008 that called for sector-specific agencies (in their 2008 annual reports to be issued later in 2008) to (1) update implementation actions and (2) report on the extent of progress in achieving the actions. Of the 17 SSPs developed in response to DHS’s guidance (currently, there are 18 sectors; however, the Critical Manufacturing sector was established in 2008 and has not yet completed a sector-specific plan):
14 included implementation actions that addressed all three elements:
o Banking and Finance,
o Chemical,
o Commercial Facilities,
o Dams,
o Defense Industrial Base,
o Emergency Services,
o Government Facilities,
o Information Technology,
o National Monuments and Icons,
o Nuclear Reactors,
o Public Health and Healthcare,
o Telecommunications,
o Transportation, and
o Water.
2 included implementation actions but each only partially addressed the three elements:
o Energy, and
o Postal and Shipping.
Both of these sectors’ plans identified actions and milestones critical to implementation of the plan but did not identify the parties responsible for the specified actions.
1 did not include implementation actions:
o Agriculture and Food.
In addition, with regard to sector annual reporting, 5 sectors updated and reported on the extent of progress in carrying out their implementation actions, while the other 12 did not.
Those that did were
o Dams,
o Information Technology,
o National Monuments and Icons,
o Nuclear Reactors, and
o Water.
(The Critical Manufacturing sector was not requested to develop an annual report, as the sector was established in early 2008. Implementation actions were updated in one area covered under the Nuclear Reactors sector.) Those that did not were
o Agriculture and Food,
o Banking and Finance,
o Chemical,
o Commercial Facilities,
o Defense Industrial Base,
o Emergency Services,
o Energy,
o Government Facilities,
o Postal and Shipping,
o Public Health and Healthcare,
o Telecommunications, and
o Transportation.
Figure 2 shows, by sector, each sector’s progress in developing and updating actions for effective implementation. In addition to these implementation actions, the sectors were to report on sector goals and priorities, sector programs, sector coordination, research and development progress and gaps, funding priorities, sector security practices, and overall progress of critical infrastructure protection efforts. However, these areas, including overall progress, did not specifically address implementation progress with the sector-specific plan. For example, the energy sector reported on, among other things, progress in communicating with sector partners, protecting international energy assets, and collaborating with the Department of Homeland Security. In addition, the communications sector reported on, among other things, progress in narrowing key gaps identified in the sector’s 2007 report and progress with key programs. Despite this, the reporting was not sufficient for evaluating either sector-wide progress with sector-specific plans or the effectiveness of these plans. The incomplete implementation updates and progress reports are due in part to DHS not following up and working to ensure that all sector plans were fully developed and implemented in accordance with departmental guidance.
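To make the three-element test concrete, the assessment logic described above can be sketched in a few lines. This is purely illustrative: the data structure, field names, and sample entries below are hypothetical, and only the three required elements (action descriptions, completion milestones, and responsible parties) come from the DHS guidance described in this briefing.

```python
# Hedged sketch of the three-element check applied to each sector's
# implementation actions. Field names and sample data are hypothetical;
# the three required elements come from the DHS guidance described above.
REQUIRED_ELEMENTS = {"description", "milestone", "responsible_party"}

def assess_plan(actions):
    """Classify a plan the way the briefing does: full, partial, or missing."""
    if not actions:
        return "no implementation actions"
    covered = set()
    for action in actions:
        # An element counts as addressed if any action supplies it.
        covered |= {k for k in REQUIRED_ELEMENTS if action.get(k)}
    return "fully addressed" if covered == REQUIRED_ELEMENTS else "partially addressed"

# Hypothetical examples mirroring the report's outcomes:
full = [{"description": "Deploy intrusion sensors",
         "milestone": "FY2009 Q2",
         "responsible_party": "Sector-specific agency"}]
partial = [{"description": "Update sector risk assessment",
            "milestone": "FY2009 Q4"}]  # no responsible party identified
```

Under this sketch, a plan with actions and milestones but no responsible parties, as reported for the Energy and Postal and Shipping sectors, would classify as partially addressed.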
Specifically, although DHS issued periodic sector-planning guidance, periodically met with sector officials, and conducted other planning-related activities as discussed above, department officials said their follow-up and oversight of the sector plans did not always result in the sectors developing plans that fully meet DHS guidance. These officials said this occurs because, as part of DHS’s partnership with the private sector, the parties do not always agree on the extent to which DHS guidance is to be addressed in performing sector planning activities. Consistent with this, our past cyber critical infrastructure protection research and extensive experience at the sector agencies and their private sector counterparts have shown that the public-private partnership is indeed challenging to manage. That research and work also pointed out that DHS nonetheless has a leadership role and responsibility to make sure (1) the partnership works effectively and (2) the sectors plan for and implement efforts aimed at protecting the nation’s cyber and other critical infrastructure, including ensuring the current sector approach is still worth pursuing and considering, where appropriate, alternative approaches. (See, for example, GAO, Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities, GAO-05-434 (Washington, D.C.: May 26, 2005); and Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics, GAO-07-39 (Washington, D.C.: Oct. 16, 2006).)
Shortfalls with Current Public-Private Partnership Approach and Related Sector Planning Highlighted in Recent Studies by Expert Commission and Presidential Working Group In addition to the above briefing results, the recent reports by the Commission on Cybersecurity for the 44th Presidency and the President’s 60-day review also identified shortfalls with the current public-private partnership approach and related sector planning that show such planning is not effective. To address the shortfalls, the commission and presidential review identified options to be considered as means of improving sector planning. Examples include the following: The cyber security commission recommended simplifying the sector approach by prioritizing sectors in order to focus planning and other activities on the most important sectors—which it identified as Energy, Finance, Information Technology, and Communications—with the most important cyber assets. The President’s review identified a number of models of effective public-private partnership and planning (e.g., the processes and structures used by the United Kingdom) and suggested that the positive attributes of these models be applied to the sector agencies and related organizations. It also recommended streamlining the existing sector and other organizations involved in the partnerships to optimize their capacity to identify priorities and develop response plans. Accordingly, we recommend that the Secretary of Homeland Security, consistent with any direction from the Office of the Cybersecurity Coordinator, assess whether the existing sector-specific planning processes should continue to be the nation’s approach to securing cyber and other critical infrastructure and, in doing so, consider whether proposed and other options would provide more effective results. If the existing approach is deemed to be the national approach, we also recommend that the Secretary make it, including the cyber aspects, an agency priority and manage it accordingly.
This should include collaborating closely with other sector-specific agencies to develop sector-specific plans that fully address cyber-related criteria and to issue sector annual reports that (1) include updated implementation actions and associated milestones and (2) report progress against plan commitments and timelines. Agency Comments and Our Evaluation In oral and written comments on a draft of this briefing, the Director of Infrastructure Protection’s Partnership and Outreach Division and other department officials commented on the following two areas: First, they stated that they believed that the sector agencies had made more progress in implementing cyber-related criteria than reported in our briefing due to other ongoing DHS and sector efforts outside the SSPs and sector annual reports (implementation actions), which were the focus of the briefing. For example, DHS officials said its National Cyber Security Division works regularly with many sectors on cyber assessments, exercises, and information sharing. In addition, DHS cites two cross-sector cyber working groups that play an important role in advancing cyber security. While these and the other examples provided by DHS on the surface appear to improve cyber security, DHS officials did not show how these activities helped the agencies address missing cyber-related criteria in their SSPs or effectively implement their plans. Second, the officials stated that focusing on the agencies’ efforts the year after they issued their sector plans is premature, as the agencies have until 2010 to rewrite and reissue their next sector plans. While the NIPP calls for the next SSPs to be issued in 2010, it also calls for the sector-specific agencies to annually review and update as appropriate their SSPs, which is a means to provide an interim snapshot of where agencies stand in addressing their gaps and is why we used it as a basis to assess agency progress.
DHS officials also provided technical comments, which we have incorporated into the briefing as appropriate. Section 7: Critical Infrastructure Protection Research and Development (R&D) In addition to the contact named above, the following staff also made key contributions to this report: Gary Mountjoy, Assistant Director; Scott Borre; Rebecca Eyler; Lori Martinez; and Teresa Smith.
The nation's critical infrastructure sectors (e.g., energy, banking) rely extensively on information technology systems. The Department of Homeland Security (DHS) issued guidance in 2006 that instructed lead federal agencies, referred to as sector-specific agencies, to develop plans for protecting their sectors' critical cyber and other (physical) infrastructure. These agencies issued plans in 2007, but GAO found that none fully addressed all 30 cyber security-related criteria identified in DHS's guidance and recommended that the plans be updated to address the missing criteria by September 2008. GAO was asked to determine the extent to which sector plans have been updated to fully address DHS's cyber security requirements and assess whether these plans and related reports provide for effective implementation. To do this, GAO analyzed documentation, interviewed officials, and compared sector plans and reports with DHS cyber criteria. Although DHS reported many efforts under way and planned to improve the cyber content of sector-specific plans, sector-specific agencies have yet to update their respective sector-specific plans to fully address key DHS cyber security criteria. For example, of the 17 sector-specific plans, only 9 have been updated. Of these 9 updates, just 3 addressed missing cyber criteria, and those 3 involved only a relatively small number (3 or fewer) of the criteria in question. Recently DHS issued guidance specifically requesting that the sectors address cyber criteria shortfalls in their 2010 sector-specific plan updates. Until the plans are issued, it is not clear whether they will fully address cyber requirements. Accordingly, the continuing lack of plans that fully address key cyber criteria has reduced the effectiveness of the existing sector planning approach and thus increases the risk that the nation's cyber assets have not been adequately identified, prioritized, and protected.
Most sector-specific agencies developed and identified in their 2007 sector plans those actions--referred to by DHS as implementation actions--essential to carrying out the plans; however, since then, most agencies have not updated the actions and reported progress in implementing them as called for by DHS guidance. Specifically, in response to 2006 guidance that called for agencies to address three key implementation elements (action descriptions, completion milestones, and parties responsible), most sectors initially developed implementation actions that fully addressed the key elements. However, while 2008 guidance called for implementation actions to be updated and for sector reports to include progress reporting against implementation action milestone commitments, only five sectors updated their plans and reported on progress against implementation actions. DHS attributed this in part to the department not following up and working to ensure that all sector plans are fully developed and implemented in accordance with department guidance. The lack of complete updates and progress reports is further evidence that the sector planning process has not been effective and thus leaves the nation in the position of not knowing precisely where it stands in securing cyber critical infrastructures. Not following up to address these conditions also shows DHS is not making sector planning a priority. Further, recent studies by a presidential working group--which resulted in the President establishing the White House Office of Cybersecurity Coordinator--and an expert commission also identified shortfalls in the effectiveness of the current public-private partnership approach and related sector planning and offered options for improving the process. Such options include (1) prioritizing sectors to focus planning efforts on those with the most important cyber assets and (2) streamlining existing sectors to optimize their capacity to identify priorities and develop plans.
Given this, it is essential that DHS and the to-be-appointed Cybersecurity Coordinator determine whether the current process as implemented should continue to be the national approach and thus worthy of further investment.
Billions of fasteners are used each year in safety-critical applications such as buildings, nuclear power plants, bridges, motor vehicles, airplanes, and other products or equipment. For example, an automobile may have as many as 3,000 fasteners. In 1988, the House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations issued a report on counterfeit and substandard fasteners that, along with hearings held by the House Science Committee, led to the enactment of FQA on November 16, 1990. The subcommittee reported that failures of substandard and often counterfeit fasteners may have been responsible for deaths, injuries, and reduced defense readiness, and that they potentially threatened the safety of every American. According to the subcommittee report, the Defense Industrial Supply Center, which supplies fasteners to the armed services, found that its inventory contained over 30 million counterfeit fasteners and that Army depots contained another 2.6 million. Similarly, the National Aeronautics and Space Administration (NASA) reported that it found substandard fasteners in space shuttle equipment, and six of its fastener vendors were found to have inadequate quality-control systems. The Air Force likewise discovered substandard flight safety-critical aerospace fasteners in its inventory. FQA covers certain threaded, metallic, heat-treated fasteners of one-quarter inch diameter or greater for use in safety-critical applications.
As originally enacted in 1990, FQA required manufacturers and importers to submit all lots of fasteners with significant safety applications to accredited laboratories for testing; established a laboratory accreditation program at the Commerce Department’s National Institute of Standards and Technology (NIST); required original test certificates to accompany the fasteners throughout the sale process; established requirements for manufacturers’ insignias to ensure traceability of fasteners to manufacturers and distributors; and provided for civil and criminal penalties for violations of the act. Since its passage, FQA has been amended several times. Concerns over the regulatory burden of FQA on aviation manufacturers led Congress, in August 1998, to amend the act to exempt certain fasteners approved by the Federal Aviation Administration for use in aircraft. The 1998 amendments also delayed implementation of NIST’s regulations for accrediting testing laboratories. FQA was amended again on June 8, 1999, to make it less burdensome: Fasteners that are part of an assembly or that are ordered for use as a spare, substitute, service, or replacement part in a package containing 75 or fewer parts at the time of sale or are contained in an assembly kit (i.e., the small-lot exemption) were exempted from coverage. Fasteners manufactured in a facility using quality-assurance systems were exempted from coverage. The amendment required accredited laboratory testing only of fasteners manufactured to consensus standards requiring testing, and postponed that requirement until June 2001. Companies were allowed to transmit and store electronically all records on fastener quality provided that reasonable means of authentication of the source of the document existed. The Commerce Department was required to establish and maintain a hotline for reporting alleged violations of the law. All credible allegations would then be forwarded to the Attorney General.
The amendment also made it unlawful to knowingly misrepresent or falsify the fastener’s record of conformance or identification, characteristics, properties, mechanical or performance marks, chemistry, or strength. Although FQA does not mention Customs, Customs is authorized by 15 U.S.C. § 1125(b) to identify and detain imported goods marked or labeled with a false description or representation. Under this authority, Customs has conducted spot checks of imported fasteners since 1987 to determine if fasteners’ descriptions or representations are accurate. It has seven laboratories located around the country that provide scientific support to all Customs officers, other government agencies, and foreign governments as part of international assistance programs. Customs laboratories tested samples from randomly selected shipments of graded bolts imported from January through April 1998 in various sized lots and again in March and April 2001. These included one or more of the following tests: carbon, sulfur, phosphorus, alloying elements (chemical tests); or tensile strength and hardness (mechanical tests). Customs’ Chicago laboratory tested 66 randomly selected shipments of graded bolts (12 in small lots) imported during March and April 2001 and found that none were substandard. As discussed below, this is a decrease from results of tests that Customs did before December 1999. Customs’ laboratories also tested a random sample of 77 shipments of graded bolts imported in various sized lots from January 12 to April 12, 1998, and found three (not in small lots) to be substandard. The bolts failed either the tensile or hardness test and were imported through Chicago from Korea or Taiwan. On the basis of these sample results, the Customs study estimated that 5 percent of the 3,097 shipments of the same type of bolts that entered U.S. ports during the 3-month test period were substandard.
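The arithmetic behind such an extrapolation can be sketched as follows. The 3-of-77 sample result and the 3,097-shipment population are taken from the study; the normal-approximation confidence interval is our own illustrative addition, and the simple point estimate (about 3.9 percent) differs somewhat from Customs' published 5 percent figure, which presumably reflects weighting or stratification details not described in this report.

```python
# Hedged sketch: extrapolating a sample failure rate to a shipment population.
# The 3-of-77 sample and 3,097-shipment population come from the Customs study;
# the normal-approximation confidence interval is our own assumption and may
# not match Customs' actual estimation methodology.
import math

def estimate_substandard(failures, sample_size, population):
    p = failures / sample_size                 # sample proportion
    se = math.sqrt(p * (1 - p) / sample_size)  # standard error of the proportion
    margin = 1.96 * se                         # 95% normal-approximation margin
    return {
        "point_pct": 100 * p,                                    # point estimate, in percent
        "ci_pct": (100 * max(p - margin, 0.0), 100 * (p + margin)),  # clipped at zero
        "projected_shipments": round(population * p),            # extrapolated count
    }

result = estimate_substandard(failures=3, sample_size=77, population=3097)
```

The wide interval this produces illustrates why a 77-shipment sample supports only a rough estimate of the substandard rate across all shipments.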
In addition to testing graded fasteners imported in March and April 2001, Customs’ Chicago laboratory tested, at our request, samples of graded bolts from 15 small lots that DSCP had purchased between January 1998 and February 2001, and found that none were defective. Three lots were from contracts for purchases after December 1999 and the remainder were before this time. According to a DSCP official, there is no way to determine if the contractors used foreign or domestic materials. Because of the small number of lots tested, the results, by themselves, cannot be used to make any conclusions about industry changes in manufacturing small lots. These results are, however, the best data available on fasteners that DSCP purchased in small lots. None of the 14 responses to our Federal Register notice stated that the fastener industry had changed any practices as a result of the small-lot exemption, as shown in the examples below. The Industrial Fasteners Institute and the National Fastener Distributors Association said they believe that there will be no evidence of significant changes in industry practice because most fasteners sold under the small-lot exemption are produced under quality-assurance systems and are therefore not subject to the act. They further stated that since fastener manufacturers can comply with the test requirements in the amended act in a cost-efficient manner, it is doubtful that industry members would attempt to avoid these costs by marketing fasteners in small-lot packages. The Canadian Fasteners Institute said that in the last decade, the fastener industry has made great advances and investments in product quality control and assurance. It said that the concern with the small-lot exemption stems from its potential for creating a public safety hazard and that the opportunity for the emergence of substandard products in commerce is too great a risk with the small-lot exemption in place.
It suggested that, in lieu of any exemptions, FQA be amended to say that the manufacturer, distributor, or importer that sells fasteners as having certain mechanical and physical properties must be capable of substantiating those properties. That is, promises a seller makes to a buyer must be verifiable with objective evidence. The Alliance of Automobile Manufacturers and the Association of International Automobile Manufacturers (AIAM) said that their members produce virtually all the passenger cars and light trucks sold in the United States and use 300 billion fasteners annually. They reported that Congress exempted most automotive fasteners from FQA because strong incentives exist to enhance fastener quality, given the potential impact of faulty fasteners on customer satisfaction, product liability, and regulatory liability. They said that manufacturers have developed various measures, as follows, to assure the quality of the fasteners that they purchase: Proprietary standards—Vehicle manufacturers have developed their own fastener standards to assure that their fasteners are appropriate for specific applications. Quality-assurance systems—Vehicle manufacturers generally require that their fastener suppliers be certified under fastener quality-assurance systems to minimize the occurrence of nonconforming fasteners. Closed-loop acquisition—Vehicle manufacturers generally purchase their fasteners from approved suppliers to assure quality and accountability, and rarely purchase generic fasteners on the open market. The Alliance and AIAM said that they surveyed their members to obtain responses to the questions contained in our Federal Register notice. They said that the responses they received represented over 90 percent of U.S. light vehicle sales in calendar year 1999. 
None of the respondents reported any significant changes in procurement and packaging practices that involved a reduction in units per package to below 75 units, or an increase in the use of assembly kits as a means of complying with the FQA requirements through the small-lot exemption. The Alliance and AIAM said that on the basis of these survey results, virtually all of the fasteners produced to assemble or service members’ products are either manufactured to internal company proprietary standards or are produced under a qualifying fastener quality-assurance system, or both. As a result, they said much less than 1 percent of fasteners purchased are exempt from FQA solely through the small-lot exemption. These groups reported that the small-lot exemption still serves a very important purpose: to allow the continued availability, at an affordable price, of many spare-part fasteners required to service their members’ products in a safe manner. The majority of these small package/assembly kit fasteners are used to service older models that typically have very low annual sales of spare parts. Without this vital exemption, they report, the costs of such parts would become prohibitive, forcing their members to remove many of these products from the market. In such a case, they believe, the customer desiring to service his or her car would typically be forced to substitute the correct-specification fastener with a generic hardware store look-alike fastener, one that in all likelihood was manufactured to different specifications and uncertain quality standards. The Equipment Manufacturers Institute, an association of companies that manufacture agricultural, construction, forestry, materials-handling, and utility equipment, reported that its members want the small-lot exemption to remain in law. 
They are concerned that altering or removing it could result in burdensome paperwork and wasteful and unnecessary quality tests for fasteners that are commonly used for the off-road equipment industry. They said this would result in large nonvalue-added costs that would ultimately be borne by the consumer, reduce America’s global competitiveness, and cost jobs. Additionally, they stated, fastener quality has not been a problem for their industry and remains so today. Other comments received included the following: The director of quality assurance at Huck Fasteners, Inc., said that he had surveyed his eight manufacturing facilities and found no changes in how fasteners are packaged as a result of FQA. A fastener manufacturer’s representative said that he had not seen any changes in industry practices as a result of the small-lot exemption, and that all the manufacturers and distributors he knows are in compliance. The president of Edward W. Daniel Co., a manufacturer of industrial lifting hardware and a member of the National Fastener Distributors Association, said that most manufacturers/importers of fasteners have developed quality programs and maintain the appropriate records for tracing the manufacturing materials used. None of the officials we spoke with at DSCP or NASA reported any evidence of changes in fastener industry practices resulting from, or apparently resulting from, the small-lot exemption. DSCP officials reported that their agency requires prospective suppliers of fasteners to have a quality-assurance system. Likewise, officials from the Departments of Commerce and Justice, agencies that have specific responsibilities under FQA, stated that they did not have any evidence of changes in fastener industry practices. DSCP did not report any changes in industry practices. It operates a program that requires both manufacturers and distributors who want to sell to it to be prequalified.
According to the agency Web site, applicants for the program must demonstrate their controls and established criteria to provide maximum assurance that the products procured conform to specifications. In addition, DSCP tests certain product lines, such as aerospace products, and randomly selects products for testing on a regular basis from its inventory. DSCP officials said that they manage approximately 1.2 million items, of which about 300,000 are fastener products and about 10 percent are covered under FQA. None of NASA’s nine centers reported any changes in industry practices as a result of the small-lot exemption. NIST officials responsible for FQA said that, as of March 31, 2001, they have not received any reports that the fastener industry has changed any practices as a result of the small-lot exemption. Similarly, officials from the Bureau of Export Administration reported that, as of March 30, 2001, their fraud hotline, which became operational on June 27, 2000, had not received any allegations that relate to the small-lot exemption. Officials at the Department of Justice said that the 1999 amendments to FQA were so new that neither its criminal nor civil divisions had any activity involving fasteners. Additionally, they said, they were not aware of any prosecutions or convictions involving fasteners sold in packages of 75 or fewer or in assembly kits since December 1999. We found no evidence that the fastener industry has changed any practices resulting from, or apparently resulting from, the small-lot exemption. We provided a draft of this report to the Secretary of Commerce, the Secretary of Treasury, and the Secretary of Defense for review and comment. In a June 4, 2001, letter, the Secretary of Commerce stated that the relevant bureaus of the Department of Commerce had reviewed the report and had no substantive comments (see app. III). Other Commerce staff provided technical comments on the draft report, which we incorporated as appropriate.
In a May 23, 2001, memorandum, the Director, Office of Planning, U.S. Customs Service, stated that he had no substantive comments to make (see app. IV). Other U.S. Customs staff provided technical comments on the draft report, which we also incorporated as appropriate. The Department of Defense provided comments, concurring in the report’s findings and providing technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Commerce; the Secretary of the Treasury; the Secretary of Defense; and the Administrator, National Aeronautics and Space Administration. Copies will also be available at our Web site at www.gao.gov. Should you have any questions on matters contained in this report, please contact me at (202) 512-6240 or Alan Stapleton, Assistant Director, at (202) 512-3418. We can also be reached by e-mail at koontzl@gao.gov or stapletona@gao.gov, respectively. Other key contributors to this report included David Plocher and Theresa Roberson. As stated in FQA, our objective was to determine whether there had been any changes in fastener industry practice "resulting from or apparently resulting from" the small-lot exemption in FQA. To achieve this objective, we compared the results of Customs’ mechanical and chemical tests of bolts imported during March and April 2001 with the results of similar testing performed by Customs for bolts imported from January through April 1998. These tests had several limitations. According to Customs officials, the document that an importer provides for each shipment of fasteners does not have to identify that the shipment contains packages of 75 or fewer fasteners (i.e., small lots) or that the fasteners are of a particular grade. Therefore, for both the 1998 and 2001 tests, Customs could not randomly select just those shipments containing small lots of grade 5 and grade 8 fasteners.
Rather, the selection also included ungraded fasteners that were not sent to the laboratory for testing because, without the grade marking, Customs could not identify the test standards. For the 2001 test, Customs recorded whether the package selected contained 75 or fewer graded bolts so that we could compare those test results with the results for packages containing more than 75 bolts. We observed Customs’ inspection of imported fasteners at Chicago’s O’Hare International Airport; we also visited Customs’ Chicago laboratory and observed its testing of some of the selected fasteners. Another limitation was that Customs designed both its 1998 and 2001 studies to randomly select only shipments valued at $2,500 or more so that resources were not spent on small, inconsequential shipments. However, problems during the 1998 study caused over 28 percent of the shipments selected to be valued at less than $2,500. These included 80 shipments valued at less than $500 and at least one valued at $1. Based on the price of grade 5 and grade 8 bolts, it is likely that some of the 80 shipments valued at less than $500 included in the 1998 test were in small lots. To address our objective, we also compared the results of Customs’ mechanical and chemical tests of fasteners DSCP purchased in small lots from January 1998 to December 1999 with the results of Customs’ mechanical and chemical tests of fasteners DSCP purchased from January 2000 to January 2001. We selected DSCP because of its problems in the 1980s with counterfeit fasteners. We asked DSCP to send the samples directly to Customs for testing. There were limitations in DSCP’s selection of the samples. DSCP officials initially identified 56 different contracts for small-lot purchases for potential testing, yet only 15 lots were ultimately tested.
DSCP officials decided that 15 of the 56 contracts were ineligible for testing because the lot size was fewer than 25 bolts; thus, taking several bolts for testing could result in DSCP’s not being able to fill a customer’s order. Officials further said that 25 small-lot purchases were not tested because no inventory remained at the time the depots were asked to ship the bolts to Customs’ laboratory. Finally, one sample sent to Customs for testing was not traceable to a contract number, and so it was eliminated from the test results. To give the public an opportunity to report any changes in industry practices, we published a notice in the Federal Register on August 9, 2000 (F.R. 48714), and on our Web site, asking for comments no later than November 30, 2000. We also notified nearly 60 journals, newsletters, associations, and manufacturers of our Federal Register notice. As a result, several journals (e.g., Fastener Industry News and Wire Journal International) wrote articles about our study that often referred readers who wanted more information to our Federal Register notice or Web site. We also asked associations representing the fastener industry and the automobile industry to notify their memberships about our Federal Register notice and Web site notice. We asked officials at agencies that had experienced problems with fasteners in the past (DSCP and NASA) and NIST (with responsibilities under FQA) if they were aware of any changes in industry practices resulting from, or apparently resulting from, the FQA small-lot exemption. In addition, we asked officials at Commerce’s Bureau of Export Administration whether they had received any FQA allegations involving small lots of fasteners and officials in the Department of Justice about any allegations, investigations, prosecutions, or convictions involving fasteners sold in small lots or in assembly kits. 
We also attempted to compare the results of NASA’s tests of grade 8 fasteners purchased by its Langley Research Center before and after December 1999. However, too few mechanical and chemical tests had been completed to make this comparison possible. We conducted our review from January 2000 to May 2001, in accordance with generally accepted government auditing standards. We performed our work in Washington, D.C., and Chicago, Illinois.
This report reviews changes in fastener industry practice "resulting from or apparently resulting from" the small-lot exemption of the Fastener Quality Act. GAO found no evidence that the fastener industry changed any practices resulting from, or apparently resulting from, the small-lot exemption. The Customs Service's limited tests of imported fasteners in 2001 found no evidence of substandard fasteners and no evidence of any decline in the quality of fasteners from the results of tests Customs conducted in 1998.
The 1945 Charter of the United Nations gives the UN Security Council primary responsibility for the maintenance of international peace and security. UN peacekeeping operations have traditionally been associated with Chapter VI of the charter, which outlines provisions for the peaceful settlement of disputes. However, in recent years, the Security Council has increasingly used Chapter VII to authorize the deployment of peacekeeping operations into volatile environments where the government of the host country is unable to maintain security and public order. Chapter VII allows the peacekeepers to take military and nonmilitary action to maintain or restore international peace and security. Chapter VIII authorizes regional organizations, such as the North Atlantic Treaty Organization (NATO) and the African Union (AU), to resolve disputes prior to intervention by the UN Security Council, so long as the activities of the regional organizations are consistent with UN principles. In this report, we differentiate between traditional and multidimensional mandates for peacekeeping operations. Traditional operations generally monitor or supervise cease-fire and other peace agreements between formerly warring parties. Their tasks can include monitoring of border demarcation, exchange of prisoners, and demobilization efforts. Multidimensional operations tend to go beyond traditional peace monitoring tasks by attempting to restore or create conditions more conducive to a lasting peace. On two occasions since 1998, the UN Security Council granted multidimensional operations the executive authority to direct and carry out the construction or reconstruction of political, legal, and economic institutions in Timor-Leste and Kosovo.
Multidimensional mandates generally include one or more of the following tasks: monitoring, supervising, training, or reconstructing police forces and otherwise supporting efforts to restore the rule of law; monitoring, assisting, or instituting efforts to improve human rights; supporting, facilitating, coordinating, or safeguarding humanitarian relief; monitoring, supporting, coordinating, or safeguarding assistance provided to help refugees or internally displaced persons return home and reintegrate into the society of the affected country or region; and conducting, supporting, or coordinating elections and other democracy-building efforts. In general, the United States has supported the expansion of UN peacekeeping operations as a useful, cost-effective way to influence situations affecting U.S. national interests without direct U.S. intervention. For example, in 2006, the United States voted for UN operations to ensure that Southern Lebanon was not used for hostile activities; to assist with the restoration and maintenance of the rule of law and public safety in Haiti; and to contribute to the protection of civilian populations and facilitate humanitarian activities in Darfur. These operations support U.S. national interests by carrying out mandates to help stabilize regions and promote international peace. The UN manages 16 peacekeeping operations worldwide as of September 2008, 6 of them in sub-Saharan Africa. Figure 1 shows the location of UN peacekeeping operations as of September 2008. The United States contributes the greatest share of funding for peacekeeping operations. All permanent members of the Security Council—China, France, Russia, the United Kingdom, and the United States—are charged a premium above their assessment rate for the regular budget (22 percent for the United States).
For the 2008-2009 UN peacekeeping budget year, the UN assessed the United States about $2 billion, according to a State official, or about 26 percent of the total UN peacekeeping budget. This represents an increase of over 700 percent in the budget since 1998 (see fig. 2). The U.S. government also makes significant voluntary contributions in support of countries providing UN peacekeeping forces. For example, State obligated about $110 million in fiscal year 2007 and 2008 funds for countries providing forces for the UN operation in Darfur. In addition, the United States had provided 308 troops, police, and military observers to six UN peacekeeping operations as of September 30, 2008. The extent and nature of U.S. support for UN peacekeeping are largely set out in Section 10 of the UN Participation Act of 1945. For example, it limits total U.S. contributions to 1,000 troops at any one time. It also limits the U.S. government to providing the UN, free of charge, no more than $3 million worth of items or services—such as supplies, transportation assistance, or equipment—for each operation per year. UN guidelines call for DPKO to undertake planning and predeployment tasks before the approval of a UN Security Council mandate authorizing an operation. These include drawing up operations plans to address the expected mandate, estimated sector responsibilities, and force requirements. DPKO also assesses the availability of forces from potential contributors and then validates the estimates through visits of UN military and police officials to the host country and to troop- and police-contributing countries to assess unit readiness and availability. The Secretary General then issues a report on establishing the mission, including its size and resources. On the basis of the report, the Security Council may then pass a resolution authorizing the operation’s mandate and number of troops and police. According to U.S. officials, this authorized number of troops and police is the maximum level allowed.
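As a quick arithmetic cross-check of the U.S. assessment figures cited above, the sketch below (illustrative only; the dollar amounts are rounded as reported in the text) derives the approximate totals they imply.

```python
# Cross-check of the U.S. peacekeeping assessment figures cited above.
# Inputs are rounded as reported, so the results are approximate.

us_assessment = 2.0e9   # UN assessed the United States about $2 billion (2008-2009)
us_share = 0.26         # about 26 percent of the total peacekeeping budget

implied_total_budget = us_assessment / us_share
print(f"Implied total UN peacekeeping budget: ${implied_total_budget / 1e9:.1f} billion")

# "An increase of over 700 percent" since 1998 means the current budget is more
# than 8 times the 1998 budget, so the 1998 budget was under one-eighth of today's.
implied_1998_ceiling = implied_total_budget / 8
print(f"Implied 1998 budget (upper bound): ${implied_1998_ceiling / 1e9:.2f} billion")
```

The implied total of roughly $7.7 billion is consistent with the scale of the 16 operations described in this section.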
Although the Security Council may authorize the mission’s mandate, its full budget must still be prepared and approved. In this process, the UN Department of Field Support (DFS) prepares a draft budget, and the UN Advisory Committee on Administrative and Budgetary Questions reviews it. According to the UN, considerable scrutiny of the proposed budget occurs during this process, and there is debate among the member states that pay the bulk of the costs of the operation and the top troop contributors. The General Assembly then approves the budget for the amount agreed upon. UN guidelines note that the lead time required to deploy a mission depends on a number of factors, particularly the will of member states to contribute troops and police to a particular operation and the availability of financial and other resources, given long procurement lead times. For missions with highly complex mandates or difficult logistics, or where peacekeepers face significant security risks, it may take several weeks or even months to assemble and deploy the necessary elements. The UN has set a 90-day target for deploying the first elements of a multidimensional UN peacekeeping operation endorsed by the UN Security Council. Over the past decade, the UN has undertaken a number of assessments and initiatives to improve its peacekeeping organization, doctrine, planning, logistics, and conditions of service for peacekeeping staff, as well as its efforts to establish a capacity to rapidly deploy peacekeepers. For example, the 2000 report of the Panel on United Nations Peace Operations, or Brahimi report, made recommendations to the Secretary General to improve the strategic direction, planning, organization, and conduct of peace operations. In response, the UN consolidated all peacekeeping responsibilities into DPKO, substantially increased its staff, and took steps to improve and integrate mission planning.
Moreover, the Secretary General’s 2001 report No Exit Without a Strategy noted that missions’ mandates should include elements such as institution building and the promotion of good governance and the rule of law to facilitate sustainable peace. The Peace Operations 2010 initiative announced by the Secretary General in 2006 focused on further reforms in the areas of personnel, doctrine, partnerships, resources, and organization. As a result, the UN took steps to strengthen its capacity to direct and support peacekeeping operations that included splitting DPKO into two departments in 2007 by creating the separate Department of Field Support; establishing integrated operations teams to integrate the daily direction and support of peacekeeping operations; and, in 2008, issuing a consolidated statement of peacekeeping operations, principles, and guidelines and a field guide to assist senior staff in addressing critical mission startup tasks and challenges. GAO has reviewed the status of a number of UN reform initiatives, most recently the UN’s efforts to clarify lines of authority for field procurement between DPKO and DFS. Since 1998, UN peacekeeping operations have taken on more complex and ambitious mandates, taken place in increasingly challenging environments, and grown in size and scope. As shown in table 1, the operations have more mandated tasks and are increasingly authorized under Chapter VII of the UN charter to use all means necessary to carry out the mandate. The locations of the operations also are in less developed areas, as measured by the UN’s index of health, economic, and education levels, and the operations are deployed in some of the most politically unstable countries in the world. Finally, current operations with multidimensional mandates have an average of nearly 9 times as many troops, observers, and police as those in 1998, and more than 13 times as many civilian staff. Appendix V provides details on current UN peacekeeping operations.
Appendix VI provides details on the military capabilities of UN peacekeeping operations as of November 2008. Since 1998, the United Nations has undertaken operations with broader and more complex mandates than before. The 16 operations in 1998 had mandates averaging three tasks or objectives each. The mandates of 10 of these operations were limited to such traditional peacekeeping tasks as monitoring cease-fire agreements and boundaries between formerly warring parties. The other 6 operations had a small number of tasks, which went beyond traditional peace monitoring by calling for the restoration or creation of conditions more conducive to a lasting peace. In September 2008, the UN also had 16 ongoing peacekeeping operations, but 11 had multidimensional mandates with political, security, social, and humanitarian objectives. Also, 15 of the 17 UN peacekeeping operations begun or augmented since 1998 were multidimensional missions. According to the November 2000 report by the Panel on United Nations Peace Operations, the mandated tasks of these operations reflected the more comprehensive approach to restoring security the UN had adopted as part of its ongoing efforts to improve the strategic direction and conduct of peace operations. This report noted that the effective protection of civilians and assistance in postconflict environments requires a coordinated strategy that goes beyond the political or military aspects of a conflict if the operation is to achieve a sustainable peace. We reported that since 1999 the UN has increasingly focused on a more comprehensive approach to making a transition from peacekeeping to a sustainable peace.
Reflecting this trend, our analysis of the 17 UN operations since 1998 shows that operations averaged nine mandated tasks, with the most frequent tasks calling for the UN to monitor a peace or cease-fire agreement, use all means necessary to carry out the mandate (Chapter VII), help restore civil order with police support, train and develop the police force, support development of the rule of law, ensure human rights/women’s rights and protection, and support humanitarian assistance for internally displaced persons. Moreover, since 2006, the UN Security Council has mandated that peacekeeping operations include a responsibility to protect civilians from “genocide, war crimes, ethnic cleansing and crimes against humanity,” with force if necessary, when national authorities fail in this task. According to UN documents and officials, peacekeeping operations initiated after 1998 were deployed in less secure and more volatile postconflict situations. Since then, the Security Council has frequently deployed new operations into areas where the government of the host country was unable to maintain security and public order. For example, most of the UN operations ongoing as of September 2008 were deployed in locations that had among the highest levels of instability as measured by the World Bank’s index of political instability. Moreover, the Security Council has increasingly authorized peacekeepers to take all steps necessary to carry out their mandate, including the use of force, under Chapter VII of the UN Charter. In 1998, four UN missions operated under Chapter VII authority; in 2008, nine operated under explicit Chapter VII authority. UN operations currently are also being conducted in countries that are relatively less developed on average than the countries in which they were deployed a decade ago. This has increased the level of effort and resources needed to sustain peacekeeping operations, according to UN officials. 
In 1998, the average UN peacekeeping operation was deployed to a country with aggregate levels of knowledge, standard of living, and life expectancy that placed it in the medium category of development, as measured by the United Nations Development Program’s (UNDP) Human Development Index (HDI). Ten of the 17 operations initiated since 1998 were deployed to sub-Saharan Africa, of which 7 were in countries falling within the HDI’s lowest category of human development. As of September 2008, about 78,000, or 72 percent, of the UN’s uniformed and civilian peacekeepers were in sub-Saharan Africa. As peacekeeping operations have taken on more ambitious mandates in challenging environments, the operations have become larger and more complex, with expanded troop deployments and sophisticated capabilities. Seven of the 11 ongoing multidimensional UN operations in 2008 had deployed from 7,000 to over 17,000 troops. In 1998, multidimensional operations averaged fewer than 1,000 troops and military observers. UN troops also are being deployed in larger and more capable units, according to UN officials. As of November 2008, the UN had approximately 76 battalion-sized infantry units deployed, including 21 mechanized infantry battalions. Most recent operations require major troop-contributing countries to deploy at least one 800-person infantry battalion with armored vehicles and supported by its own engineer and logistics units. A March 2008 UN report noted that the UN’s peacekeeping deployments included over 5,000 engineers, 24,000 vehicles, and 200 aircraft. Appendix VI provides more information on the military capabilities required by ongoing multidimensional UN peacekeeping operations as of November 2008. The UN also has deployed more police to peacekeeping operations over the past 10 years. In June 1998, the UN deployed 2,984 police, compared with 11,515 police deployed as of September 2008.
The UN also has come to rely more heavily on formed police units (FPUs), armed units of approximately 125 to 140 officers trained in crowd control and other specialized tasks and equipped with armored personnel carriers. These units, which are deployed to UN operations as cohesive units by contributing countries, were first utilized in small numbers in 2003 but now compose about 40 percent of all UN police deployed. FPUs are intended to perform three main functions—protection of UN facilities and personnel, provision of security support to national law enforcement agencies, and national police capacity building—and the increase in their use reflects the trend toward operations with more complex mandates taking place in less secure situations. In contrast, UN police are individually selected and deployed by the UN to monitor host nation police activities or supervise local police training. The increasingly large and complex operations also require larger civilian staffs with a diverse range of skill sets to execute the mandate and coordinate with other UN and international organizations. In 2000, the average multidimensional operation deployed about 125 international civilian staff; in 2008, the average rose to 445 international civilian staff. A global survey of international peacekeeping reported that as of October 2007, international UN civilian staff deployed on UN peacekeeping operations worked in 22 occupational groups, including administration, aviation, engineering, rule of law, security, and transportation. The task of sustaining and supplying operations launched since 1998 has grown increasingly complicated due to their larger size and deployment in less developed and more unstable environments. Under these circumstances, units need more equipment, use it more intensively, consume more fuel, and require more maintenance due to increased wear and tear.
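The police figures above imply a rough unit count. The sketch below is a back-of-the-envelope illustration only; the report does not state how many FPUs were actually deployed.

```python
# Back-of-the-envelope estimate of how many formed police units (FPUs) the
# cited proportions imply; illustrative only, not a figure from the report.

total_police = 11_515                  # UN police deployed, September 2008
fpu_share = 0.40                       # FPUs compose about 40 percent of UN police
fpu_size_min, fpu_size_max = 125, 140  # officers per FPU

fpu_officers = total_police * fpu_share
units_low = fpu_officers / fpu_size_max   # fewer units if each is at full strength
units_high = fpu_officers / fpu_size_min  # more units if each is at minimum strength

print(f"FPU officers: about {fpu_officers:.0f}")
print(f"Implied number of FPUs: roughly {units_low:.0f} to {units_high:.0f}")
```

The result, on the order of 33 to 37 units, indicates the scale of the shift toward cohesive, contributed units since 2003.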
According to a senior UN official, such operations must bring in more international staff because skilled local personnel are scarce. They also must bring in more of their own food and water, and build their own roads, buildings, and accommodations from the ground up and then maintain them. The United Nations Organization Mission in the Democratic Republic of the Congo (MONUC) is an example of an operation that is heavily dependent upon aircraft to move and supply forces over a large area because the country lacks adequate roads. According to a July 2006 UN report, MONUC required 105 aircraft, distributed among 60 airports; maintenance of 150 landing sites; and aviation support staff of 1,600. This effort consumed 21 percent of MONUC’s total 2007-2008 budget, compared with an average of 11 percent for all UN peacekeeping operations. As a way to assess UN capacity, we developed a potential new peacekeeping operation to illustrate, in detail, the resources the UN would likely need to deploy a new operation. Based on our analysis of the evolution of peacekeeping operations and UN planning scenarios, this operation would likely be large and complex and take place in sub-Saharan Africa. The potential new operation would be consistent with the mandates of the 17 operations launched since June 1998 and have nine security, political, and humanitarian tasks. Based on the most appropriate UN planning scenario, the potential new operation would likely require 21,000 troops and military observers and 1,500 police. We estimate that this operation would require 4,000 to 5,000 civilian staff, and UN officials noted that it would have logistical needs comparable to those of other large, complex operations in similar environments. Like other peacekeeping operations located in sub-Saharan Africa, the potential new mission likely would confront limited roads, other infrastructure, and water, thereby requiring greater logistical planning and support.
Furthermore, according to the UN, in the majority of post-conflict scenarios, mine clearance is necessary to begin rehabilitating roads and other infrastructure. Our analysis is not intended to predict the size, scope, or location of a new UN peacekeeping operation. A new operation’s mandate and resource needs would be determined by the UN Security Council and the circumstances particular to the country to which the operation is deployed. Therefore, the requirements of a new operation could differ from those of the potential new operation presented here. The potential new operation would likely have a multidimensional mandate, with nine tasks related to security, political, and humanitarian efforts. The operation could be mandated to provide a secure environment, protect civilians and UN staff, monitor a cease-fire or peace agreement, and promote reconciliation. Political tasks could include supporting elections; helping establish rule of law and assisting in the reform of military, police, and corrections systems; and assisting in disarmament and demobilization of combatants. Humanitarian tasks could include monitoring human rights and developing the capability of the government. To derive these tasks for a potential new operation, we reviewed UN planning scenarios for a new operation in sub-Saharan Africa and selected the scenario that best matched our trend analysis of the 17 UN operations initiated or augmented since June 1998. The potential new operation likely would be located in sub-Saharan Africa because 10 of the 17 operations started or expanded since 1998 were deployed to the region. Like the areas of other peacekeeping operations in sub-Saharan Africa, the potential new mission’s area of operations would have limited infrastructure and utilities, lacking roads, buildings, and water, and would thus require increased logistical planning. 
Using the assumptions contained in the selected UN planning scenario, the potential new operation would be in a high-threat environment, political factions would recently have been fighting for control of the country, and there would be large numbers of internally displaced persons. As a precondition for deployment of the potential new operation, the UN would likely secure political and security agreements among the parties to the conflict and a clear statement of support from the host country for the deployment of a UN peacekeeping operation. To accomplish the political, security, and humanitarian tasks in the mandate, the potential new operation would require 21,000 troops and observers distributed among five sectors. Both combat-capable and supporting units would be required, including troops with armored personnel carriers, engineers, truck transport companies, and medical, aviation, and logistics units. The force size would be derived from a threat assessment that would determine how the UN troops could ensure a safe and secure operating environment while protecting civilians and UN staff. According to UN planners, a potential new force would likely require units with the capability to deter threats from armed factions supported by international terrorist groups, which previous operations did not have to take into account to the same degree. The force would need special troops to detect and defeat the threat of improvised explosive devices and would need significant intelligence resources. The operation would be mandated to provide area security for an estimated 1.5 million internally displaced persons (IDPs). Table 2 presents the composition of a potential new peacekeeping operation. The force’s operational units (14 infantry battalions and 1 mechanized battalion) would be distributed among five sectors. Each sector would contain all the civilian and uniformed components necessary to carry out the mandated tasks.
Four of the sectors would require two battalions each. The infantry battalions in these sectors would be deployed in mobile company-sized groups to provide wide coverage by patrolling, establishing checkpoints, and enforcing buffer zones and demilitarized areas. The plan envisions a larger force of five infantry battalions for the fifth sector, encompassing the capital city; these units would not require as many vehicles because much of their patrolling would be done on foot in urban areas. This sector would also maintain a mechanized battalion in reserve to serve as a rapid reaction force. The size of the helicopter force would be based on the need to provide aerial observation and firepower support 24 hours per day, 7 days per week, for all sectors, as well as the capability to transport infantry battalions and conduct search and rescue operations as needed. Many of the operational units would need to come from countries capable of providing supplies for the first 60 days after deployment, given the limitations of local infrastructure expected in this environment. The force would require five specialized logistics units to provide a number of base camp service and supply functions, five to six engineering companies, and four airfield support units to assist aviation operations. According to a UN planning scenario and UN officials, the potential new operation would likely require 1,500 police, including 700 officers in five FPUs. The police units would eventually assist with the reactivation of the host country’s police force; provide mentoring, skills training, and professional development assistance; advise on police reform and restructuring; and support capacity building and police oversight. However, as with the operation in Darfur, a large police force with a high profile would likely be needed to build confidence among the population.
Furthermore, as in other UN operations, police officers must speak the official language (English), know how to operate four-wheel drive vehicles, and have about 5 years of police service and a background in country development activities. We estimated that the potential new operation would require 4,000 to 5,000 civilian staff, based on discussions with UN officials and analysis of UN planning documents. International staff of other complex UN operations ongoing in sub-Saharan Africa constitute between 20 and 30 percent of total civilian staff. According to UN officials, operations initially have a higher percentage of international staff. A more precise estimate of the number of civilians needed for the potential new operation would require detailed information, such as information about the skills available in the local labor market. The potential new operation’s international civilian staff would likely include the following: a special representative of the Secretary General; Assistant Secretaries-General, including the force commander; directors, including police commissioner, judicial affairs, political affairs, and civil affairs; professional staff for legal affairs, rule of law, judicial affairs, child protection, finance, and mission support functions (logistics and administration, finance, budget, human resources and management, procurement); and a substantial allocation of field service officers to provide technical/administrative support. In addition to international staff, the potential new operation would need national support staff and national professional officers. Furthermore, according to UN estimates, between 20 and 25 percent of the civilian force of the potential new operation could be needed to provide security for its civilian staff and facilities in the expected high-threat environment.
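The staffing proportions described above can be combined into rough ranges. The sketch below (illustrative only, using the percentages cited in the text) computes what they imply for the 4,000 to 5,000 civilian staff estimate.

```python
# Rough ranges implied by the civilian staffing proportions cited above.

total_low, total_high = 4_000, 5_000       # estimated total civilian staff
intl_low_pct, intl_high_pct = 0.20, 0.30   # international staff share of total
sec_low_pct, sec_high_pct = 0.20, 0.25     # security staff share of total

intl_low = round(total_low * intl_low_pct)     # smallest plausible count
intl_high = round(total_high * intl_high_pct)  # largest plausible count
sec_low = round(total_low * sec_low_pct)
sec_high = round(total_high * sec_high_pct)

print(f"Implied international civilian staff: {intl_low} to {intl_high}")
print(f"Implied security staff: {sec_low} to {sec_high}")
```

These ranges (roughly 800 to 1,500 international staff and 800 to 1,250 security staff) underline how much of the civilian component would be absorbed by support and protection functions.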
UN officials could not provide an estimate of the logistical needs for the potential new operation without the detailed planning in the field that precedes actual deployments. However, they stated that total logistical needs would likely be comparable to those of other large, complex operations, such as the operation in the Democratic Republic of the Congo or Darfur. For example, the potential new operation would likely need to establish and sustain camps and other facilities; manage major contracts for transport, food, fuel, water, and property and other services; and plan and coordinate the use of engineering, transportation, and other specialized assets provided by troop-contributing countries. The potential new operation, as with other sub-Saharan operations, would be dependent upon specialized military support units to meet its logistics needs. The potential new force would likely have to build roads, buildings, and other infrastructure and would be heavily dependent on helicopters and other relatively expensive aviation units for movement and supply. For example, as of June 2008, the operation in the Democratic Republic of the Congo (the Congo) allocated 21 percent of its annual budget to air operations, compared with a UN-wide average of 11 percent. The UN would likely face difficulty in obtaining troops, military observers, police, and civilians for the potential new operation. As of September 2008, the UN was about 18,000 troops and military observers below the level of about 95,000 authorized for current operations. In addition, several peacekeeping operations needed specialized military units, such as units for logistics, helicopters, and transport.
A limited number of countries provide troops and police with the capabilities needed to meet current requirements, and some potential contributors may be unwilling to provide forces for a new operation due to such political factors as their own national interests and the environmental and security situation in the host country. The UN also has a large vacancy rate for international civilians and is considering proposals to address the difficulty of obtaining and retaining international civilian staff. Figure 3 illustrates the authorized and deployed levels of troops, police, and civilians. Moreover, the UN would likely face the logistics challenges that have confronted other large UN operations in sub-Saharan Africa. UN officials and performance reports note that the difficulty of obtaining needed personnel and other resources has had an impact on the ability of ongoing operations to fully execute their mandates. As of September 2008, about 77,000 troops and military observers were deployed to existing UN peacekeeping operations, an overall gap of 18,000, or about 20 percent, below the authorized level of approximately 95,000. Of the 18,000, approximately 11,000 are attributable to the operation in Darfur. According to the State Department, the UN has secured pledges of troops to fill most of the authorized numbers for Darfur, and the UN planned to deploy a majority of them by the end of the year. However, a UN report in October stated that the troop deployment would be delayed. The UN further reported that it had received no commitments from member states for some of the critical units required for the Darfur mission to become fully operational, including an aerial reconnaissance unit, transport units, a logistics support unit, and attack and transport helicopters with crews. Other operations have significant gaps between their deployed and authorized troop levels.
For example, as of September 2008, the operation in Lebanon had about 2,500 fewer troops than its authorized level, and a UN report stated that the UN was seeking these troops from member countries. In addition to existing needs, a September 2008 UN report estimates that 6,000 troops will be needed, along with specialized units, for an augmented operation in Chad and the Central African Republic in the first quarter of 2009. However, the Secretary General requested that the Security Council not authorize the mission until the UN obtained firm troop commitments. The UN would confront three critical issues in obtaining needed military resources for a potential new mission in sub-Saharan Africa. First, a relatively small number of countries have demonstrated the willingness and ability to provide the UN with units of sufficient size and capability. As of November 2008, 120 nations provide troops or police to UN operations; however, only 30 countries provide at least 1 of the 76 battalion-sized infantry units these operations require. A standard UN infantry battalion has 800 troops; U.S. government officials note that countries generally must commit 2 additional battalions for every battalion currently deployed to ensure sufficient units are available for the rotation cycle, entailing a total commitment of 2,400 troops. As of November 2008, UN operations lacked 8 battalion-sized infantry units for Darfur. The potential new operation discussed in this report would likely increase the potential need by 15 battalions. A UN official indicated that the UN would approach its major contributors, such as Bangladesh, India, and Pakistan, which have provided an increasingly large portion of total UN peacekeeping forces since 1998, if confronted with the challenge of staffing an operation similar to the potential new operation. Second, the potential new operation would require military logistics units, hospitals, military engineers, and military transport units.
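The rotation-cycle arithmetic cited above can be verified with a short sketch; the 800-troop battalion size and the rule of 2 additional committed battalions per deployed battalion are the figures U.S. government officials provided:

```python
# One deployed battalion plus two more committed for the rotation cycle.
BATTALION_SIZE = 800   # standard UN infantry battalion, per the text
DEPLOYED = 1
IN_ROTATION = 2        # additional battalions committed per deployed battalion

total_commitment = BATTALION_SIZE * (DEPLOYED + IN_ROTATION)
print(total_commitment)  # 2400 troops committed per deployed battalion
```

By the same logic, the 15 additional battalions of the potential new operation would entail a commitment of roughly 36,000 troops across contributing countries.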
As of November 2008, the UN relied on 37 countries to provide these specialized units in company strength or greater. The potential new operation would require 24 utility helicopters, 12 armed helicopters, and crews to fly them. However, according to U.S. officials and UN documents, these types of units and resources are difficult to obtain and are currently being sought for existing operations. As of December 2008, the UN had been unable to obtain any of the 28 helicopters needed for the operation in Darfur, according to a State official. A UN official said it would be reasonable to assume a similar inability to obtain helicopters for the potential new operation. Third, member state decisions to provide troops for UN operations depend on factors such as the state’s national interest, the operation’s mandate, and the host country’s environment and security situation. For example, concerns over the security situation in Rwanda in 1994 resulted in member states not providing additional troops for the UN operation. Member states were unwilling to provide needed troops and reinforcements for operations in Bosnia and Somalia for similar reasons. The government hosting the UN operation also can impose political restrictions. For example, the government of Sudan insists that the UN force in Darfur be composed primarily of troops from African member states. This led to the withdrawal of an offer by Norway and Sweden to provide a needed joint engineering unit to the operation, a decision that the Secretary General noted undermined operations. The potential new operation would require deployment of 1,500 police—800 individual UN police and 700 officers in five FPUs. However, as of September 2008, UN peacekeeping operations had a 34 percent gap between deployed and authorized levels of police. The total number of police authorized for all operations was 17,490, but the number deployed was 11,515.
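The 34 percent figure follows directly from the authorized and deployed totals just cited; a minimal check:

```python
# Police gap as of September 2008, using the figures in the text.
authorized = 17_490
deployed = 11_515

gap = authorized - deployed
gap_pct = 100 * gap / authorized
print(gap, round(gap_pct))  # 5975 34
```

That is, operations were short 5,975 police, or about 34 percent of the authorized level.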
Moreover, the UN required 46 FPUs as of June 2008, but the UN had deployed only about 31 FPUs. The gap between deployed and authorized FPUs stems mainly from the lack of units for operations in Darfur. The UN encounters difficulties in obtaining qualified UN police with the special skills its operations may require. For example, according to a November 2007 Stimson Center report, some UN operations require experienced police officers capable of conducting criminal investigations or officers with supervisory or administrative skills. According to this report, unlike states contributing troops or FPUs, potential police contributing countries lack incentives because the UN does not reimburse them for their individual police contributions. In addition, a UN official noted that it is difficult to find police for the UN with the necessary skills because these officers are in demand in their home countries. Limited resources for recruiting individual UN police add to this difficulty. In contrast with its reliance on member states to contribute and deploy FPUs as a unit, the UN recruits and deploys UN police individually. A senior UN police official stated that this task is time-consuming; he noted that he reviews an estimated 700 applications to find 30 qualified police officers for an operation. Recruitment is the responsibility of the 34-strong Police Division of the UN’s Department of Peacekeeping Operations, which also helps deploy the police components for new UN operations, sends members of this staff to the field to help with start-up of new operations, and supports and assists police components of existing UN operations. According to a senior UN official, current staff levels are not adequate to support these functions and undertake all recruitment, and the UN should have three to four times the support personnel that currently reside in the division. UN police officials also noted that supporting an additional operation would be beyond their current capacity. 
However, a strategic review of the functions and structure of the Police Division, which will include an analysis of the adequacy of current resource levels, is ongoing. Obtaining the FPUs required by its operations presents the UN with additional difficulties. These units, which are composed of law enforcement officers with expertise in crowd management and other tactical policing activities, require special training and equipment. For example, FPUs must undergo training in several areas before being eligible for deployment to a UN operation, including emergency medical services, use of nonlethal weapons and firearms, and crowd control and behavior. As of June 2008, only 11 countries provide full-sized FPUs to the UN, compared with the much larger number of countries that contribute UN police. According to a UN official, obtaining even one additional FPU is difficult. For example, it took a year to obtain an additional unit for the mission in Haiti. According to a conference report on international police issues co-sponsored by the U.S. government, if the UN plans to continue increasing its use of FPUs, this will require the development of a greater international capacity to deploy units that have been properly prepared for the demands of peacekeeping operations. The UN would likely need between 4,000 and 5,000 civilian staff for the potential new operation but would have difficulty obtaining these staff and retaining them once in place. Recruiting enough international civilian staff to fill the number of authorized positions in peacekeeping operations is difficult. From 2005 through early 2008, UN peacekeeping operations have had an average vacancy rate for international civilian staff of about 22 percent.
As of April 2008, the vacancy rate for all civilian staff for the sub-Saharan operations in Chad/Central African Republic and Darfur was over 70 percent, and the vacancy rate for international civilian personnel in the adjoining UN operation in southern Sudan was approximately 30 percent of its authorized level. Operations outside sub-Saharan Africa also have experienced high international civilian staff vacancy rates; the average vacancy rate for these operations ranged from 14 to 25 percent from 2005 through 2008. Some specialties are difficult to fill. In 2000, a UN report found critical shortfalls in key areas, including procurement, finance, budget, logistics support, and engineers. In addition, a 2006 UN report found a 50 percent vacancy rate for procurement officers in the field. The UN also has difficulty retaining the existing civilian staff in peacekeeping operations. About 80 percent of international staff have appointments of 1 year or less, and the turnover rate in the field is approximately 30 percent. In addition, about half of professional staff serving in peacekeeping operations have 2 years of experience or less. In September 2008, we reported that limited career development opportunities have contributed to the UN’s difficulties in attracting and retaining qualified field procurement staff. According to UN officials, turnover among field procurement staff has continued to hurt the continuity of their operations, and peacekeeping missions continue to face challenges in deploying qualified, experienced procurement staff, especially during the critical start-up phase. The UN has identified several problems in obtaining and retaining civilian staff for peacekeeping. First, nearly all civilian staff deployed to UN operations hold appointments limited to specific missions or are on loan from other UN offices as temporary duty assignments.
Most of these civilians cannot be redeployed from one mission to another in response to urgent needs, which limits the UN’s ability to launch new operations. Second, the UN has reported that the terms and conditions of service for civilians at field missions create inequities in the field. In March 2008, the UN reported that it has nine different types of employment contracts for field civilians, which set differing terms of service. Some operations do not offer the incentive of hardship pay. According to a UN official, it would be difficult to attract international staff and contractors to the potential new operation without better conditions of service. The UN has developed proposals to address these challenges. For example, in 2006, the UN Secretariat proposed establishing 2,500 career-track positions for expert and experienced technical civilian staff to serve in field missions. These staff would have the flexibility to move to different operations as needed. The UN Secretariat also proposed reducing the types of contracts offered to civilian staff and harmonizing conditions of service so that civilians serving in UN operations have similar benefits. As of September 2008, the UN was considering these proposals, according to a State official. The recent experiences of other UN operations in sub-Saharan Africa illustrate the challenges the potential new operation may face in terms of logistical requirements. First, it is likely that the UN will not be able to draw upon preexisting buildings for office space and staff accommodations. For example, UN planning standards assume that a host country could provide 40 percent of a new operation’s required accommodation space; however, a panel of UN officials from the Departments of Peacekeeping and Field Support stated that a host country in sub-Saharan Africa would likely be unable to provide any of the office space or accommodations needed.
As a result, a new operation such as the potential new operation could face the task of constructing accommodations from the ground up for approximately 10,000 people in and around the force headquarters. Second, poor infrastructure conditions would likely hinder the activities of the potential new operation; UN officials noted that road conditions for the potential new operation could resemble those facing Darfur, Sudan, and the Democratic Republic of the Congo, where poor or nonexistent road networks have strained the UN’s ability to move people, goods, and equipment. According to UN reports, the roads in Darfur are especially poor, supplies take an average of 7 weeks to travel the 1,400 miles from port to operation, and banditry along the roads compounds the problem. As a result, according to a UN official, the potential new operation would likely require engineering units with substantial road-building capabilities for each sector, but as noted earlier, engineering units are difficult for the UN to obtain. According to the UN, the four-month rainy season in the sub-Saharan region also complicates the challenge of supplying missions. Third, commercial opportunities for procuring goods and services would likely be limited given the potential new operation’s location in sub-Saharan Africa. Lack of local commercial opportunities has caused problems for other operations in the area. When items cannot be procured locally, they must be imported from abroad and sent to the operation, which causes delays and compounds the burden on the operation’s transport assets. For example, the Darfur mission’s slow deployment is partially due to lack of capacity in the local market to meet the cargo transport requirements of the operation. These challenges also would likely delay the start-up of the potential new mission.
As of September 2008, UN mission planning factors call for UN operations to begin with a rapid deployment phase in which the force would achieve an initial operational capability within 90 days of Security Council authorization. However, according to UN planning staff and documents, this objective is unrealistic. Operations in the Congo, Sudan, Darfur, and Chad required a substantial buildup of logistical military units before achieving initial operating capability. According to a UN report, arranging for the commitment and deployment of these units requires an expeditionary approach—the establishment and progressive buildup of the personnel, equipment, supplies, and infrastructure. One UN military planner estimated that arranging for and coordinating these complex logistical arrangements with existing UN planning resources added 6 months to the deployment process. The gaps between authorized and deployed levels of troops, police, and civilians—compounded by the logistics challenges—have affected ongoing operations. Some State and UN officials note that some gaps simply may be due to the time lag between securing and deploying forces. However, interviews with some officials from selected operations and our review of operation performance reports have demonstrated that the lack of troops, police, and civilians for existing operations has delayed or prevented some operations from undertaking mandated activities. The operation in Darfur, for example, has been unable to fully undertake many of its mandated activities, such as protection of civilians, due to a lack of military personnel. According to UN reports, lack of critical support units, such as helicopter, logistics support, and transport units has limited the operation’s ability to provide for its own protection, carry out its mandated tasks effectively, and transport equipment and supplies necessary to house and maintain the troops it has deployed so far. 
Moreover, the inability to secure these support units has delayed the deployment of some of the troops already committed to the operation for several months. The operation in Haiti lacked required levels of police, according to a UN official, and this lack decreased the support that could be provided to the Haitian National Police. Several operations have recently experienced civilian vacancies in key areas, affecting operation activities in the areas of public information, property management, medical services, and procurement. For example, officials at some missions stated that vacancies in procurement staff positions, particularly in supervisory positions, have impeded procurement actions and heightened the risk of errors. In general, according to a UN Secretary General report, the UN has not made progress in solving the problems with civilian staffing, and the resulting high civilian vacancy rates have put the organization at managerial and financial risk. In addition, challenges in the areas of logistics have also had an impact on existing operations. Lack of local procurement opportunities required the operation in Haiti to procure most needed goods and services from outside the country, creating delays for the operation that are difficult to overcome. For example, it took the operation some time to find a suitable headquarters building, and it required outside resources to bring the building up to UN standards of safety and security. The U.S. government, along with those of other countries, has taken some steps to help address UN challenges in obtaining troops and police for peacekeeping operations, primarily through the Global Peace Operations Initiative. The United States has also provided logistics support to specific UN operations and supports, in principle, UN proposals to address gaps between the number of authorized and deployed civilians.
State is required to report to Congress on the status and effectiveness of peacekeeping operations and provides some of this information through its monthly briefings to Congress. However, State has not provided information about troop and other gaps between authorized and deployed force levels—important elements of status and effectiveness—in its notifications or annual UN report to Congress. The U.S. government, along with those of other countries, has provided some help to address UN challenges in obtaining peacekeeping troops, police, civilians, and logistics requirements, both through GPOI and in response to specific UN mission needs. GPOI is a 5-year program begun by the U.S. government in 2004 in support of the Group of Eight (G8) countries’ action plan to build peacekeeping capabilities worldwide, with a focus on Africa. According to the State Department, efforts are underway to extend this program’s activities beyond 2010. The key goals of the program are to train and, when appropriate, equip military peacekeepers and provide nonlethal equipment and transportation to support countries’ deployment of peacekeepers. In June 2008, we reported that as of April 2008 GPOI had provided training and material assistance to about 40,000 of the 75,000 peacekeeping troops it intends to train by 2010. Approximately 22,000 of these troops, predominantly African soldiers, have been deployed to 9 UN peacekeeping operations, 1 UN political mission, and 2 AU peacekeeping operations. We also reported that GPOI is unlikely to meet all of its goals and that State was unable to assess how effectively its instruction was improving the capacity of countries to provide and sustain peacekeepers. In addition, the United States has initiated actions to address mission-specific gaps. For example, State and DOD formed the Missing Assets Task Force to conduct a global search for 28 attack and transport helicopters, logistics units, and other assets for the operation in Darfur.
As of December 2008, the task force was unable to obtain commitments for the helicopters. Through GPOI, the United States also supports efforts at the international Center of Excellence for Stability Police Units in Italy to increase the capabilities and interoperability of stability police to participate in peace operations. As of June 2008, the center had trained more than 1,300 of the 3,000 stability police instructors it intends to train by 2009. Moreover, State has allocated about $10 million for training and equipping FPUs deploying to Sudan. According to State and DOD officials, the United States has done little to help the UN address gaps between deployed and authorized civilian levels. According to State officials, the United States supports, in principle, UN internal efforts to address chronic gaps between civilian deployment and authorized staff levels by improving the terms of service for civilian peacekeeping staff, improving contracting arrangements and incentives for UN civilians, and developing a rapidly deployable standing civilian corps. However, a U.S. official noted in late September 2008 that these initiatives were still undergoing review by the UN and member states and that the U.S. position on the final initiatives could be influenced by the projected costs and other factors. In commenting on a draft of this report, State wrote that it is supporting reforms in personnel policy that will mitigate the difficulty the UN is having in recruiting critical international staff. However, the UN's comments on the draft stated that the general expression of U.S. support for the Secretary-General's human resources management reform proposals is welcome but somewhat at dissonance with the position presented by the U.S. delegation to the Fifth Committee of the General Assembly and at the ongoing regular sixty-third session of the General Assembly. The UN stated that the U.S.
delegation did not join the consensus reached by all other member states to streamline contractual arrangements, offer greater job security to staff in field missions, and improve their conditions of service. The UN also commented that at the regular sixty-third session of the General Assembly, the U.S. delegation proposed to significantly reduce allowances and benefits to new recruits and to staff to serve on temporary appointments in UN peacekeeping operations. The United States has helped the UN address logistical challenges both through GPOI and on a mission-specific basis. For example, GPOI supports an equipment depot in Sierra Leone that has provided nonlethal equipment to support the logistical training and deployment of African troops. State and DOD officials stated that they also have responded to specific logistics needs of UN operations. For example, State provided $110 million in fiscal year 2007 and 2008 funds to help troop-contributing nations deploy or sustain their forces in Darfur, including about $20 million worth of support to equip and deploy Rwandan troops as of September 2008. The U.S. government also responded to requests to provide transport and logistics assistance in 2006, resulting in the provision of additional support to help deploy troops from two countries to the UN operation in Lebanon. Federal law requires the President to report, to notify, and consult with Congress on UN peacekeeping operations. When the President submits his annual budget report to Congress, the law requires the President to provide Congress an annual report that assesses the effectiveness of ongoing international peacekeeping operations, their relevance to U.S. national interests, the projected termination dates for all such operations, and other matters. 
The law also requires that the President provide Congress written information about new operations that are anticipated to be authorized by the UN Security Council or existing operations where the authorized force strength is to be expanded or the mandate is to be changed significantly. The information to be provided is to include the anticipated duration, mandate, and command and control arrangements of such an operation; the total cost to both the UN and the United States; the planned exit strategy; and the vital national interest the new operation is to serve. The law also requires the President to consult monthly with Congress on peacekeeping. To comply with these requirements, State consults with Congress about peacekeeping through monthly briefings. At these briefings, State officials update Congress on the status of peacekeeping operations, such as progress being made in Darfur, the Congo, and Haiti, as well as the problems encountered, such as kidnappings in Port-au-Prince or incursions along the Chad-Sudan border discussed in the April 2008 monthly briefing. In some briefings, State provides updates on the progress in obtaining needed troops, police, and other resources. State also provides written notification to Congress about new peacekeeping operations that the United States expects to vote for in the Security Council and for operations where the mandate is significantly revised. For example, on August 30, 2006, State provided written notification to Congress that it had voted to approve the expansion of the UN operation in Lebanon, including increasing the troop level from about 2,000 to 15,000. Although they provide information about UN peacekeeping operations and their mandates, the annual reports to Congress and the notifications do not discuss potential successes or difficulties in obtaining the resources necessary to carry out the mandates.
For example, between January 2006 and October 2008, the Congress received 17 notifications about new or expanded peacekeeping operations, including missions in Haiti, Timor-Leste, Lebanon, Côte d’Ivoire, Sudan, Darfur, and others. All 17 provided information about the operations’ mandates, the forces authorized, the U.S. national interest served, and the exit strategy. None of the 17 reported on whether the UN had commitments for the troops, police, and the other resources required to carry out the mandate; whether there might be problems in obtaining them; or whether this information was known. Moreover, just 4 of 20 notifications regarding reprogramming of State Peacekeeping Operations funds in support of UN peacekeeping operations provided to Congress between January 2006 and September 2008 cited possible UN gaps in troops or equipment as part of the justification for this reprogramming. Furthermore, State’s 2006 and 2007 annual reports on peacekeeping included one sentence each on potential difficulties in attaining needed resources. This sentence stated that an ongoing challenge will be to ensure sufficient qualified troops for present and possible new missions. The law does not specify that information about the resources available to carry out the operations be provided. However, as this report has discussed, important elements of assessing the effectiveness, exit strategy, and mandate of operations would necessarily include a discussion of commitments made to provide the troops, police, and other resources needed to carry out the mandate; whether there would be problems in obtaining them; or whether this information is known. Through its peacekeeping operations, the UN is trying to build sustainable peace in some of the most unstable countries in the world. However, the UN has at times been unable to obtain the authorized level of resources, particularly specialized military units, police, and civilians.
This has hindered some operations from fully carrying out their mandates. In some cases, these gaps reflect the inability of member states to provide the needed resources. However, the gaps between authorized and deployed levels of civilians, specialized military units, full battalion strength contingents, and formed police units pose challenges to current UN operations as well as to the UN in deploying another large multidimensional operation. The United States government, along with other member countries, is helping the UN address the resource challenges. However, gaps in needed resource levels for current operations still exist and State has not reported to Congress about this issue. Congress may lack the critical information it needs to assess the effectiveness of ongoing operations or the challenges the UN may face when considering or fielding proposed new UN peacekeeping operations. Congress needs this information when considering Administration requests for funding and support for UN peacekeeping operations. To ensure that Congress has the information needed to conduct oversight and fully consider Administration budget and other requests for UN peacekeeping, we recommend that the Secretary of State include in the department’s annual report or in another appropriate written submission to Congress information about UN resource challenges and gaps in obtaining and deploying troops, police, and civilians authorized to carry out peacekeeping operations. The information should include commitments to provide these resources, difficulties in obtaining them, and whether the gaps have impeded operations from carrying out their mandates. If the information is not available when an appropriate written submission is sent to Congress, we recommend that State ensure the information is provided, as available, during its consultations with Congress. 
The Department of State and the UN provided written comments on a draft of this report, which are reprinted in appendices III and IV. State commented that the report reflects a very thorough inquiry into the increase in and developing nature of international expectations of United Nations peacekeeping. State also commented that our recommendation should not specify in which reports to Congress the information on peacekeeping gaps should be included. Our draft recommendation specified that State should provide the information in annual reports to Congress and Congressional notifications. We agree that this may be too prescriptive but believe the information should be provided in writing; therefore, we modified our recommendation to allow the information to be provided in appropriate written submissions to Congress. The UN commented that it fully concurred with the conclusions of our report and appreciated the recognition that UN peacekeeping operations should be properly resourced and that mandates should be aligned with those resources. State and the UN also provided technical and general comments, which we addressed in the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretaries of State and Defense, and the United Nations. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
Our review focused on four objectives related to the evolution of peacekeeping operations and the United Nations’ (UN) capacity to deploy new operations. Specifically, in this report, we examine (1) the evolution of UN peacekeeping operations in the past 10 years; (2) the characteristics of a potential new peacekeeping operation, given this evolution and UN planning scenarios; (3) the challenges, if any, the UN would face in deploying this potential new operation; and (4) U.S. efforts to support and report on UN peacekeeping. We analyzed the evolution of peacekeeping operations from 1998 to 2008. We chose this time frame because it is the most recent 10-year period and the period during which the UN initiated major peacekeeping reforms, such as the response to the Brahimi report. Also, during this period, the UN articulated its approach and rationale for multidimensional peacekeeping. In the Secretary General’s report, No Exit without Strategy, the UN states that to facilitate sustainable peace, a peacekeeping mission’s mandate should include elements such as institution building and the promotion of good governance and the rule of law. To analyze the evolution of UN peacekeeping operations from 1998 to 2008, we reviewed UN documents, including UN Security Council resolutions containing operation mandates; budget documents with information on resource requirements; and other UN reports. We also obtained UN data on troop, police, and civilian deployments and World Bank data on political instability. We analyzed the variation in troops, police, and civilians from 1998 to 2008 to identify trends in mission size and scope. We analyzed the variation in civilian deployments from 2000 to 2008 because complete UN civilian data by operation were not available for earlier periods. 
We categorized each mission as traditional or multidimensional, based on the number of mandated tasks and whether the mandated tasks were traditional, such as observing cease-fires, or ambitious, such as helping restore government institutions. We met with UN officials in the Department of Peacekeeping Operations and the Department of Field Support to discuss changes in the nature of operations. We also reviewed previous GAO reports and used the distinction they made between traditional and multidimensional operations. To illustrate the change in the types of countries where the UN launched peacekeeping operations in 1998 and 2008, we collected and analyzed data from the United Nations Development Program’s Human Development Index from within 5 years of the start date of each operation. To show the specialized capabilities and increased number of civilians required by recent operations, we used the 2008 Annual Review of Global Peace Operations, conducted by the Center on International Cooperation’s Global Peace Operations program at the request of and with the support of the Best Practices Section of the UN Department of Peacekeeping Operations, augmented by UN operation deployment maps. To describe the stability of the countries in which peacekeeping operations were deployed in 2008, we used the World Bank’s Governance Matters indicators. To determine the characteristics of a potential new peacekeeping operation, we used a combination of trend analysis and UN contingency planning documents. The trend analysis described in the preceding paragraph provided us with an average of nine mandated tasks. We then reviewed current UN contingency plans for a multidimensional operation that included these tasks and selected this plan to provide detailed requirements for the potential new operation. 
In developing requirements for a potential new operation, we worked with UN peacekeeping officials from several offices, including military planning, budget, logistics, civilian personnel, and police, to review the parameters of the operation. For further details on the potential new operation, see appendix II. To assess the challenges the UN would face in deploying the potential new operation, we reviewed a variety of UN documents, met with UN officials in New York, held teleconferences and interviews with UN officials deployed to operations, and met with State Department officials in Washington, D.C., and New York and DOD officials in Washington, D.C. Our analysis discusses challenges to deploying a potential large, multidimensional operation. It does not assess challenges to deploying a smaller, less capable operation. To determine the challenges the UN might face in obtaining troops, we analyzed UN data on troop contributions; consulted academic research on troop contribution patterns; spoke with various UN officials in the Department of Peacekeeping Operations, including officials in Force Generation Services; consulted a variety of UN reports, including Secretary General reports on particular operations; and reviewed past GAO reports. We assessed the gap between authorized forces and deployed forces by comparing current authorized UN force levels with monthly deployment data for troops, military observers, and police through September 2008. We assessed the number of infantry battalions and specialized units deployed by reviewing the most current individual operation deployment maps available—ranging from March to October 2008. We reported the number of leased and contributed aircraft based on September 2007 data, augmented with September 2008 data for the Darfur operation. 
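The authorized-versus-deployed comparison described above reduces to a simple percentage-shortfall calculation. The sketch below is illustrative only; the function name and the figures are hypothetical, not actual UN deployment data:

```python
def deployment_gap(authorized, deployed):
    """Return the shortfall between authorized and deployed personnel
    as a fraction of the authorized level."""
    if authorized <= 0:
        raise ValueError("authorized level must be positive")
    return (authorized - deployed) / authorized

# Hypothetical example: an operation authorized 10,000 troops with
# 8,000 deployed has a 20 percent gap.
print(round(deployment_gap(10_000, 8_000), 2))  # 0.2
```

Applied month by month to each operation's authorized and deployed levels, this kind of calculation yields the gap figures discussed in the report.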
To assess challenges in obtaining police, we analyzed UN data on police contributions; met with officials in the Police Division of the Department of Peacekeeping Operations; consulted reports and studies completed by research institutions and training centers; and spoke with a UN official at the mission in Haiti. To assess challenges in recruiting and deploying civilians, we analyzed UN data on civilian vacancy rates by mission and position; spoke with UN officials in the Field Personnel Division of the Department of Field Support; and reviewed the many UN reports addressing civilian staffing issues released between 2000 and 2008. To describe potential challenges in meeting logistical requirements, we met with several UN officials in the Departments of Peacekeeping Operations and Field Support, including at a roundtable discussion of our potential new mission; reviewed UN reports on particular peacekeeping operations; and analyzed UN documents related to Strategic Deployment Stocks and the UN Logistics Base. We determined that data from the UN’s peacekeeper troop- and deployment-reporting systems are sufficiently reliable for the purposes of our report, which is to support findings concerning the challenges the UN may encounter when addressing the gaps between authorized and deployed levels of uniformed and civilian UN peacekeepers. To analyze U.S. efforts to help support UN peacekeeping, we reviewed U.S. reports on peacekeeping, including GAO reports and State Department budget submissions and reports on peacekeeping. We also obtained all notifications to Congress on reprogramming funds for peacekeeping from January 2006 through September 2008. There were a total of 77 notifications, 17 of which announced new or expanded peacekeeping operations. The others provided information on reprogramming funds in the Peacekeeping Operations Account. 
We analyzed these notifications for funding shifts and the information provided to Congress about the peacekeeping operations, such as operations’ mandates, exit strategies, U.S. national interests served, and gaps between the level of resources required and the level provided. We also obtained the annual 2006 and 2007 peacekeeping reports to Congress and reviewed them for the same issues. We compared our analysis of these documents with the reporting standards for peacekeeping under 22 U.S.C. § 287b. We conducted this performance audit from September 2007 to December 2008 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To identify requirements for the potential new operation, we reviewed UN planning scenarios for one that provided a reasonable basis for a potential operation, as validated by (1) our analysis of trends in peacekeeping since 1998 and (2) our examination of the scenarios’ components. Our analysis is not intended to predict the size, scope, or location of a new UN peacekeeping operation. A new operation’s mandate and resource needs would be determined by the UN Security Council and the circumstances particular to the country to which it is deployed. Therefore, the requirements of a new operation would likely differ from those of the potential new operation presented here. We first examined the 17 operations deployed or enlarged since 1998 and identified 18 categories of tasks included in the mandates of one or more of these operations. We then determined these 17 operations had on average nine mandated tasks. 
To construct a possible mandate for our potential new peacekeeping operation reflecting these trends, we selected the nine tasks from the list of 18 categories that most frequently appeared in the mandates of the previous 17 operations. These include restoring the rule of law and supporting elections (each included in the mandates of 11 of the 17 operations) and restoring government institutions (present in 10 of the 17 mandates). We identified one UN planning scenario that was a close match to these trends. As table 3 shows, this planning scenario has nine mandated tasks that are consistent with the most common historical tasks since 1998. Seven of the tasks were similar or identical. Two tasks in the UN planning scenario—facilitating political agreements and supporting disarmament and demobilization—were not among the nine most common historical tasks but were frequent tasks of the 17 operations since 1998. The UN planning scenario is located in sub-Saharan Africa. We validated that sub-Saharan Africa is the modal location for a potential operation: 10 of the 17 operations deployed or expanded since 1998 were in this region, and 7 of the 11 operations deployed since 1998 and still ongoing are located there. We thus used this UN planning scenario as the basis for the potential new operation. This analysis acknowledges that the mandate, resource requirements, and location of a new UN operation would be contingent on actual events, and its characteristics may differ to an unknown extent from those presented in the UN planning scenarios used for this assessment. The UN planning scenario identified political and environmental conditions in the area of operation and specified the troop and police numbers for the operation. 
The assumptions in the UN planning scenario are that the government is weak, the location would lack roads and other infrastructure, UN troop contingents would operate in a high-threat environment, and the operation would function at a high tempo with active military patrols. We validated these as reasonable assumptions by (1) reviewing U.S. and UN reports about locations in sub-Saharan Africa, (2) reviewing UNDP reports on political instability and level of development in sub-Saharan Africa, and (3) interviewing UN officials who had surveyed the area. The UN planning scenario calls for 27,000 troops and military observers deployed in six locations in the country. The scenario also calls for specialized military units, such as logistics, transport, and aviation units. To validate whether this scenario was reasonable, we met with UN officials in the Department of Peacekeeping Operations Offices of Military Affairs, Police Affairs, Planning Service, Strategic Military Cell, Force Generation Services, and others. We discussed, in detail, the planning scenarios and the planning process used to generate them, including the fact that some field survey work had been conducted. We obtained and reviewed documents on force requirements for similar operations, such as Darfur. We found that the requirements, such as the need for special military units, were consistent between these operations and the UN planning scenario. We reviewed the UN planning guidelines, the UN survey mission handbook, and lessons learned reports for procedures, requirements, and best practices for standards in planning operations. On the basis of this work, we validated as reasonable the deployment of 21,000 troops in five sectors for the potential new operation. 
As table 4 shows, we eliminated one sector from the potential new operation because it was primarily mandated to observe and monitor a cease-fire and thus constituted an independent operation with a different mandate rather than part of the potential new operation. The UN planning scenario calls for 1,500 police, of which 700 would be deployed in five formed police units. We validated this as reasonable based on interviews and briefings with UN officials in the police division and our review of reports and data on UN police in peacekeeping operations. According to the UN officials, the estimate is based on their experience, a technical assessment mission, the population size, the tasks for the UN police, and the capacity of the local police. These officials also said that more information about the local police would be important in developing a more precise estimate of required police and formed police units. The UN scenario did not estimate the needed civilian staff. We estimated that the potential new operation would require 4,000 to 5,000 civilian staff, based on interviews and data provided by UN officials. UN officials noted that a lower bound estimate for a large operation would be about 3,000 civilian staff. However, these officials also stated that considering the potential new operation’s mandated tasks, force size, and security environment and comparisons with operations in the Congo, Darfur, and Sudan, a more reasonable estimate is 4,000 to 5,000 civilians. In comparison, the 2008 to 2009 proposed budget for the operation in the Congo had an authorized military component of 18,931 and an authorized civilian component of 4,934, 24 percent of whom were international civilians. The proposed budget for the operation in Sudan had a military component of 10,715 and a civilian component of 4,260, 23 percent of whom were international civilians. 
The proposed budget for the Darfur operation had a military component of 25,507 and a civilian component of 5,557, 27 percent of whom were international civilians. The UN planning scenario did not estimate logistics requirements. UN officials stated that, due to the absence of detailed planning in the field, resource requirements for the potential new operation are difficult to calculate and infrastructure costs are unknowable at this time. These officials stated that the best estimate of logistics requirements and challenges would come from the experiences of other operations in sub-Saharan Africa, such as Sudan and the Democratic Republic of the Congo. In the 2007-2008 peacekeeping fiscal year, those operations had budgeted between about $420 million and $425 million for supplies, transport, and facilities. However, these operations have been close to full deployment levels for 2 or more years, and the actual logistics requirements for a potential new force could be significantly less in the first year, depending upon the rate of deployment for the troops, the resources required to achieve initial operational capability for each mandated task in each sector, whether sectors would be established simultaneously or in sequence, and many other factors. In Darfur, for example, less than 50 percent of authorized forces had been deployed as of October 2008, about 10 months after the start of the operation. In contrast, the augmented force in Lebanon deployed 70 percent of its authorized force level within the first 4 months. On the other hand, some logistics requirements, such as the transport in and establishment of facilities for the initial force, may be greater for a new operation in its first year in comparison with these mature operations, according to UN officials. 
Moreover, UN officials indicated that the equipment needs and initial logistics capabilities of individual infantry battalions would be comparable to those deployed to Darfur; they provided mission resource requirements for those units. For example, as in the case of Darfur, we found it reasonable to assume that many of the operational units for this potential new peacekeeping operation would need to come from countries capable of providing supplies for the first 60 to 90 days after deployment, given the limitations on local infrastructure expected in this environment.
1. We agree that the UN has conducted large peacekeeping operations prior to 1998. However, we selected the time period 1998 to 2008 for our review because it represents the most recent decade of growth in UN peacekeeping activities as well as major UN initiatives to reform peacekeeping operations. Most notably, this period reflects the implementation of the Brahimi peacekeeping reform efforts and the UN’s No Exit Without Strategy approach that the UN articulated in 2001.
2. We added information that describes UN peacekeeping reform efforts.
3. We have expanded our discussion of the process for establishing a peacekeeping operation.
4. We have reworded the sentence to reflect this comment.
5. We added this information to the report.
6. We agree and have noted the limitation in the report.
7. We added this information to the report.
8. We have reworded the section to reflect the UN’s comment.
9. We have substituted alternative language.
11. We added this information to the report.
12. We added information to the report to reflect the UN and U.S. positions on UN human resource reform policy.
13. We added information to the report to reflect the UN and U.S. positions on UN human resource reform policy.
14. We modified the text to delete the word “failure.” We already discuss UN field staff proposals in another section. 
The United Nations deployed approximately 109,000 personnel to 16 UN peacekeeping operations as of September 2008. Table 5 indicates the location, personnel distribution, and mandate type and size of each operation. UN peacekeeping operations have required increasingly large numbers of combat capable battalions, aircraft for both transport and combat support, and other support units. As of November 2008, 30 countries are providing 76 battalions of infantry peacekeeping troops, including 21 battalions of mechanized infantry. Twenty-five of these same countries also provide helicopters or support units in addition to infantry battalions; another 12 countries provide only helicopters or support units. Table 6 reflects the current number and type of operational battalions and company-sized or larger support units required by 9 of the 16 UN peacekeeping operations ongoing as of November 2008. The data for the UN operation in Darfur (UNAMID) include units authorized but not yet deployed. Unit numbers and country of origin reflect deployment data reported by the individual UN operations between March and November 2008. In addition to the person named above, Tet Miyabara, Assistant Director; B. Patrick Hickey; Marisela Perez; Jennifer Young; Lynn Cothern; and David Dornisch made key contributions to this report. In addition, Ashley Alley, Jeremy Latimer, and Monica Brym provided technical assistance.
The United Nations (UN) supports U.S. interests in maintaining international security by deploying and operating 16 peacekeeping operations in locations in conflict, including Darfur, Lebanon, and Haiti. Over the past 10 years, the number of deployed UN personnel increased from about 41,000 peacekeepers and civilian staff to about 109,000 in 2008. In this report on the UN's capacity to deploy further operations, GAO was asked to examine (1) the evolution of UN peacekeeping operations in the past 10 years; (2) the likely characteristics of a potential new peacekeeping operation, given this evolution; (3) the challenges, if any, the UN would face deploying this operation; and (4) U.S. efforts to support and report on UN peacekeeping. GAO reviewed UN documents, developed a methodology to assess the requirements for a potential new operation with UN assistance, interviewed UN headquarters and mission officials, and assessed U.S. government documents on UN peacekeeping. UN peacekeeping operations since 1998 have taken on increasingly ambitious mandates, been located in more challenging environments, and grown in size and scope. UN operations in 1998 averaged three mandated tasks, such as observing cease-fires; in 2008, they averaged nine more ambitious tasks, such as restoring government institutions. Operations in 2008 were located in some of the world's most unstable countries, were larger and more complex than in 1998, and deployed thousands of civilians. Based on trends in peacekeeping and recent UN planning options, GAO analysis indicates that a potential new operation would likely be large and complex, take place in sub-Saharan Africa, and have nine mandated tasks. This potential new operation would likely require member states to contribute 21,000 troops and military observers, including those in engineering and aviation units, and 1,500 police to carry out the mandate. The UN would likely need to deploy 4,000 to 5,000 civilians. 
The operation's logistics needs also would be large and complex. The ability to fully deploy any potential new operation would likely face challenges, in view of current UN resource constraints. As of September 2008, ongoing UN operations had about a 20 percent gap between the troops and military observers authorized to carry out operations and actual deployments. For police, the gap was about 34 percent; it was similar for civilians. Some gaps reflect UN difficulties in obtaining and deploying resources to carry out operations. Lack of these resources, such as special military units, prevented some operations from executing their mandates. Lack of infrastructure in the potential new operation's environment would challenge the UN's ability to meet the operation's logistical needs.
The federal government’s civilian real property holdings include hundreds of thousands of buildings and permanent structures across the country that cost billions of dollars annually to rent, operate, and maintain. Within this portfolio of government-owned and leased assets, GSA plays the role of broker and property manager for many civilian agencies of the U.S. government. The Administrator of GSA is authorized by law to enter into lease agreements, not to exceed 20 years, on behalf of federal agencies. The administrator delegates leasing authority to GSA regional commissioners, who further delegate authority by issuing leasing warrants to lease contracting officers. GSA manages its inventory through 11 regional offices and its central office, located in Washington, D.C. While GSA’s Office of Portfolio Management is responsible for establishing the strategies and policies for GSA’s real property portfolio, its regional offices are generally responsible for conducting day-to-day real property management activities, including leasing, in their regions. Federal management regulations specify that when seeking to acquire space for an agency, GSA is to first seek space in government-owned buildings and vacant space already under lease to the government. If suitable government-controlled space is unavailable, GSA is to acquire space in an efficient and cost-effective manner. As shown in figure 1, the square footage of property leased by GSA has steadily increased in recent years, while the amount of federally owned space held by GSA has remained steady. The process for acquiring leased space, as outlined in GSA’s Public Building Service Leasing Desk Guide, begins when GSA receives a request for space from a federal agency. Using this guide—which provides guidance on implementing federal property regulations—GSA officials then work with an agency to fulfill the specific requirements for the space, including the square footage and any geographic limitations. 
According to GSA guidance, developing and finalizing these details should take anywhere from 2 to 8 months, depending on the complexity of a tenant agency’s space needs. After this initial stage, the Leasing Desk Guide estimates that approximately 18 to 24 months are needed to procure new leased space. During this time, GSA takes a number of steps to complete a lease acquisition (see fig. 2). During the lease acquisition process (“leasing process”), GSA compiles and shares iterative estimates of the leasing costs with tenant agencies pursuing space. Prior to the advertisement step, GSA and each tenant agency involved sign a draft occupancy agreement detailing the estimated costs associated with a lease. At the conclusion of the process—when the actual costs of leasing a specific space are known, following the “build-out and acceptance” step—GSA and the agencies execute a final occupancy agreement associated with a specific lease, which allows agencies to budget for future payments. GSA is required to take additional action for prospectus leases—in 2014, those new leases with a net annual rent above $2.85 million. For these leases—also known as “high-value” leases—GSA must submit a prospectus, or proposal, to the House and Senate authorizing committees for their review and approval. Given this additional requirement, GSA’s Leasing Desk Guide suggests the lease acquisition process for high-value leases begin 3 to 5 years prior to lease expiration. The prospectus should include the purpose and location of the lease, as well as basic information about the space to be leased, including an estimate of the maximum cost to the government of the space and a statement of the rent currently being paid by the government for federal agencies to be housed in the space. This information assists Congress in overseeing GSA’s management of its real property portfolio. 
Typically, these prospectuses are drafted in the GSA regional offices and reviewed and approved by GSA’s Office of Portfolio Management. The prospectuses are then reviewed and approved by the Office of Management and Budget prior to being provided to congressional authorizing committees—the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure. In 2013, we reported that there were 218 active GSA high-value leases, which accounted for about one-third of GSA’s net annual rent costs; GSA’s overall lease inventory included more than 8,300 leases as of August 2015, 4 percent of which had current annual rents above the $2.85 million threshold. Once GSA executes a lease on behalf of a tenant agency and an occupancy agreement with the agency, that agency is required to pay rent to GSA for the space it occupies. Rent payments are deposited into the Federal Buildings Fund (FBF), a fund established by the Public Buildings Act Amendments of 1972. Congress provides annual limits on the amount GSA may obligate to provide a range of real property services. As of February 2015, the FBF had an unobligated balance of $3.6 billion. Included in federal agencies’ monthly rent is a fee to GSA for its services related to leased space; as of 2015, tenants paid a fee of 5 or 7 percent of their lease value, based on the level of flexibility the agency had in canceling the agreement. Depending on the extent to which leased space must be altered for an agency to fulfill its mission, the costs of the necessary improvements (“tenant improvements”) are paid by the tenant to the lessor through GSA. Tenant improvement costs include changes to walls, electrical outlets, telephone lines, and secure rooms that need to be made by the lessor between the time that GSA executes the lease and the point when the tenant agency takes occupancy. 
GSA officials said that it is standard practice for tenant agencies to amortize these costs over the lease term and noted that this approach is similarly used in the private sector. GSA assigns each tenant to one of six tiers, which equate to the standard improvements that tend to be required to prepare a space to support the mission and activities of a particular agency. In late 2009, GSA began to reform its leasing by simplifying the lease acquisition process, among other changes. For example, for leases below $150,000 in annual rent, GSA introduced a “simplified lease” model to allow a more efficient way to process documents customized for lower-value leases. GSA included this and other reforms in its Leasing Desk Guide of policies and procedures, which was introduced in April 2011. While GSA’s fiscal year 2015 budget request stated that one of the agency’s strategic objectives is to procure leased space on behalf of federal tenants at or below market rates, our analysis of recent GSA office leases across all regions performed for this report found that about half of the rates negotiated in recent years exceeded market rates at the time the leases were executed. Although GSA analyzes its lease rates against market rates over the full term of the lease, we limited our assessment to the point in time the leases were executed, as this is the moment at which actual market rates were known. Our review of 714 new GSA leases finalized between 2008 and 2014 found that about half exceeded their local market’s average rate for similar space by 10 percent or more. This phenomenon varied across the 11 GSA regions, with some regions performing better than others. In general, three GSA regions were in line with market values for rents, five GSA regions executed leases that were on average at or below market, and three GSA regions had rates that exceeded local market rates on average. 
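The at-or-below-market comparison described above reduces to a premium-over-market calculation applied lease by lease. A minimal sketch follows; the function names and the rates per square foot are hypothetical, not actual GSA lease data:

```python
def premium_over_market(lease_rate, market_rate):
    """Fraction by which a negotiated rate exceeds (positive) or falls
    below (negative) the local market average rate."""
    return (lease_rate - market_rate) / market_rate

def share_exceeding_market(leases, threshold=0.10):
    """Share of (lease_rate, market_rate) pairs whose premium is at or
    above the threshold (10 percent in the analysis described above)."""
    over = sum(1 for lease, market in leases
               if premium_over_market(lease, market) >= threshold)
    return over / len(leases)

# Hypothetical annual rates per square foot: two of the four leases
# exceed their local market average by 10 percent or more.
sample = [(33.0, 30.0), (28.5, 30.0), (36.0, 30.0), (30.0, 30.0)]
print(share_exceeding_market(sample))  # 0.5
```

A region's average premium across its leases gives the at-, below-, or above-market characterization used for the 11 regions.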
Specifically, figure 3 shows how the rates negotiated by GSA regional offices compared to relevant market rates in the years the leases were executed. GSA officials across all regions stated that they utilize a number of tools to establish a range of market rental rates for each lease, including a report specifically tailored for each transaction with market information, analysis, and insight regarding the relevant location. GSA could more consistently achieve market rates or better if there were more competition for its leases. The Federal Management Regulation requires that federal agencies acquire leased space at rates consistent with prevailing market rates through full and open competition. As we reported in 1995, this is designed to ensure that all responsible sources are allowed to compete and serves as the government’s primary price control mechanism. At a June 2015 hearing, a top GSA manager stated that GSA’s ongoing lease reform effort includes plans to reduce costs by increasing competition. However, according to our interviews with officials across all GSA regions as well as private sector stakeholders, competition among private lessors is currently limited by the following factors: Restricted geographic area: GSA regional officials said that tenant requirements for the location of a leased space reduced the number of buildings that qualify, thereby limiting competition for GSA leases. In some cases, an agency’s requested geographic area may be so restricted that it does not include a single building that meets all the tenant’s requirements. For example, for one lease we reviewed, a tenant agency was forced to twice widen the geographic areas it initially requested in order to find the space ultimately leased. GSA officials from this region said that these starts and stops necessitated by the narrow geographic area requested by the tenant resulted in repeating steps of the leasing process, increasing related costs and the overall schedule. 
Although officials from GSA’s central office told us that GSA has not typically questioned the appropriateness of an agency’s location limitations, GSA regional officials said that broadening the area deemed suitable for potential properties—while still ensuring that the agency can pursue its mission—is one of the best ways to increase competition for a lease. In September 2015, an internal GSA memo to all regions detailed a renewed policy through which GSA is taking the initiative to suggest geographic areas to tenant agencies. This memo states that GSA will consult with regional officials on several factors in order to designate geographic areas and, when agencies submit future space requests to GSA, this policy will help to enhance competition. Specialized building requirements: GSA regional officials said that competition for GSA leases is further limited by federal agencies’ specific building requirements. These requirements—including things such as parking spaces, ceiling heights, or security setbacks—affect the number of qualifying properties left in a position to bid for the GSA lease. GSA regional officials said some property owners do not want to make the investment in their building to meet a tenant agency’s security requirements when they can choose to lease their space to a non-federal tenant. In the previously mentioned case wherein a tenant agency was forced to twice widen the geographic areas it initially requested to find space, the tenant agency’s space request included elevators and a large amount of square footage on the first floor, requirements that played a key role in limiting competition. GSA could not meet these requirements in the original geographic area requested, and alternatives including strip malls or warehouses were not viable for the agency. As a result, GSA had to look more broadly across the market to identify a qualifying space that met all of the agency’s requirements.
Presence of unique clauses: Private sector leasing representatives and regional GSA officials also said that clauses in GSA leases not found in private industry leases make them less competitive. For example, private sector officials we spoke with told us that the substitution clause—which gives GSA the unilateral right to substitute any other tenant for the original intended tenant—deters some landlords that do not want to risk the possibility that the replacement agencies may not be compatible with the existing tenants. At a workshop hosted by GSA in June 2015, GSA officials said that they were considering changes to the substitution clause, which they said causes risk with lenders and financiers. However, in October 2015, GSA officials told us that they would not consider changes to the clause without proof that it increases costs or harms building owners. Regardless, the flexibility which such a clause is designed to offer GSA and its federal tenants is not regularly exercised, according to GSA data. Further, officials from one GSA regional office explained that, while compliance with state laws also exists in a private sector environment, they also have to comply with policies specific to federal buildings. Other clauses mentioned by GSA regional officials as being unique to GSA leases include federal energy efficiency and conservation clauses; private sector representatives also cited GSA’s fire and casualty clause, which immediately terminates a GSA lease if a building in which space is located is totally destroyed by fire or other casualty. The presence of these additional requirements could also make renting to private sector tenants more desirable for some property owners. Lengthy leasing process: The lengthy GSA leasing process—when compared to the private sector—also has the potential to reduce competition for GSA leases. 
From the date of GSA’s initial cost estimate to the point when an agency took occupancy, the 11 GSA leases we reviewed took, on average, almost 4 years to complete, with some taking as long as 6 to 8 years. GSA took more than 2 years to complete the leasing process for 10 of the 11 leases we reviewed, and building owners receive no rent during this protracted process, causing some owners to drop out of the competition and likely leading others not to bid, knowing the long time frames involved. Officials from GSA headquarters told us that the amount of time to execute a lease is only one of several variables for assessing GSA performance and that shorter lease execution time frames are not always better; a short time frame could, for example, reflect a rushed, poorly planned process. However, the extent of GSA’s administrative obligations and paperwork can cause delays. For example, a regional GSA official said that the time required for federal tenants to approve architectural drawings often exceeds that for private sector tenants and, while some more sophisticated lessors will include the costs of these delays in their price, this can deter some potential lessors from even submitting a bid. Officials from GSA’s central office said that although GSA’s goal is to meet or beat private sector leasing rates, federal leases are different than private sector leases and some of the differences can make it difficult to compete with private sector leasing rates. Specifically, these officials stated that: GSA must procure leases based on specific award factors governed by federal law, a process that can discourage competition for these leases or cause lessors to price their risk accordingly; decisions related to private sector leases are more flexible and could be based on individual preferences alone, thus making them less cumbersome and more desirable to some property owners.
For a federal lease, a property owner must sign a GSA-drafted lease document and abide by its requirements; conversely, the contractual instrument used in a private sector lease is provided by the owner and crafted according to its terms. While a private sector tenant generally must leave a property when a lease expires, federal agencies often continue to occupy leased space after the expiration of a lease term, often in holdover status, without the contractual right to occupy the space, if the government and lessor are working out details or disagree. GSA officials told us that as a result of these differences, federal leases are often more complex than private sector leases for some property owners, and the pricing of that complexity and business risk for private owners can subsequently translate into greater expense for federal agencies. GSA officials said that increasing the term of GSA leases is a key part of GSA’s efforts to reduce leasing costs, but our analysis found that longer terms do not necessarily lead to lower costs in the first year of leases. Based on our analysis of the agency’s data, GSA typically negotiates relatively short-term leases—that is, those with guaranteed terms of fewer than 5 years in length. While GSA considers 80 percent of the 4,258 leases we reviewed to be 10-year leases or longer, many of these leases have a 5-year guaranteed (“firm”) term followed by an optional (“soft”) term. The private sector views leases structured in this way as 5-year leases because that is the only part that is guaranteed. Figure 4 shows that 70 percent of the new GSA leases we analyzed, finalized from 2008 through 2014, had firm terms of 5 years or less. Conventional wisdom—according to both GSA officials and private sector real estate professionals—holds that shorter term leases are typically more costly. However, based on our sample, it is not clear that GSA leases with shorter firm terms actually do cost more than those with longer firm terms.
In the last year, GSA has been encouraging agencies to obtain longer leases with a 10-year firm term where appropriate and, in June 2015, a GSA manager testified that GSA plans to extend lease terms to 10 years or longer in order to reduce costs. However, our analysis of new GSA leases executed on behalf of federal agencies from 2008 through 2014 found no direct financial benefit based on the length of the firm term of a lease in the base year that the leases were signed. This lack of cost savings is attributable, in part, to greater tenant improvement costs that are more prevalent in longer-term leases. While it may not reduce costs directly, increasing the number of leases with longer firm terms, as GSA plans to do, could offer other benefits to GSA. While GSA officials in the regions said that leases with firm terms of 5 years or less provide flexibility to tenant agencies that may not need the space for long periods, few agencies take advantage of the flexibility of the 5-year lease. The average length of time that federal agencies remained in space leased through GSA was more than 23 years— possibly through multiple occupancy agreements—for the GSA leases that expired between 2001 and 2014. In addition, given GSA’s lengthy leasing process, short-term leases cause challenges for GSA and tenant agencies. For the 11 leases we reviewed in depth, the average time needed to complete a GSA lease was nearly 4 years (an average of 3 years for standard leases and more than 5 years for high-value leases), ranging from more than one year to more than 8 years. Further, this process may soon take longer: GSA officials said one aim of GSA’s ongoing lease reform is to begin the process even earlier—at least 36 months before the expiration date for standard leases. This would mean beginning the process shortly after the start of an agency’s initial occupancy of a space for a 5-year lease. 
As stated earlier, new leases often involve costs related to the customization of the space known as tenant improvements, which are usually amortized over the term of the lease. Tenant agencies can fund these costs in two ways: (1) pay for the improvements at the outset, prior to moving into the space, when negotiations between GSA and a property owner permit or (2) amortize the costs of the improvements over time during the lease financed by the building owner. GSA regional officials said that nearly all tenants choose to amortize their basic tenant improvements over the firm term of the lease, and the analysis of GSA leases performed for this report supports this assertion. Nearly 60 percent of leases in our full data set of 4,285 leases involved tenant improvement costs—all of which opted to amortize at least some of these costs over a period during the lease. Both GSA and commercial real estate firms tend to amortize the costs needed to prepare a leased space for tenant occupancy over the firm term of the lease and ask landlords to assume the risk—GSA doing so on behalf of its tenant agencies—of customizing a space according to specific requirements. Because private owners that lease to the federal government assume this responsibility and obtain the resources required to construct, operate, and maintain real property over the course of its lifecycle, federal agencies then pay private sector interest rates as they pay for their improvements over the firm term of their GSA lease. The overall cost of leasing office space increases considerably when agencies opt to amortize their tenant improvement costs over time instead of paying them at the outset. When agencies amortize their tenant improvements during their lease, they pay substantial sums to private lessors in the form of interest based on the rates GSA negotiates with private lessors on agencies’ behalf. 
In this approach, tenant agencies pay not only the sum of the principal and interest, but also additional GSA fees—either 5 percent if they are in a non-cancelable occupancy agreement with GSA or 7 percent if they are in a cancelable agreement—typically over the firm term of the lease. Nine of the 11 leases we reviewed had tenant improvement costs, and because all 9 amortized these costs, more than one-third of the costs related to these improvements were directed toward interest. These 9 leases incurred an average of $1.7 million in interest costs related to tenant improvements. In total, these 9 leases incurred $39 million in tenant improvement costs, of which nearly 40 percent ($15 million) was due to interest paid to private lessors. For example, in one lease we reviewed, the tenant agency chose to amortize its $2.1 million of tenant improvement costs over the life of a 15-year lease at a 9 percent interest rate, which will ultimately cost $4.0 million after including both the $1.7 million to be paid in interest charges and GSA’s 5 percent fee on those charges. The agency could have saved 45 percent, more than $1.8 million, over the term of its GSA lease if these costs had been paid at the outset. Additional examples from our analysis are illustrated in figure 5. Although agencies typically lack the resources to fund improvements at the outset of a lease, according to GSA officials, there may be opportunities to reduce overall federal leasing costs by identifying funds to reduce the amount of interest paid to private lessors. The Federal Management Regulation states that the basic real estate acquisition policy is to acquire real estate in an efficient and cost-effective manner. We have previously reported that lack of capital to finance real property investments, including tenant improvements, has been a long-standing challenge for GSA and other federal agencies.
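The arithmetic behind the $2.1 million example can be sketched with a standard amortization formula. This is an illustrative sketch, not GSA's actual pricing model: it assumes monthly compounding over the full firm term and applies the 5 percent GSA fee to the financed payments, conventions the report does not spell out.

```python
def amortized_ti_cost(principal, annual_rate, years, gsa_fee_rate=0.05):
    """Estimate the total cost of financing tenant improvements over a lease.

    Assumes standard monthly amortization; the report does not specify
    the compounding convention GSA actually uses.
    """
    r = annual_rate / 12                       # monthly interest rate
    n = years * 12                             # number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)
    total_paid = payment * n                   # principal plus interest
    interest = total_paid - principal
    fee = gsa_fee_rate * total_paid            # assumed: GSA fee on financed payments
    return total_paid + fee, interest

# The $2.1 million, 15-year, 9 percent lease discussed above:
total, interest = amortized_ti_cost(2_100_000, 0.09, 15)
print(f"total: ${total:,.0f}, interest: ${interest:,.0f}")
# Under these assumptions the results land near the report's figures of
# roughly $4.0 million total and $1.7 million in interest.
```

Under these assumptions, the difference between the financed total and the $2.1 million principal is what paying at the outset would avoid, consistent with the savings the report describes.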
However, identifying sources of capital to fund tenant improvement costs at the outset would reduce federal agencies’ leasing costs. One possible option to reduce the costs paid by tenant agencies could be to provide budget authority for GSA to finance the capital needed for tenant improvements to be paid at the outset of a new lease and have the tenant pay it back over the term, without the interest charges a tenant agency currently pays. For example, it is possible that GSA could use available balances from the Federal Buildings Fund (FBF) to fund tenant improvement costs, with sufficient controls in place, at the outset of a lease. The FBF is administered by GSA and was established in 1972 as the primary source of funds for operating and capital costs associated with federal space. GSA collects rent from tenant agencies, deposits it into the FBF, and uses that money—as authorized by Congress—to fund real property acquisition, operation, maintenance, and disposal. The FBF has contained unobligated balances for several years and, as of February 2015, the fund had an unobligated balance of $3.6 billion. However, GSA does not currently have the budget authority to use the unobligated balances in the FBF to fund tenant improvements. GSA officials said that the concept of funding agencies’ tenant improvements using unobligated FBF balances has potential, but also said that GSA has not formally considered this approach. They said that applying unobligated balances in this way has the potential to save substantial amounts of money on interest charges that are currently passed on to federal tenants, but that the risks and opportunities would need to be fully studied. GSA also requires most tenants to sign cancelable occupancy agreements, which can also increase federal leasing costs for agencies and may not be needed. GSA regularly requires tenant agencies to sign cancelable occupancy agreements that allow them to vacate the leased property under certain circumstances.
Non-cancelable tenant agreements require the tenant agency to pay rent on the leased property for the entire firm term of the lease. GSA charges more in administrative fees (7 percent of total rent instead of 5 percent) for cancelable tenant agreements to account for the higher risk of having to replace a tenant before the end of a lease. However, according to GSA’s Pricing Desk Guide, GSA does not allow agencies to decide whether the cancelable agreement warrants the higher fee; rather, GSA reviews each leased space and determines whether to designate its agreement as cancelable or non-cancelable. Officials from GSA headquarters told us that their regional officials determine whether an agency’s agreement should be non-cancelable and, further, these regional officials do so based on whether they think they will be able to find a replacement tenant, not on the tenant’s likelihood of canceling. For example, GSA officials hypothesized that it may be difficult to find a replacement tenant for a Transportation Security Administration leased space located beyond the security line at an airport, thus the tenant’s agreement with GSA would likely be non-cancelable. However, the importance of routinely including the right to cancel in short-term leases is questionable, and we believe that tenant agencies are in the best position to decide how best to meet their consolidation objectives. As mentioned earlier, the Federal Management Regulation states that the basic real estate acquisition policy is to acquire real estate in an efficient and cost-effective manner. GSA officials said that most agencies agree to pay the additional 2 percent in management fees in exchange for the flexibility that a cancelable agreement gives them. However, officials from two GSA tenant agencies we interviewed said that this built-in flexibility can be useful, but they also said that they rarely exercise the right to cancel their agreements with GSA.
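To make the fee differential concrete, a small sketch shows what the cancelable-agreement premium amounts to over a firm term. The 5 and 7 percent rates are from the report; the rent figure and term are hypothetical.

```python
def gsa_admin_fee(total_rent, cancelable):
    """GSA administrative fee: 7 percent of rent for cancelable occupancy
    agreements, 5 percent for non-cancelable ones (rates per the report)."""
    return total_rent * (0.07 if cancelable else 0.05)

# Hypothetical lease: $1 million in annual rent over a 5-year firm term.
rent_over_firm_term = 1_000_000 * 5
premium = (gsa_admin_fee(rent_over_firm_term, True)
           - gsa_admin_fee(rent_over_firm_term, False))
print(f"premium for the right to cancel: ${premium:,.0f}")
# The 2-percentage-point difference costs $100,000 over the firm term,
# for a cancellation right that, per the report, tenants rarely exercise.
```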
Officials at GSA headquarters also said that tenants rarely exercise their right to cancel their occupancy agreements; as one official at a GSA tenant agency explained, this is often because their agency usually considers its space requirements when it is already nearing lease expiration. Moreover, according to GSA, 83 percent of GSA leases were connected to at least one cancelable agreement as of July 2015, while about 70 percent of the more than 4,200 GSA leases considered in our broader analysis have a firm term of 5 years or less. Based on GSA’s policy of starting the leasing process from 18 months (standard) to 5 years (high-value) before a lease expires, an agency seeking to cancel its agreement would need to spend a substantial portion of a 5-year period working to arrange its move to a different space, reducing the likelihood that it would cancel early. The actual costs of selected standard leases we reviewed generally exceeded GSA’s initial cost estimates. The reasons for these overruns, discussed later in this section, included lack of competition for GSA leases and changes in agencies’ space needs during the leasing process. As shown in the figure, six of the seven standard leases we reviewed exceeded their initial rent per square foot cost estimates and four of these exceeded their estimates by more than 10 percent; overruns ranged from 6 percent to 90 percent greater than the estimates GSA provided to its tenant agencies. GSA’s cost estimates for standard leases have limited oversight mechanisms. Regional GSA officials indicated that the primary requirement for increasing the estimated leasing costs is updating the occupancy agreement with the tenant agency. Officials from the two GSA tenant agencies we interviewed said that GSA’s initial estimates often vary from the final cost; officials from one of these agencies told us that GSA’s early cost estimates are usually significantly lower than true market leasing rates.
This can complicate agencies’ ability to effectively plan their budgets and identify other sources of funding after making decisions based on GSA’s initial leasing cost estimates. However, the officials also said that they are unable to pursue other options, as they do not have independent leasing authority to procure similar leases without the assistance of GSA. Among the 11 GSA leases we reviewed in depth, GSA’s initial cost estimates were more accurate for high-value leases that require a prospectus subject to congressional authorization than the cost estimates were for standard leases. In fact, as figure 7 shows, agencies’ actual leasing costs per square foot were within 10 percent of GSA’s initial estimates for 3 of the 4 high-value leases that we reviewed. GSA regional officials said that because the prospectus process for high-value leases involves congressional authorization, it increases their accountability for developing accurate initial estimates: if actual costs were to substantially exceed the estimated costs approved in the prospectus, GSA would have to obtain additional budget authority from Congress. The actual costs of three of the four high-value leases we reviewed were slightly below GSA’s estimates. While the final rental rate for one of the high-value leases exceeded GSA’s per square foot estimates by about 20 percent, GSA kept the total cost of the project within authorized costs by reducing the overall amount of space it leased. Nonetheless, requests for additional budget authority from Congress for high-value leases remain rare. GSA officials said that the agency does not track the number of amendments made to authorized high-value leases or how many have been resubmitted to Congress since 2000, only noting that it has rarely, if ever, happened. Both these officials’ statements and our results suggest that requiring congressional authorization for high-value leases may help control cost growth for these leases.
The congressional approval requirement may be more appropriate for high-value leases because of the larger dollar amounts involved. Although just 4 percent of GSA’s overall lease inventory had current annual rents above the $2.85 million threshold as of August 2015, they account for 44 percent of GSA’s total net annual leasing costs. According to both GSA regional officials and representatives of GSA tenant agencies, factors similar to those that played a role in causing higher rates for GSA leases can also cause costs to exceed GSA’s initial estimates: Officials from GSA regional offices stated that a lack of competition for GSA leases can directly contribute to variations between initial cost estimates and actual leasing costs. As stated earlier, limited numbers of qualifying properties and the reluctance of some building owners to bid for GSA leases can limit competition. When fewer offers are made by owners of appropriate space, GSA regional officials said that initial cost estimates are less likely to align with actual leasing costs. For example, officials from one GSA region told us that the time when offers come in from the market after the solicitation is when rental rates fluctuate the most—and is when the ultimate rental rate could fall on either end of the range set forth in the initial estimates. The lengthy GSA leasing process also leaves GSA and federal agencies vulnerable to increased leasing costs. As stated earlier, the 11 GSA leases we reviewed took almost 4 years from the point the lease acquisition process started to completion on average, with some standard leases taking from more than 3 years to nearly 6 years for GSA to fulfill all agency requirements to ready the space for occupancy. The longer this process lasts, the more local lease markets and inventory can change, reducing the reliability of initial estimates. 
Officials from one GSA tenant agency said that the leasing process can be made longer in some cases because GSA is currently limited to seeking existing space on behalf of agencies. These officials suggested that also allowing the consideration of new construction could increase competition for GSA leases and potentially reduce the need to re-advertise lease requests on agencies’ behalf. Changes to agencies’ space needs during the GSA leasing process can also negatively affect the accuracy of initial cost estimates and hinder GSA’s negotiating power. According to GSA’s leasing guide, each tenant agency is obligated to plan and budget for its upcoming space needs 18 to 36 months before the space is needed, further lengthening the period of time over which agencies’ needs can change. For example, officials from a GSA tenant agency said that their staffing levels and rental requirements can fluctuate during the typical 24 to 36 months of GSA’s lease acquisition process and that such changes can increase space requested and increase costs. GSA also uses federal agencies’ initial leasing specifications to estimate tenant improvement costs, but the actual prices are negotiated after the lease is awarded. As a result, GSA regional officials said that unanticipated, tenant-driven changes can add time and increased costs to a project even after a lease is awarded. Officials from that tenant agency said that GSA’s leasing cost estimates prove to be more accurate for projects that do not experience any significant delays or changes in scope. Officials from one GSA tenant agency told us that when leasing costs exceed GSA’s initial estimates, it creates budgeting challenges—not only because it can force them to identify additional sources of funding when costs are greater than initially anticipated, but also because there is not a reliable pattern in terms of variance between GSA’s estimates and their actual leasing costs. 
However, as stated earlier, tenants contribute to some of the causes that result in actual leasing costs exceeding GSA’s initial estimates. Leasing has comprised a growing share of GSA’s portfolio over the last 15 years and will likely continue to remain a significant part of the federal property management system. While owning is preferable when an agency’s needs are stable, and leasing is better suited to variable needs requiring flexibility, it is also important to reduce leasing costs. GSA’s goal is to lease space at or below local market rates, but we found that GSA paid rates 10 percent or more above market rates in the first year about half of the time from 2008 through 2014. GSA has taken steps to reform leasing—and those efforts continue. As part of its ongoing leasing reform efforts, GSA hopes to reduce its lease rates by increasing the length of the terms of its leases, but increasing the competition for GSA leases may yield more benefits. GSA regional officials who negotiate the agency’s leases said that increasing competition for GSA leases is key to achieving market rates, but circumstances often combine to drive down competition. Factors such as a tenant agency’s need for space in restricted geographic areas and specialized building requirements can limit the number of properties that qualify. Reducing barriers to competition, where possible, could increase the number of property owners bidding on GSA leases and, in turn, contribute to GSA more consistently obtaining lease rates at or below market rates. GSA could also reduce the costs federal tenants pay for their leases. GSA tenants often pay high interest charges—up to 9 percent, among leases we reviewed—to finance their finishes, known as tenant improvements. Identifying sources of capital to allow tenants to fund tenant improvements at the outset of their leases could reduce these specific costs related to leasing by a third—saving millions of dollars for some leases.
The Federal Buildings Fund (FBF) has unobligated balances that, with sufficient controls, could help fund tenant improvements. However, further study of the risks and opportunities is needed before GSA seeks additional budget authority from Congress to use FBF balances in this way. GSA could also reduce leasing costs for its federal tenants by allowing them the option of committing to stay in their space for the full term of their agreement in exchange for a lower fee, particularly for short leases. Currently, GSA requires most tenants to sign cancelable occupancy agreements. However, it takes tenants years to plan and budget for a move, and more than two-thirds of the GSA leases we reviewed have firm terms of 5 years or less, reducing the likelihood of their canceling occupancy agreements early. As part of its lease reform efforts and to increase possible cost savings, we recommend that the GSA Administrator take the following steps:
1. Fully explore strategies to enhance competition for GSA leases by encouraging tenant agencies to broaden their allowable geographic areas and to limit their specialized building requirements to those justifiably unique to the federal government.
2. Seek to reduce leasing costs for federal agencies by:
Exploring, with relevant stakeholders, the possibility of loaning unobligated Federal Buildings Fund balances to agencies to cover tenant improvement costs that would otherwise have to be financed for new leases. If GSA finds that, with sufficient controls in place, tenant improvements can be safely funded this way, it should participate in the development of a legislative proposal to request that Congress make the necessary budget authority available.
Allowing tenant agencies the option of choosing non-cancelable occupancy agreements with lower administrative costs, particularly for leases with firm terms of 5 years or less.
We provided a draft of this report to GSA for review and comment.
We also provided a draft of this report to two GSA tenant agencies that we spoke with during our review: the Social Security Administration (SSA) and the Department of Justice (DOJ). GSA provided written comments that are reprinted in appendix II. SSA provided a letter, reprinted in appendix III, stating that it had no comments on our report. DOJ provided technical clarifications, which we incorporated where appropriate. GSA agreed with the recommendation to fully explore strategies to enhance competition for leases by encouraging tenant agencies to broaden their allowable geographic areas and limit their specialized building requirements to those justifiably unique to the federal government. GSA stated that it has been working aggressively to maximize competition in its leasing program and is increasing national oversight and support for lease planning, project management, and procurement activities. With regard to our recommendation to seek to reduce tenants’ interest costs by exploring the possibility of loaning unobligated FBF balances to agencies to cover tenant improvement costs that would otherwise have to be financed for new leases, GSA agreed to evaluate its existing authorities to determine if the FBF could be used to fund tenant improvements. GSA did not agree with the recommendation to allow tenant agencies the option of reducing administrative fees by choosing non-cancelable occupancy agreements, particularly for leases with firm terms of 5 years or less. GSA stated in its letter that it is responsible for assigning and reassigning space to support agency space requirements, balancing risk and flexibility to manage the leased portfolio. GSA’s letter also noted that having the flexibility to return underutilized space is an important tool in meeting the administration’s objective to consolidate space and reduce the federal real property footprint. We agree that agencies should consolidate space as appropriate and contribute to meeting this objective.
However, as our review and analysis show, this flexibility costs tenant agencies an additional two percent of the value of the lease, although tenant agencies rarely use it. Even if a tenant agency exercises its right to return space to GSA before the end of a lease, it may be particularly difficult to realize savings for leases with firm terms of 5 years or less; GSA’s 3-year process to plan and execute leases severely limits the extent to which tenants could cancel leases of 5 years or less, and any savings would likely be offset by the higher GSA fees these agreements require. Further, in such situations, tenant agencies are given no say in a decision that requires them to pay additional fees. Ultimately, we believe tenant agencies are in the best position to decide how to meet their consolidation objectives. Thus, if agency officials determine that their agency may benefit from the flexibility to cancel all or part of an agreement before the end of the lease, they should be able to choose to pay for that flexibility, rather than being required to do so. We are sending copies to the appropriate congressional committee and the Acting Administrator of GSA. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to assess: (1) the extent to which the General Services Administration (GSA) achieves market leasing rates for its leases and how overall federal leasing costs could be reduced, and (2) how GSA’s cost estimates for selected leases compared with the actual costs of leasing paid by federal tenants. 
To determine the extent to which GSA achieves market leasing rates for its leases, GAO contracted with a professional services real estate firm, selected through a competitive process based on previous experience and cost, to analyze data on GSA and private sector real property leases. The scope of data considered for this objective included information on both private sector and GSA leases commencing in calendar years 2008 through 2014, which were compared and analyzed to determine any differential in lease payments attributable to specific differences in the leases, based on their financial performance. GSA provided GAO with a data set that included all of its active leases from its multiple data sources, which was aggregated using GSA’s unique lease identifiers. The data provided by GSA were reviewed by the contractor for inconsistencies and discussed during interviews with knowledgeable agency officials to assess the appropriateness of their use. GAO also reviewed the contractor’s report, including the steps taken to assess the reliability of the GSA data used and the types of analysis conducted, and found them to be sufficiently reliable. Incomplete and inconsistent data, as well as outliers, were excluded, though the accuracy of the individual lease data points provided by GSA was not independently verified. The analysis conducted was consistent with how GSA currently handles leases, uninfluenced by agency practices and policies no longer in effect. For example, revisions to processes from the National Broker Contract, Advanced Acquisition leases, and industry outreach sessions have affected GSA practices, thus impacting how leases were negotiated and executed. Per the data provided by GSA, there were 8,499 GSA leases in effect as of December 31, 2014. Of this population, 5,160 leases commenced after January 1, 2008, and were initially considered as part of the analysis period. 
We considered only GSA operating leases, not prospectus-level (“high value”) leases. We also examined how the data were dispersed among regions, how many leases were full service versus not full service, and how many leases were added to each region per year, and we identified outliers; the result was 4,285 remaining leases matching the following criteria for analysis: classified as 100 percent office space, full service rents, and executed within the dates of analysis (2008 to 2014). These 4,285 leases comprised the universe of leases upon which all analysis was performed. These leases were then broken down into subsets for specific analysis. The pool of leases analyzed for each of the clauses and actions varies based on the compliance of those leases with the criteria for that pool. There was symmetry in the dispersion, again indicating a valid sampling of leases. Misclassified properties were identified within this data set, and outliers were not analyzed or included in the subsequent data sets. The rental rates per square foot analyzed were all-inclusive. The collection of private sector lease data for markets nationwide for each year of the analytical period included the collection and consideration of: Quarterly and annual published market reports for 151 markets and submarkets nationwide, collected from 58 national and regional brokerage companies and used to summarize market rent based on actual leases and listings in each market and submarket. Fifteen private sector office leases for national companies, often publicly traded and viewed as credit-worthy tenants by brokers and landlords who were parties to the lease. Office building operating expenses with detailed line item breakdown for 90 commercial business district and suburban submarkets from the Institute of Real Estate Management (IREM), for urban and suburban buildings of 40,000 to 99,999 square feet and 100,000 to 250,000 square feet. 
Typical lease structures, including base term and options with corresponding tenant improvement allowances and commission structures compiled from broker interviews. This private sector lease information provided the foundational knowledge of the quantified and qualified performance of private sector office markets in each market over the analysis period. To assess the appropriateness of the private sector data, the contractor reviewed brokerage reports from several sources for each market and conducted interviews with 35 private sector real estate brokers nationwide using a semi-structured interview protocol, which covered topics such as typical years of a lease, market practices, and responsibility for various expense and service fees. In addition, GAO reviewed the source of the broker market reports and clarified analytical steps with the contractor, including comparability of tenants between the private sector leases and GSA leases. As a result of these steps, the private sector lease data were found to be sufficiently reliable. Grouping of the private sector lease data occurred in several analyses: City and submarket analysis to identify a single market rental rate for Class A and for Class B office rents for each year of the analysis for comparison against GSA base year rates. Matching of private sector markets within the Top 50 Metropolitan Statistical Areas (MSAs), as defined by the Office of Management and Budget, and markets covered by IREM to identify the portion of rent typically attributable to shell rent, operating expenses, and tenant improvement costs. Identification of IREM operating expenses for the same year as the GSA lease base year for comparison between GSA negotiated operating expenses and market operating expenses. Identification of whether GSA leases were equivalent to, above, or below market in each region and each year. 
For this analysis, the contracted firm compared the base year contract rent for a GSA lease to the market rent cited from brokerage reports as of the date of lease commencement. The number of leases incorporated into each of the data points for GSA leases was a weighted average of the actual rental rates for each lease signed in that year. There were four parts of the data GSA provided that, when added together, resulted in the rental rate paid by GSA: shell rent, general tenant improvements, custom tenant improvements, and operating expenses. The analysis for comparative performance of GSA leases and private sector leases involved: abstracting GSA leases based on GSA data, disaggregated to differentiate between shell rental rate, general tenant improvements, custom tenant improvements, and operating expenses; considering the net present value of the GSA leases, using the total cost of the lease and payments each year, discounted back at the Office of Management and Budget (OMB) discount rate of 2.80 percent (nominal 10 year rate); considering nominal total cost of lease for both GSA and private sector leases; and plotting GSA lease and private sector lease performance in each market (such as by size, location, year executed) to identify any variations based on lease terms. From the full data set, a sample of 714 GSA leases was selected from the MSAs from which we had an abundance of market and operating information, across all GSA regions, to be compared with data from published brokerage reports in specific markets and submarkets to determine the extent to which GSA received a comparable private-sector market rent for its leases. These leases were selected based on the locations within the counties identified within each MSA. The match to MSA was to capture the significant published market data available to benchmark. 
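The discounting step described above can be sketched briefly. This is an illustrative calculation, not the contractor's actual model: the 2.80 percent rate is the OMB nominal 10-year discount rate cited in the report, but the payment stream is hypothetical.

```python
def lease_npv(annual_payments, discount_rate=0.028):
    """Discount a stream of annual lease payments to present value.

    The 2.80 percent default is the OMB nominal 10-year rate used in
    the contractor's analysis; the payment figures below are invented.
    """
    return sum(
        payment / (1 + discount_rate) ** year
        for year, payment in enumerate(annual_payments, start=1)
    )

# Hypothetical 10-year lease paying $500,000 per year: the nominal
# total is $5.0 million, but the discounted (NPV) total is smaller,
# which is why discounting matters when comparing lease structures.
npv = lease_npv([500_000] * 10)
print(f"{npv:,.0f}")
```

Considering both the nominal total and the NPV, as the contractor did, distinguishes leases that merely defer payments from those that genuinely cost less over the full term.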
The specific submarket and building class of each lease were matched from the brokerage report, and the difference in rent was calculated between GSA contract rent and the brokerage report market rent. The market rent for each MSA was researched at the city and the submarket level and then further refined for Class A and Class B properties. Matched pairs of GSA leases were also analyzed for cases in which all factors of leases were highly comparable, except for the length of the term. The characteristics by which leases were matched include the commencement year of the lease, the submarket, building class, and square footage. This analysis involved the isolation of specific lease terms, such as length of term or period of amortization of tenant improvement costs, to determine differentials from the net effective rent, then capturing any variation in the isolated terms. By isolating the single factor and term, differences attributable to this factor would be apparent as all other factors were the same. There were 96 leases matched in groups of two and three for locations nationwide. In the matching analysis, termination rights for both 5- and 10-year leases were in place and rental rate was broken out by shell rent, general tenant improvement costs, and custom tenant improvement costs. The first portion of this analysis was based on the base year contract rent against base year contract rent for each pair. As the leases were in the same submarket and of the same size, the premise was that the two properties were competitive and comparable to each other and that the only differentiator was the term of the lease. The second portion addressed the net present value over the full term of the lease, assuming that termination rights were not exercised and that the leases were intact for the complete term. This analysis enabled the reduction in rent after the firm term to be recognized for those leases with amortized tenant improvements. 
Further analysis was performed where the firm 10-year term was isolated and compared against a 10-year lease with a 5-year termination right (i.e., a 5-year firm term with a subsequent 5-year soft term). The data were sorted by lease year, then square footage, region, and shell rent. The leases were viewed to analyze the difference in net present value (NPV) when the 10-year firm and 10-year with soft term leases were compared, and also when the two leases begin with approximately the same dollars in shell rent or have equivalent shell rent, tenant improvement costs, and operating expenses. A breakdown was created for the leases by region for the number of firm five/soft five versus firm 10-year term for the 10-year leases. The analysis reviewed the entire data set for termination rights and extracted leases with 3-, 5-, and 10-year terms to compare total costs and the discounted net present value of total costs. To determine how GSA’s cost estimates compare with the actual costs of leasing paid by federal tenants, we analyzed GSA’s leasing process, reviewed lease documentation, and interviewed key GSA staff for selected leases. We also conducted interviews with officials from all 11 GSA regional offices, as well as the two tenant agencies with the most leases in our sample—the Department of Justice and Social Security Administration—regarding their experiences with the GSA leasing process. In addition, we reviewed GSA leasing policies and guidance and interviewed officials from GSA headquarters about the lease procurement process. 
Further, we analyzed documentation and estimated the actual costs over the active terms of 11 selected GSA leases—one from each of GSA’s regions. To begin, we reviewed GSA’s publicly available inventory of leased property to understand what data were easily accessible and to determine which criteria were available for consideration. After downloading the October 2014 version of this GSA data file on November 20, 2014, we reviewed related documentation and interviewed relevant GSA officials to assess the appropriateness of these data and found them to be sufficiently reliable. We then identified criteria for selection of leases for inclusion, ensuring that we had a range of leases with different characteristics from the overall lease population of 8,510 leases that would serve as a useful sample of GSA leased properties for this engagement. These initial criteria were: type of property identified as 100 percent office space; latest lease action identified as “New” or “New/Replacing”; current annual rent of either (a) equal to or greater than the 2014 Prospectus threshold of $2.85 million (“high value” leases) or (b) greater than $500,000 and less than $2.85 million (“standard leases”); lease effective date of either (a) between January 1, 2000 and December 31, 2005 or (b) between January 1, 2010 and October 31, 2014; GSA regional office overseeing lease; and current annual rent divided by rentable square footage. 
After identifying the criteria for selection of leases for analysis, we used the criteria to categorize the GSA data into four sets: high-value leases with lease effective date between January 1, 2000 and December 31, 2005 and current annual rent equal to or greater than $2.85 million; high-value leases with lease effective date between January 1, 2010 and October 31, 2014 and current annual rent equal to or greater than $2.85 million; standard leases with lease effective date between January 1, 2000 and December 31, 2005 and current annual rent greater than $500,000 and less than $2.85 million; and standard leases with lease effective date between January 1, 2010 and October 31, 2014 and current annual rent greater than $500,000 and less than $2.85 million. For each GSA region in each of these four data sets, we then identified the GSA lease with the highest cost per square foot for potential further cost analysis. Using information requested and received from GSA on the 40 remaining leases, we identified the tenant agencies for which GSA has leased the properties. To finalize the list of 11 GSA leases—one from each GSA region—to be analyzed, we applied additional criteria to the list of 40 leases. This step was taken to ensure (1) that the final list included at least 5 high-value leases and at least 5 standard leases, (2) that the leases ultimately selected were not concentrated in only a few federal agencies or departments, and (3) that the final list was broadly representative of GSA leases in general. In addition, we considered the inclusion of leases with and without termination rights to ensure that leases with the active possibility of the invocation of these rights were included. The final list of 11 GSA leases analyzed—which are spread among 6 federal agencies—is shown in table 1. 
After identifying the 11 leases to be analyzed, we requested relevant documentation on each of them from GSA, which we reviewed to develop specific questions for regional GSA officials knowledgeable about the history and details of each individual lease. In addition to developing questions for regional GSA officials on each of the 11 leases after receiving lease documentation from GSA, we determined which lease cost milestones were appropriate points of comparison to analyze estimates over the GSA leasing process. To determine how GSA’s lease cost estimates change over time, we interviewed GSA officials from all 11 regions in February and March 2015 to determine when in the lease procurement process cost estimates are developed and when cost estimates are provided to the tenant agency for review. In these discussions, GSA regional representatives identified three milestone points in the leasing process when written estimates of a particular lease’s costs are provided to the tenant agency. These milestones are (1) the end of the requirements development phase, when tenant agencies authorize GSA to move forward with lease procurement; (2) lease signing, when the tenant agency agrees to the terms of the lease; and (3) occupancy, when the tenant agency takes possession of the leased space, GSA accepts the space, and rent payments begin. GSA regional representatives told us that an occupancy agreement detailing estimated lease costs and lease terms is shared with a tenant agency at each of these points, and we contacted officials from each of GSA’s 11 regions to request copies of all of the occupancy agreements associated with each of the 11 leases being analyzed. 
The GAO team reviewed these agreements to determine which were associated with each of the three milestone points and, when the documentation did not include all three of the milestones identified as key points of the process, we also utilized information in supplementary lease documentation to identify cost estimates made at the three milestone points and associate each with the related agreement. For example, regional GSA officials were unable to provide copies of agreements from the initial requirements development phase for 5 of the 11 leases. For these cases, we identified the estimates included in either a lease’s acquisition plan or the prospectus documentation for high value leases as alternative cost estimates or milestones. For leases with multiple occupancy agreements associated with a single GSA lease, we considered the circumstances based on conversations with regional officials. In one case, we examined the cost estimates based on weighted average costs, on a rent per square foot basis, for the entire lease. In another case, we ensured that we did not include cost estimates that included additional space that was not considered in the crafting of initial cost estimates because GSA could not reasonably have anticipated the tenant agency’s desire for additional space or the availability of adjacent space in the market. To analyze federal agencies’ leasing costs over time, we compared the terms and costs for the 11 selected leases across the three milestone points: the initial cost estimate, the cost estimate at the time of lease execution, and the final cost when the agency takes occupancy of the space. GSA’s occupancy agreements with tenant agencies identify a “charge basis” on each agreement, which is the total rentable square feet associated with the agreement and, because this number is the square footage metric that GSA presents to its tenant agencies, we used this as well. 
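Where one lease had several occupancy agreements, the weighted-average step described above can be sketched as follows. The rates and square footages are hypothetical; only the approach of weighting by each agreement's rentable square footage (its "charge basis") comes from the report.

```python
def weighted_rent_per_sqft(agreements):
    """Average rent per rentable square foot across the occupancy
    agreements making up one lease, weighted by each agreement's
    rentable square footage (its "charge basis")."""
    total_rent = sum(rate * sqft for rate, sqft in agreements)
    total_sqft = sum(sqft for _, sqft in agreements)
    return total_rent / total_sqft

# Hypothetical lease split across two occupancy agreements,
# given as (annual rent per rentable sq ft, rentable sq ft):
rate = weighted_rent_per_sqft([(30.00, 40_000), (36.00, 10_000)])
print(rate)  # 31.2 -- pulled toward the larger agreement's rate
```

Weighting by square footage prevents a small, expensive agreement from distorting the cost picture for the lease as a whole.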
In order to compare leases with different terms, the rent per rentable square foot for the first stabilized year of occupancy for each lease was calculated. This calculation allows a lease estimate that began in January to be compared to a lease estimate that began in June; both were compared on a 12-month, annualized, basis. The line items included in our review of cost estimates were shell rent, amortized tenant improvement costs—both general and custom—and operating costs. Real estate tax costs were also considered, if applicable. The total amount of tenant improvement costs paid by tenant agencies was calculated using the data on these costs and the interest rate information provided by GSA regional officials. For the purposes of this analysis, GAO has considered the lease costs that are paid by GSA to the private sector landlord monthly during the lease. Although lease payments are due monthly, the stated rent per square foot is the total annual cost of the monthly payments. The tenant in the lease pays these costs as a pass-through, as well as a management fee to GSA. However, the management fee is not considered in our analysis of GSA’s cost estimates because it is determined by GSA policy. We conducted this performance audit from August 2014 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Keith Cunningham, Assistant Director; Chad Williams, Analyst-in-Charge; Alex Lawrence; Mary Pitts; Amy Rosewarne; Crystal Wesco; Terence Lam; Josh Ormond; and Elizabeth Wood made key contributions to this report.
More than half of GSA's 377 million square feet of space were leased from the private sector as of 2014. While GSA strives to match or beat private sector leasing rates, it is important to identify any opportunities to increase efficiency and reduce costs. GAO was asked to review GSA's leasing costs. This report examines (1) the extent to which GSA's leases achieve market rates and how overall federal leasing costs could be reduced and (2) how GSA's cost estimates for selected leases compared with the actual costs of leasing paid by federal tenants. GAO compared the rates of a sample of 714 GSA leases to market rates; analyzed selected leases for office space across all 11 GSA regions in more detail; and interviewed officials from all GSA regions and 2 GSA tenant agencies, as well as private sector real estate representatives. GAO contracted with a real estate consultant for the market rate analysis. GAO found that the General Services Administration's (GSA) lease rates exceeded comparable market rates for many of 714 leases reviewed. Specifically, a review of these leases from 2008 through 2014 determined that about half exceeded their local market's average private sector rate for similar space by 10 percent or more. According to officials from all 11 GSA regions and private sector stakeholders, GSA is unable to more consistently achieve lower rates because competition among private lessors for these leases is limited; this limited competition is due to factors including tenant agencies' requested geographic areas and specialized building requirements, as well as the length of GSA's leasing process. For example, an agency's initial requested geographic area may be so restricted that it does not include any buildings that meet all tenant requirements, resulting in increased costs and time as GSA explores alternatives. 
In addition, overall federal leasing costs increase when tenants finance needed improvements to newly leased space—called tenant improvements—over time. GSA tenants routinely amortize these costs over the term of their leases and pay interest rates of up to 9 percent to the building's owner. Because GSA's tenants lack sufficient upfront capital, they chose to amortize their tenant improvements for all nine of the leases GAO studied that included those costs. In total, these 9 leases will incur $15 million in interest fees to be paid to private owners—nearly 40 percent of the total paid for these tenant improvement costs. GSA manages a fund—the Federal Buildings Fund, which pays rent and other real property costs—with sufficient unobligated balances to loan tenants enough funds to cover tenant improvement costs and avoid paying private sector interest, but GSA does not have budget authority to fund such costs. GSA also requires most of its tenants to sign cancelable occupancy agreements, which permit tenants to vacate leased space under certain circumstances in exchange for a higher fee paid to account for the risk of GSA's possibly having to find a new tenant for the space. However, the importance of routinely including this built-in flexibility for short-term leases is questionable, as it is not often exercised. Allowing tenants the option of choosing non-cancelable agreements would reduce tenant fees. The actual leasing costs paid by tenant agencies exceeded GSA's estimates for 7 of the 11 leases finalized from 2000 to 2014 that GAO reviewed in more detail. Seven of those leases were “standard” leases (costing less than $2.85 million in annual rent, as of fiscal year 2014) and four were “high value” (costing more than $2.85 million). For 4 of the 7 standard leases, tenants' actual leasing costs exceeded GSA's estimates by more than 10 percent. 
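The interest arithmetic behind these figures follows from standard level-payment amortization. A rough sketch: the 9 percent rate echoes the top rate cited in the report, while the $1 million principal and 10-year horizon are hypothetical.

```python
def amortized_annual_payment(principal, annual_rate, years):
    """Level annual payment that repays the tenant improvement
    principal, with interest, over the firm term of the lease."""
    if annual_rate == 0:
        return principal / years
    return principal * annual_rate / (1 - (1 + annual_rate) ** -years)

principal = 1_000_000  # hypothetical tenant improvement cost
payment = amortized_annual_payment(principal, 0.09, 10)
total_paid = payment * 10
interest = total_paid - principal
# Interest comes to roughly a third of the total paid, broadly in
# line with the report's finding that interest fees approached
# 40 percent of total tenant improvement payments.
print(f"payment {payment:,.0f}, interest share {interest / total_paid:.0%}")
```

At these rates, a tenant that could fund improvements upfront, for example through a loan from the Federal Buildings Fund, would avoid a substantial interest premium paid to private owners.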
Inaccurate estimates complicate tenant agencies' planning, but tenant agencies often have to accept increases in GSA's cost estimates because some lack authority to independently lease space. GSA officials said that the lack of competition for GSA leases and changes to tenant agencies' space needs during the leasing process contribute to cost growth. Conversely, GAO found GSA's initial cost estimates for 4 high-value leases to be more accurate than those for standard leases. High value leases, which represent only 4 percent of leases but more than 40 percent of GSA's leasing costs, are subject to congressional authorization, which may help control cost growth. GSA should (1) enhance competition by encouraging tenant agencies to modify their geographic and building requirements; (2) explore seeking authority to use Federal Building Fund balances to reduce interest fees; and (3) give tenants the option to reduce fees by choosing non-cancelable occupancy agreements. GSA agreed to increase competition and determine if it can use Fund balances to pay tenant improvement costs but disagreed with allowing tenant agencies to choose non-cancelable occupancy agreements. GAO believes GSA should provide this option as a potential cost-saving measure.
The nation’s transportation system is a vast, interconnected network of diverse modes. Key modes of transportation include aviation, freight rail, highway, maritime, transit, and pipeline. The nation’s public transit system includes multiple-occupancy vehicle services designed to provide regular and continuing general or special transportation to the public, such as transit buses, light rail, commuter rail, subways, and waterborne passenger ferries. According to APTA, buses are the most widely used form of transit, providing almost two-thirds of all passenger trips. Light rail systems are typically characterized by lightweight passenger rail cars that operate on track that is not separated from vehicular traffic. Commuter rail systems typically operate on railroad tracks and provide regional service (e.g., between a city and adjacent suburbs). Subway systems, like the Metropolitan Transportation Authority’s New York City Transit, typically operate on fixed rail lines within a metropolitan area and have the capacity for a heavy volume of traffic. Waterborne passenger ferries provide a link across many of the nation’s waterways and, in some cases, present drivers with an alternative travel option. Public transit systems in the United States are typically owned and operated by public sector entities, such as state and regional transportation authorities. In addition, while some transit agencies rely on their local police department to secure their systems, others, such as the Bay Area Rapid Transit system in San Francisco, have established their own dedicated police department. Mass transit and passenger rail systems carry a high number of passengers every day and are open and fully accessible. Multiple stops and transfers lead to high passenger turnover, which is difficult to monitor effectively, and a terrorist attack on public transit systems could result in a large number of casualties. While there have been no successful terrorist attacks against U.S. 
public transit systems to date, terrorist attacks on public transit systems around the world, such as the March 2010 subway bombings in Moscow, Russia, and the recent plot to detonate explosives on the New York City subway system, illustrate the potential threat to public transit systems. Securing the nation’s public transit systems is a shared responsibility requiring coordinated action on the part of federal, state, and local governments; the private sector; and passengers who ride these systems. A component of this shared responsibility is ensuring that those within the private and public sector have access to quality security-related information to enhance prevention and protection efforts. DHS is the lead department involved in securing the nation’s homeland. As required by the Homeland Security Act of 2002, the department is responsible for coordinating homeland security efforts across all levels of government and throughout the nation, including with federal, state, tribal, local, and private sector homeland security stakeholders. The Aviation and Transportation Security Act established TSA as the federal agency with primary responsibility for securing the nation’s transportation systems. As part of this responsibility, TSA serves as the lead DHS component responsible for assessing intelligence and other information to identify individuals who pose a threat specifically to transportation security and to coordinate countermeasures with other federal agencies to address such threats. TSA is also charged with serving as the sector-specific agency for the transportation community. Within TSA, several offices, including the Office of Transportation Sector Network Management and the Office of Intelligence, play a role in sharing security-related information with transportation stakeholders. 
In addition to TSA, a number of other entities are responsible for sharing security-related information with internal and external stakeholders, including public transit agencies. Table 1 below provides details on roles and responsibilities of some of the various entities involved in sharing security-related information with public transit agencies. According to APTA and TSA officials, the PT-ISAC and the public transit subportal on DHS’s HSIN (HSIN-PT) were designed to serve as the primary mechanisms for sharing security-related information with public transit agencies. The PT-ISAC, which is implemented by APTA under a cooperative agreement with FTA, was designed to serve as the one-stop shop for public transit agencies seeking to obtain security-related information. The PT-ISAC collects, analyzes, and distributes security and threat information from the federal government and open sources on a 24/7 basis. It provides public transit agencies with unclassified and open-source documents obtained from numerous sources, including DOT, DHS, and DOJ. According to PT-ISAC officials, this mechanism disseminates this information through daily E-mails with attachments summarizing and analyzing recent security and cybersecurity information, news, threats, and vulnerabilities within the transportation sector. In addition, the PT-ISAC has a searchable library of government and private security documents, and PT-ISAC analysts hold top secret security clearances. HSIN-PT is also focused on providing security-related information pertaining to the public transit industry. According to DHS officials, HSIN was designed to serve as the department’s primary information-sharing mechanism for the larger homeland security community engaged in preventing, protecting from, responding to, and recovering from all threats, hazards, and incidents under DHS jurisdiction. 
HSIN comprises a network of communities, referred to as communities of interest, such as Intelligence and Analysis, Law Enforcement, Emergency Management, and Critical Sectors (CS). Within HSIN-CS, each of the 18 critical sectors maintains its own site. Under the transportation sector, the public transit mode maintains its own subportal on HSIN. According to TSA officials, HSIN-PT is maintained and populated by mass transit and passenger rail private and government stakeholders. HSIN, including its public transit subportal, is accessible via the Internet, but users must first be vetted against established criteria to obtain a user name and password from DHS to access the network and retrieve information. As an additional feature, HSIN users may elect to receive E-mail alerts that include notices of ongoing events or direct the user to a particular location within HSIN to obtain additional information. While the PT-ISAC and HSIN-PT are focused on providing security-related information to public transit agencies, the agencies we surveyed did not rely solely on these two mechanisms for their information needs. Figure 1 below illustrates the 12 key information-sharing mechanisms, identified by the agencies we surveyed, that disseminate security-related information to public transit agencies. These mechanisms were cited as sources of security-related information by more than 40 percent of the public transit agencies we surveyed. The information-sharing mechanisms described in figure 1 vary by intended users of the mechanism, the type and source of information offered, and how the information is distributed. Table 2 provides additional details on the 12 information-sharing mechanisms public transit agencies cited most frequently as sources for security-related information.
Although all of these mechanisms are used by some segment of the public transit agencies we surveyed to obtain security-related information, access to the information disseminated through the mechanisms illustrated in table 2 may vary by, among other factors, whether the transit agencies have a dedicated police department, the size of the transit agency, and the accessibility of the information. For example, some public transit agencies with a dedicated police department receive security-related information through their law enforcement representative on the local Joint Terrorism Task Force (JTTF). According to FBI officials, public transit agencies that do not have a dedicated police department are less likely to receive information from the JTTF. In addition, the Transit Security and Safety Roundtables are specifically tailored for the nation’s largest mass transit and passenger rail agencies, typically those ranked within the top 50 or 60 by ridership. Smaller transit agencies are less likely to receive information disseminated through this mechanism since they are typically not invited to participate in these roundtables. Also, of the mechanisms identified by the public transit agencies we interviewed and surveyed, all but one send information directly to transit agencies instead of requiring users to log on to a system to retrieve information (“push” vs. “pull”). In addition to the information-sharing mechanisms identified in table 2, TSA-OI implemented its TS-ISAC in March 2010 as another means for sharing security-related information with the transportation industry, including public transit agencies. Specifically, TSA’s vision for the TS-ISAC is to serve as the one-stop shop to obtain TSA-OI reports and documentation, such as SBU intelligence products and other documents from other transportation security partners and stakeholders. The TS-ISAC aims to enhance collaboration between operators, law enforcement personnel, and security directors from all transportation modes.
Similar to HSIN-PT, the TS-ISAC is a subportal of HSIN-CS, and therefore users must have a HSIN password to access it. Once access is obtained, TS-ISAC users can set up alerts to be notified when a new document has been posted to the site. Our survey results indicate that public transit agencies’ satisfaction with the security-related information they received varied with the type of transportation service provided and whether the agency was large or midsized. As highlighted in table 3 below, three-fourths of public transit agencies that responded to this question in our survey (57 of 76) were generally satisfied with the security-related information they received, while less than one-sixth (11 of 76) were generally dissatisfied. The agencies that provide heavy rail, light rail, or commuter rail service (rail agencies) were generally more satisfied with the information they received than the agencies that provide bus or ferry service, but not rail service (non-rail agencies). Specifically, most rail agencies (30 of 36) were generally satisfied with the security-related information they received, as opposed to approximately two-thirds (27 of 40) of non-rail agencies. In addition, the larger agencies we surveyed were generally more satisfied with security-related information sharing than the midsized agencies. Specifically, nearly all of the large agencies that responded to the survey (14 of 15) were generally satisfied with the security-related information they received, and nearly half (7 of 15) were “very satisfied.” By contrast, 43 of 61 midsized agencies were generally satisfied with the information they received, and less than one-sixth (10 of 61) were “very satisfied.” Table 3 illustrates public transit agencies’ overall satisfaction with the security-related information they received.
The agencies we surveyed reported using several different mechanisms to receive security-related information, and in general they were satisfied with the information they received through these mechanisms. Of the mechanisms included in the survey, 12 were used by or accessible to at least 40 percent of the agencies that responded to the survey. The two mechanisms most often cited were E-mail alerts from FTA officials (65 of 76) and E-mail alerts from TSA officials (56 of 76); overall general satisfaction with these two mechanisms was 86 percent and 74 percent, respectively. Transit Security and Safety Roundtables were the highest-rated mechanism for overall general satisfaction, with 33 of 36 agencies generally satisfied. With respect to information relevance, validity, and timeliness—three of the six dimensions of quality we included in the survey—regional emergency operations centers received the highest general satisfaction ratings. For actionable information, respondents rated the information they received from other public transportation systems the highest for general satisfaction (28 of 33). Among the 12 most frequently cited mechanisms, public transit agencies were the least satisfied with HSIN, both in terms of overall general satisfaction (19 of 33) and for each of the six dimensions of quality. Public transit agencies in our survey viewed the PT-ISAC more favorably than HSIN; approximately three-fourths (37 of 49) of PT-ISAC users indicated they were generally satisfied with the security-related information they received from this mechanism. See appendix III for additional data on public transit agencies’ satisfaction with individual information-sharing mechanisms. Public transit agencies also expressed their views on the “cross-sector” information they receive.
Most agencies that responded to our survey indicated that receiving cross-sector information is important or very important (63 of 78), and this view was shared by both rail and non-rail agencies. However, the two groups differed in how they characterized the amount of cross-sector information they received. Specifically, approximately half of responding rail agencies indicated that they received “about the right amount” of cross-sector information (18 of 37). The remaining rail agencies either wanted to receive additional cross-sector information (7 of 37) or felt that they already received too much (10 of 37). Conversely, about half of non-rail agencies (22 of 41) reported receiving “too little” or “far too little” cross-sector information. Rail and non-rail agencies also differed with respect to their satisfaction with cross-sector information. Approximately two-thirds of rail agencies that responded to this question (24 of 37) were generally satisfied with cross-sector information, whereas less than half of non-rail agencies (16 of 41) were generally satisfied. See table 4 for public transit agencies’ views on cross-sector security information sharing. According to TSA’s 2007 Transportation Systems Sector-Specific Plan Mass Transit Modal Annex, a streamlined and effective system to share mass transit and passenger rail information is needed to facilitate information sharing among the federal government and public and private stakeholders. Additionally, in September 2009, we reported that multiple information systems can create redundancies that make it difficult for end users to discern what is relevant and can overwhelm users with duplicative information from multiple sources. Public transit agencies currently receive similar security-related information from a variety of sources.
In addition to identifying the 12 key mechanisms most frequently used by public transit agencies to obtain security-related information, our survey also identified that nearly 80 percent of respondents (63 of 80) used 5 mechanisms or more to receive security information. Further, through interviews with public transit agencies of various sizes around the country, we identified at least 21 mechanisms through which these agencies receive security-related information. Moreover, the Mass Transit SCC/Transit, Commuter, and Long-Distance Rail Government Coordinating Council (GCC) joint Information Sharing Working Group (SCC/GCC Information Sharing Working Group)—which is cochaired by TSA and comprised of federal and industry stakeholders and was formed to improve information sharing with public transit agencies—compiled a list that includes 59 different information products distributed to public transit agencies by 17 different sources. We identified the potential for overlap between three mechanisms that are each designed to communicate similar unclassified and SBU security-related information to public transit agencies: the PT-ISAC, the HSIN-PT subportal, and the newly formed TS-ISAC. According to APTA, the PT-ISAC is intended to be a one-stop shop for public transit agencies’ information needs. However, according to DHS, the HSIN platform is intended to serve as the agency’s primary mechanism for sharing unclassified and SBU information with homeland security stakeholders, and TSA officials stated that the agency intends for the HSIN-PT subportal to be the primary mechanism for sharing such information with public transit agencies. Moreover, the TS-ISAC—which is hosted on HSIN-CS and is intended to serve as a collaborative information-sharing platform for the public transit and other transportation modes—includes unclassified and SBU transportation-related information products produced by TSA-OI.
According to TSA officials, the TS-ISAC, which services the larger transportation community, is not intended to compete with or replace HSIN-PT or the PT-ISAC, but in the future it may include a separate Web page that is specific to public transit. FTA, TSA, APTA, and public transit agency officials we interviewed expressed the desire to streamline information sharing to reduce the volume of overlapping information public transit agencies receive. For example, the then-Acting Manager of TSA’s Mass Transit Division stated that the current number of sources available to public transit agencies to receive security-related information is “overwhelming.” Additionally, officials from 16 of 27 agencies we interviewed also suggested that information sharing could be improved by reducing redundancies and consolidating existing mechanisms. Our survey of public transit agencies also indicated a desire for a more streamlined approach to information sharing. In an open-ended question asking how information sharing could be improved, 24 of 80 agencies provided comments in favor of consolidating existing information-sharing mechanisms. For example, according to one respondent who favored streamlining the existing mechanisms, “there are so many purported analysis centers pushing out redundant information that an inordinate amount of my time is spent filtering these many reports to find the high-value nuggets.” Our interviews and survey data are consistent with the Administration’s March 2010 Surface Transportation Security Priority Assessment, which recommended, among other things, that TSA implement an approach for sharing transportation security information that provides all relevant threat information and improves the effectiveness of information flow. Federal and industry stakeholders have efforts under way intended to improve the efficiency of information sharing with public transit agencies and reduce the volume of overlapping information public transit agencies receive.
Specifically, TSA, FTA, APTA, and other government and private sector stakeholders are participating in the SCC/GCC Information Sharing Working Group, which is reviewing how the PT-ISAC, the HSIN-PT subportal, the TS-ISAC, and other related information-sharing mechanisms (including direct E-mails from FTA and TSA officials) might be streamlined or consolidated to better serve the public transit industry. This working group is considering, among other things, whether the PT-ISAC could produce a daily (or twice-daily) 2- to 3-page unclassified/For Official Use Only (FOUO) information product using open-source information as well as intelligence products from TSA, DHS, and other entities. This would mark a shift in the PT-ISAC’s activities, as it would replace a longer information product (10 to 15 pages) the PT-ISAC prepares using primarily open-source information. Working group participants are still debating how this new information product would be disseminated to the public transit industry (e.g., through direct E-mails to public transit agencies, through HSIN-PT, or both), and whether products could be archived on HSIN-PT or another system to facilitate later viewing. In addition, the working group is considering ways to scale back the number of direct E-mails public transit agencies receive, while still maintaining the capability to disseminate information in this manner when necessary. Participants in this working group have not yet agreed on a path forward to improve information sharing with public transit agencies. As of July 2010, TSA officials stated that the working group had not yet (1) drafted options for improving information sharing with public transit agencies, (2) documented the group’s current working proposal, or (3) established a time frame for completing either of these activities. Additionally, the working group has not yet determined how it will incorporate the TS-ISAC into its proposed options.
While TSA, through the working group, is assessing, among other things, the extent to which information-sharing mechanisms can be streamlined, there are no time frames established for completing these efforts. Developing such time frames to guide the working group’s activities—including its assessment of opportunities to streamline existing information-sharing mechanisms that target similar user groups with similar information—could assist TSA in completing this important effort. Standards for Internal Control in the Federal Government provide that internal controls should be designed to assure that ongoing monitoring occurs in the course of normal operations. The cooperative agreement between FTA and APTA that provides funding for the PT-ISAC specifies that the ISAC perform several functions related to the HSIN-PT subportal. For example, the agreement states that the PT-ISAC is to control access to the HSIN-PT subportal, manage the information that is available on the subportal, and take steps to enhance its user-friendliness. As specified in the cooperative agreement, TSA and FTA monitor the PT-ISAC’s expenditures and activities through quarterly financial and operational reports to help ensure the PT-ISAC fulfills these tasks. However, while TSA and FTA oversee PT-ISAC expenditures, they are not currently taking steps to ensure that the PT-ISAC performs all of the activities that are specified under the cooperative agreement. For example, the PT-ISAC does not post its analytical products (or other security-related information) to the HSIN-PT subportal, nor has it organized and archived HSIN-PT content to facilitate better access to information, as specified by the agreement. As a result, HSIN-PT is not regularly updated with security-related information, including PT-ISAC analytical products, which could be beneficial to public transit agencies.
TSA, FTA, APTA, and PT-ISAC officials agree that the PT-ISAC is not performing the HSIN-related functions specified in the FTA/APTA cooperative agreement. These officials told us that through the SCC/GCC Information Sharing Working Group, they are reviewing the specific roles and responsibilities of the PT-ISAC—including activities related to the HSIN-PT subportal. However, regardless of whether the working group redefines the PT-ISAC’s roles and responsibilities, it is important to ensure that the activities specified in the cooperative agreement are carried out. Taking steps to ensure the PT-ISAC fulfills its responsibilities and completes agreed-upon tasks could help assure TSA and FTA that this mechanism meets the security information needs of public transit agencies. In March 2004, we recommended that agencies take actions to better target federal outreach efforts, and internal control standards call for management to ensure adequate means of communicating with external stakeholders who may have a significant impact on agency goals. Security officials at the public transit agencies we surveyed were not always aware of the existence of the PT-ISAC and HSIN, particularly non-rail agencies, midsized agencies, and agencies that do not have their own dedicated police department. For example, of the 80 agencies we surveyed, 23 indicated they did not receive security information from the PT-ISAC and 8 did not know whether they used this mechanism. Moreover, 15 of the 23 agencies that did not receive information from the PT-ISAC had never heard of it (see table 5). According to FTA officials, the PT-ISAC is meant to serve as a valuable resource for midsized and smaller public transit agencies. However, our survey results indicate that fewer non-rail and midsized agencies received information from the PT-ISAC than rail and large agencies (19 of 41 non-rail and 35 of 65 midsized agencies, as opposed to 30 of 39 rail agencies and 14 of 15 large agencies, respectively).
Moreover, nearly all of the agencies we surveyed that had not heard of the PT-ISAC were non-rail agencies (14 of 15), midsized agencies (15 of 15), or agencies without their own dedicated police department (14 of 15). APTA conducts some PT-ISAC outreach through E-mails and newsletters to its members and other stakeholders, and FTA officials stated that they promote the PT-ISAC at Transit Security and Safety Roundtables. Both APTA and FTA officials agreed, however, on the need for additional outreach to public transit agencies to increase awareness and use of the PT-ISAC. TSA did not provide information on any existing PT-ISAC outreach efforts, but officials stated that the agency’s future actions with respect to the PT-ISAC, including outreach activities, will depend on the proposed options that arise from the SCC/GCC Information Sharing Working Group. However, as noted above, there are no time frames for this working group to draft or finalize its proposals for improving information sharing, including who will be responsible for conducting outreach activities for the PT-ISAC or what these activities will entail. Conducting targeted outreach to agencies that are not currently using the PT-ISAC—particularly non-rail agencies, midsized agencies, and agencies that do not have their own dedicated police department—could help to increase awareness and use of this mechanism. TSA and APTA officials also stated that not all public transit agencies are aware of HSIN and those that are may not view the system as a valuable resource. The results of our survey are consistent with this view and illustrate that public transit agencies’ awareness of HSIN could be increased. For example, less than half of public transit agencies (34 of 77) reported that they had log-in access to HSIN and had not lost or forgotten their log-in information (see table 6). 
As with PT-ISAC usage, a greater proportion of large agencies, rail agencies, and agencies that maintain their own dedicated police departments indicated they had log-in access to HSIN and had not lost or forgotten their log-in information (9 of 15 large agencies, 20 of 39 rail agencies, and 17 of 29 agencies with dedicated police departments, as opposed to 25 of 65 midsized agencies, 14 of 41 non-rail agencies, and 17 of 51 agencies without dedicated police departments, respectively). Moreover, our survey also identified that, of the 19 agencies that do not have HSIN access, 12 had never heard of the mechanism, and an additional 11 agencies did not know whether they had access to HSIN. Of the 12 agencies that had never heard of HSIN, nearly all were non-rail agencies (10 of 12), midsized agencies (12 of 12), or agencies without their own dedicated police department (12 of 12). Multiple entities have a role in conducting outreach to public transit agencies about HSIN. DHS’s Office of Operations, Coordination, and Planning is generally responsible for conducting HSIN outreach, but DHS officials from this office told us that outreach efforts for HSIN-CS, including the HSIN-PT subportal, are under the purview of DHS IP. However, DHS IP officials told us that they are deferring to APTA and TSA (the sector coordinator and sector-specific agency for mass transit, respectively), as described in the NIPP, to conduct outreach to public transit agencies on the HSIN-PT subportal. TSA has conducted some outreach to the public transit industry about HSIN by including HSIN reminders when it distributes security information via E-mail to public transit agencies. 
However, as table 6 illustrates, past outreach efforts have not resulted in widespread HSIN awareness and use among public transit agencies that we surveyed (particularly midsized agencies, non-rail agencies, and agencies without a dedicated police department), and our survey results suggest that access to HSIN remains a concern. TSA officials stated that the agency recognizes the need for additional outreach to increase public transit agencies’ awareness and use of the HSIN-PT subportal and added that future outreach efforts will depend on the proposed options that arise from the SCC/GCC Information Sharing Working Group. However, there are no time frames for this working group to draft or finalize its proposals for improving information sharing. Conducting targeted outreach to agencies that are not currently using HSIN—particularly non-rail agencies, midsized agencies, and agencies that do not have their own dedicated police department—could help to increase awareness and use of this mechanism. Regarding the newly formed TS-ISAC, TSA has conducted initial outreach to increase public transit agencies’ awareness. For example, TSA distributed a TS-ISAC marketing package via E-mail to transportation stakeholders, and TSA officials stated that the agency is conducting outreach to other DHS components, state and local stakeholders, and other ISACs (in addition to the PT-ISAC). According to TSA data from April 2010, officials from 46 public transit agencies had been granted access to the public transit Web page of the TS-ISAC within the first 4 weeks of its operation. However, we did not collect data from public transit agencies on their awareness or use of the TS-ISAC because it was not implemented until March 2010, after we developed our survey. As a result, we could not determine the extent to which outreach efforts have increased awareness and use of the TS-ISAC in the public transit industry.
Standards for Internal Control in the Federal Government call for agencies to ensure adequate means of communicating with external stakeholders that may have a significant impact on agency goals, and effective information technology management is critical to achieving useful, reliable, and continuous communication of information. However, concerns among public transit agencies about HSIN’s accessibility may reduce its value as a source of security-related information. Industry officials characterized HSIN as a “pull” system that requires users to log in and extract what is relevant to their agency. Security officials at 11 of 27 public transit agencies we interviewed told us they prefer security information to be “pushed” out to them (e.g., through E-mails, phone calls) instead of having to log into a system to retrieve it themselves. APTA officials stated that public transit security personnel do not have time to log into a “pull” system, such as HSIN, every day and sift through excess information to extract what is relevant to their agency. In addition, when a HSIN password expires (which occurs after 90 days for security reasons) users must call the HSIN help desk to obtain a new one. However, the contact information for the HSIN help desk is not located on the main HSIN log-in page, so users may not know how to get help if they experience log-in challenges. Of the 27 agencies we interviewed, 8 indicated they had experienced problems accessing HSIN. In June 2010, DHS implemented a new agency policy to identify HSIN users that have not accessed the system in 180 days and notify them via E-mail every 3 months instructing them to contact the HSIN help desk to obtain a new password. DHS officials also told us that the phone number for the HSIN help desk would be added to the HSIN log-in page, but the agency had not done so as of August 2010. 
In addition to accessibility concerns, certain aspects of HSIN are not user-friendly, and the security-related information available on the HSIN-PT subportal is not always valuable to public transit agencies. Of the 11 agencies we interviewed that had access to HSIN and used it to receive security-related information, 5 reported problems with using the system once they logged in. These problems included configuring E-mail alerts to notify them when information is discovered or changed in a particular area of HSIN (e.g., the HSIN-PT subportal). We experienced similar problems using these E-mail alerts. After setting up alerts to notify us when documents are discovered or changed on the HSIN-PT subportal, we received multiple notifications on a near-daily basis with links to outdated documents, such as job announcements last modified in 2007, a threat advisory for the New York City subway system last modified in 2006, and a map of power outages caused by Hurricane Wilma in 2005. Further, we found that security-related information on HSIN that could be useful to public transit agencies was not always posted to the HSIN-PT subportal. For example, in the days following the Moscow subway bombings in March 2010, certain documents pertaining to the attack were available on the HSIN-CS portal, but did not appear on HSIN-PT, despite their direct relevance to public transit agency users. The E-mail alerts we had set up for HSIN-PT did not notify us of any of this information, which included a document describing heightened security measures a large U.S. public transit agency took in response to the Moscow attack. This information could have been of interest to other public transit agencies, but HSIN-PT users would not have known about it unless they logged into the system without an E-mail prompt, navigated to the HSIN-CS portal, and found the information themselves.
Based on our survey results—which indicate that only 3 of 77 agencies use HSIN daily—agencies may not have known that information pertaining to the Moscow bombings was available to them on HSIN. DHS and TSA agree that the HSIN-PT subportal is not widely used by the public transit industry and that improvements are needed. One such improvement is related to DHS’s efforts to develop a replacement system for the HSIN platform, known as HSIN Next Generation. This new system, which DHS began to develop in 2008, is intended to provide increased security and access to SBU information for public transit agencies and other user communities, including law enforcement, intelligence, immigration, and emergency and disaster management. According to DHS officials, the agency intends to move the subportals on HSIN-CS, including HSIN-PT, to the new HSIN Next Generation platform during the last quarter of calendar year 2010. Taking steps to ensure public transit agencies can access and readily use HSIN—and ensuring the HSIN-PT subportal contains security-related information that is of value to these agencies—could help DHS improve HSIN’s capacity to meet public transit agencies’ security-related information needs. DHS and TSA have established goals and output-oriented performance measures for their information-sharing activities to help gauge the effectiveness of their overall information-sharing efforts with security stakeholders. However, they have not developed performance goals and outcome-oriented measures to gauge the effectiveness of their information-sharing efforts specific to public transit agencies. Specifically, DHS and TSA have not developed such goals and measures for HSIN-PT and the PT-ISAC—mechanisms designed to serve as the primary information sources for the public transit agencies—or the recently established TS-ISAC. As a result, DHS and TSA may not be fully informed of the effectiveness of their information-sharing activities for the public transit industry. 
TSA officials recognize the importance of establishing specific goals and developing outcome-oriented measures, but they are in the beginning stages of doing so and could not provide time frames for when they plan to complete these efforts. Table 7, below, details DHS’s current goals and performance measures related to information sharing. The performance goals and measures established by DHS and TSA are primarily focused on information-sharing efforts with homeland security stakeholders and the transportation community as a whole, and are not specific to their efforts to share security-related information with the public transit industry. TSA has developed some output-oriented performance measures specifically for assessing its efforts to share security-related information with public transit agencies. According to TSA officials, the agency currently tracks: (1) the number of meetings held between the GCC and the Mass Transit SCC and the number of Transit Security and Safety Roundtables; (2) the number of teleconferences it conducts with the peer advisory group and the number of intelligence/information products it releases; and (3) the usage of the public transit subportal on HSIN as an indicator of stakeholders’ interest in the information provided. TSA-OI is also collecting output data to measure the performance of the TS-ISAC, such as the number of users, the length of time each user is logged on to the site, and the number of times users access information from the Web site. We have previously reported that decision makers use performance measurement information, including output measures and information on program operations, to help identify problems in individual programs, identify causes of the problems, and modify services or processes to address problems. However, leading management practices emphasize that successful performance measurement focuses on assessing the results of individual programs and activities.
We have also previously reported that without effective performance measurement, especially data on program outcomes, decision makers may have insufficient information to evaluate the cost-effectiveness of their activities. While output measures, such as those developed by TSA, are useful because they indicate the quantity of direct services a program delivers, they do not reflect the overall effectiveness of the agency’s activities. We recognize and have previously reported on the challenge of assessing the effectiveness of security-related activities such as information sharing and of developing outcome-oriented measures, but have called on agencies to take steps toward establishing such measures to hold them accountable for the investments they make. Developing such measures also provides agencies with valuable information for evaluating the effectiveness of their programs and the extent to which they are meeting their goals. However, TSA has not developed specific performance goals or outcome-oriented measures for the PT-ISAC or HSIN-PT, which were both established as primary information-sharing mechanisms for public transit agencies. According to TSA and APTA officials, they plan to develop specific goals and measures for the PT-ISAC through the GCC/SCC Information Sharing Working Group. However, the working group is still finalizing its options for enhancing information-sharing efforts with public transit agencies, including assessing opportunities to streamline existing information-sharing mechanisms, and TSA officials were unable to provide us with time frames for the completion of these efforts. In regard to HSIN-PT, TSA has developed an output-oriented performance measure that tracks the number of users of this mechanism; however, this measure provides limited information on which the agency can assess the results and progress of this information-sharing mechanism. 
TSA-OI, however, has not developed specific goals or outcome-oriented performance measures for HSIN-PT. Moreover, TSA-OI officials reported that for the newly established TS-ISAC, they are focusing on providing security-related products to 100 percent of homeland security stakeholders, including public transit agencies. However, TSA has not developed goals or related performance measures for this mechanism and could not provide time frames for doing so. Once the SCC/GCC Information Sharing Working Group has developed options for improving information sharing with public transit agencies, establishing time frames for developing goals and related, outcome-oriented measures for the PT-ISAC, HSIN-PT, and TS-ISAC could assist TSA in obtaining more meaningful information from which to gauge the effectiveness of these information-sharing mechanisms. DHS and TSA have taken some steps to gather feedback on public transit agencies’ satisfaction with the security-related information they receive. For example, DHS and TSA developed forms to periodically gather feedback on security-related products from their customers, including public transit agencies. TSA officials also reported that they informally gather feedback during the Transit Security and Safety Roundtables. However, a systematic process for obtaining feedback on the usefulness of the PT-ISAC and HSIN-PT does not currently exist. We have previously reported that agencies with a systematic process for gathering feedback use surveys and other methods to identify the importance or depth of customers’ issues in a single, centralized framework, and integrate the feedback information obtained in a standard and consistent manner. In December 2009, we reported that additional DHS actions to obtain feedback on the utility and quality of information shared could strengthen the department’s efforts in this area. 
Research of best practices for customer satisfaction suggests that multiple approaches to customer feedback, such as focus groups and complaint programs that provide qualitative and quantitative data, and the integration of feedback data, are needed to effectively listen to and understand customers’ needs and to take appropriate action to meet those needs. In March 2010, to better understand customer information needs, DHS I&A began attaching a survey to each of the FOUO intelligence products it disseminates to its customers, including state and local partners. Public transit agencies that receive I&A’s FOUO intelligence products will therefore have an opportunity to provide feedback on the information provided. I&A officials stated that they plan to use these results to better inform them of product usefulness and the security information needs of their customers. In addition, TSA-OI posted a feedback form on the TS-ISAC to gather the views of users, including public transit agencies, on TSA-OI products. However, TSA-OI’s marketing materials on the TS-ISAC did not reference this feedback survey, nor has the agency informed users of this survey’s existence through any other method. In addition, according to TSA-OI officials, this survey was posted shortly after the TS-ISAC was implemented in March 2010, but as of May 27, 2010, TSA-OI had not received any feedback through this survey. Due to the recent timing of these survey efforts, it may be too early to assess the insights that will be provided through this mechanism. Although TSA officials have established a process to gather the views of users, including public transit agencies, on TSA-OI products, TSA has not established a systematic process to obtain public transit agencies’ feedback on information shared through the PT-ISAC and HSIN-PT—the primary mechanisms designed to share security-related information with public transit agencies. 
Also, as of July 2010, TSA officials stated that they were uncertain whether they will continue to use the TS-ISAC feedback form as a mechanism to gather public transit agency feedback. They also stated that the agency does not have a systematic process in place to request, collect, and analyze feedback in order to gauge public transit agencies’ overall satisfaction with its information-sharing activities, and that such a process is needed. TSA officials could consider using various survey tools and other methods to assist them in collecting public transit agency feedback, which could better inform them of the effectiveness of their information-sharing efforts. For example, through our survey, we were able to assess the extent to which these public transit agencies used and were satisfied with a variety of information-sharing mechanisms, including TSA mechanisms. DHS’s and TSA’s efforts to share security-related information with public transit agencies could be enhanced by developing a systematic process for gathering feedback on these agencies’ satisfaction with the information they receive. The recent bombings on the Moscow subway and planned attempts to detonate explosives in the New York City subway system have highlighted the continued threat to public transit systems in foreign countries and in the United States. While the SCC/GCC Information Sharing Working Group’s efforts to enhance information sharing with public transit agencies reflect the joint stakeholder commitment to this area, opportunities for strengthening information sharing exist. Until TSA establishes time frames for the SCC/GCC Information Sharing Working Group to complete its efforts, including assessing opportunities to streamline existing information-sharing mechanisms and conducting targeted outreach efforts to increase awareness of the PT-ISAC and HSIN, the agency is limited in its ability to take further action to strengthen information sharing. 
In addition, without taking steps to ensure that the PT-ISAC fulfills its responsibilities and completes agreed-upon tasks, TSA and FTA cannot be assured that this mechanism meets the security information needs of public transit agencies. Further, while DHS and TSA are taking steps to improve information sharing with public transit agencies, this effort will not be complete until the accessibility and user-friendliness of HSIN are addressed. Moreover, the HSIN-PT subportal will likely continue to be underutilized until DHS takes steps to ensure that this mechanism contains security-related information that is of value to public transit agencies. Once the SCC/GCC Information Sharing Working Group develops options for improving information sharing with public transit agencies, it will be important for DHS and TSA to continue with other efforts to strengthen this area of information sharing. Specifically, until DHS establishes time frames for developing goals and related outcome-oriented performance measures for the PT-ISAC, HSIN-PT, and TS-ISAC, the department will be limited in its ability to gauge the effectiveness of its information-sharing efforts with the public transit industry. Finally, while we are encouraged by the department’s efforts to gather feedback on public transit agencies’ satisfaction with the security-related information they receive, a systematic process for obtaining such feedback on the PT-ISAC and HSIN-PT is lacking. Such a process could help DHS and TSA assess the effectiveness of their efforts to share security-related information with public transit agencies. 
To help strengthen information sharing with public transit agencies, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Transportation Security Administration to take the following action in coordination with FTA and public transit agencies: Establish time frames for the SCC/GCC Information Sharing Working Group to develop options for improving information sharing to public transit agencies and complete this effort, including the Working Group’s efforts to: assess opportunities to streamline existing information-sharing mechanisms that target similar user groups with similar information to reduce overlap, where appropriate; and conduct targeted outreach efforts to increase awareness of the PT-ISAC and HSIN among agencies that are not currently using or aware of these systems. To help ensure that the PT-ISAC is meeting its objectives for sharing security-related information with public transit agencies, we recommend that the Secretaries of Homeland Security and Transportation direct the Assistant Secretary of the Transportation Security Administration and Administrator of the Federal Transit Administration to take the following action: Take steps to ensure the PT-ISAC fulfills its responsibilities and completes agreed-upon tasks. To help strengthen DHS’s efforts to share security-related information with public transit agencies, we recommend that the Secretary of Homeland Security take the following three actions: Take steps to ensure that public transit agencies can access and readily utilize HSIN and that the HSIN-PT subportal contains security-related information that is of value to public transit agencies. Once the SCC/GCC Information Sharing Working Group has developed options for improving information sharing with public transit agencies, establish time frames for developing goals and related outcome-oriented performance measures specific to the PT-ISAC, HSIN-PT, and TS-ISAC. 
Develop a process for systematically gathering feedback on public transit agencies’ satisfaction with the PT-ISAC and HSIN-PT. We provided a draft of this report and its accompanying e-supplement (GAO-10-896SP) to DHS, DOJ, and DOT for review and comments. We received written comments from DHS on the draft report, which are summarized below and reproduced in full in appendix IV. DHS concurred with the report and recommendations and indicated that it is taking steps to address the recommendations. DHS also provided technical comments that we incorporated where appropriate. In an e-mail received on September 7, 2010, the FBI liaison stated that the Bureau had no comments on the draft report. DOT did not provide comments on the findings and recommendations but did provide technical comments on the draft report, which we have incorporated where appropriate. DHS, DOJ, and DOT did not provide comments on the e-supplement. In commenting on the draft report, DHS described the efforts the department has underway or planned to address our recommendations. These efforts are intended to improve information sharing with public transit agencies. However, although the actions DHS reported are important first steps, additional efforts are needed to help ensure that our recommendations are fully implemented, as discussed below. With regard to our first recommendation that TSA coordinate with FTA and public transit agencies to establish time frames for the SCC/GCC Information Sharing Working Group for completing efforts to develop options for improving information sharing to public transit agencies, including assessing opportunities for streamlining existing mechanisms and conducting targeted outreach, DHS stated that TSA is continuing to work with members of the working group to identify options on how to streamline the flow of information and described one such option. 
According to DHS, the working group has identified at least one product option for streamlining information sharing that would match the needs of stakeholders. This product would be “pushed” out to stakeholders and also be posted on appropriate Web sites. DHS also stated that TSA is taking steps to improve targeted outreach through collaboration of the Surface Transportation Information Sharing and Analysis Center and the PT-ISAC in the development of periodic intelligence summaries and plans to work with both ISACs, as well as with DHS, to ensure that further outreach is conducted with stakeholders. TSA’s efforts to streamline information sharing with public transit agencies and improve its outreach are important first steps toward improving the information provided to the public transit industry. In order to meet the full intent of our recommendation, TSA should establish time frames for completing these efforts. In addition, TSA did not indicate whether it has identified other options or is considering taking additional steps to streamline existing information-sharing mechanisms or how its outreach to public transit agencies will be targeted to those agencies not currently using or aware of these systems. Taking such actions would be necessary to fully address the intent of this recommendation. Regarding our second recommendation that TSA and FTA take steps to ensure the PT-ISAC fulfills its responsibilities and completes agreed-upon tasks, DHS stated that the purpose for including HSIN-PT content management and other elements currently in the cooperative agreement with APTA/PT-ISAC was to fill gaps in the information-sharing process used by the mass transit and passenger rail community. DHS also stated that TSA intends to ensure compliance with the contract elements by “phasing in PT-ISAC contributions and requirements to achieve maximum effectiveness.” TSA’s stated plan for ensuring compliance with contract elements appears to be a positive step. 
However, DHS’s response did not indicate the specific steps that will be taken to ensure that the PT-ISAC fulfills its responsibilities and completes agreed-upon tasks. Taking such action would more fully address our recommendation. With regard to our third recommendation that DHS take steps to ensure that public transit agencies can access and readily utilize HSIN and that the HSIN-PT subportal contains security-related information that is of value to public transit agencies, DHS stated that it supports changes to HSIN and the intensification of efforts to expand its use for the broader range of transit and passenger rail agencies. DHS also stated that in fiscal year 2010, the HSIN program increased its efforts to raise the awareness of HSIN through a targeted marketing strategy. DHS also stated that the HSIN program’s requirements management process and operator representation on the HSIN Mission Operators Committee governance board will ensure that public transit sector requirements are assessed, prioritized, and implemented. While DHS’s reported efforts to expand HSIN use with the public transit community are noteworthy, in order to meet the full intent of our recommendation, DHS should also take steps to ensure that public transit agencies can readily access and use HSIN, as we recommended. Additionally, DHS did not clearly identify the actions it will take to ensure that the HSIN-PT subportal contains security-related information that is of value to public transit agencies. Identifying and implementing such steps would be necessary to fully address the intent of our recommendation. With regard to our fourth recommendation that DHS establish time frames for developing goals and related outcome-oriented performance measures specific to the PT-ISAC, HSIN-PT, and TS-ISAC, DHS agreed that developing outcome-oriented measures for information sharing is important. 
Specifically, DHS stated that TSA will work with DHS, APTA, and the PT-ISAC to develop a series of goals and measures to assess the effectiveness of its information-sharing efforts. DHS added that these measures, once developed, can be expected to evolve and improve over time as systematic improvements are made. DHS plans to share the developed measures with its stakeholders to obtain their comments. In order to meet the full intent of our recommendation, DHS should establish time frames for developing such goals and measures. Concerning our fifth recommendation that DHS develop a process for systematically gathering feedback on public transit agencies’ satisfaction with the PT-ISAC and HSIN-PT, DHS stated that updates to HSIN will enable the department to efficiently capture user feedback. DHS also stated that it would need to collaborate with TSA and DOT as well as industry stakeholders to develop additional stakeholder feedback mechanisms. DHS also noted that it will continue to obtain stakeholder feedback through its survey on the TS-ISAC subportal. While the development of the customer survey on the TS-ISAC is an important step in obtaining feedback on satisfaction with this mechanism, DHS should ensure that its process for gathering feedback on public transit agencies’ satisfaction with the PT-ISAC and HSIN-PT is systematic, as we recommended. Taking such action is necessary to fully address this recommendation. We are sending copies of this report to the Secretaries of Homeland Security and Transportation, and the Attorney General. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
This report addresses the following questions: (1) What mechanisms has the federal government established or funded as primary information-sharing sources for public transit agencies? (2) To what extent are public transit agencies satisfied with federal efforts to share security-related information, and how, if at all, can these efforts be improved? (3) To what extent has the Department of Homeland Security (DHS) identified goals for sharing security-related information with public transit agencies and developed measures to gauge its progress in meeting those goals? To identify the mechanisms established or funded by the federal government to serve as primary information sources for public transit agencies, we reviewed and assessed relevant documentation, such as the Homeland Security Information Network (HSIN) Program Management Plan, and interviewed officials from DHS components including the Office of Infrastructure Protection (IP) within the National Protection and Programs Directorate (NPPD), the Office of Intelligence and Analysis (I&A), the U.S. Coast Guard, and the Transportation Security Administration (TSA), as well as officials from the Federal Transit Administration (FTA) and the Federal Bureau of Investigation (FBI) to discuss the mechanisms they use to share security-related information with public transit agencies. We also conducted site visits, or held teleconferences, with security and management officials from a nonprobability sample of 27 public transit agencies across the nation to determine which mechanisms are most routinely used by these agencies to obtain security-related information. These transit agencies were selected to generally reflect the variety of transit agencies in terms of size, location, transportation mode, and law enforcement presence and represent about 63 percent of the nation’s total public transit ridership based on information we obtained from FTA’s National Transit Database. 
Because we selected a nonprobability sample of transit agencies to interview, the information obtained cannot be generalized to the overall population of transit agencies. However, the interviews provided illustrative examples of the perspectives of various transit agencies about federal government information-sharing mechanisms and corroborated information we gathered through other means. Table 8 lists the public transit agencies we interviewed. To assess the satisfaction of public transit agencies with federal security-related information-sharing efforts and related opportunities for improvement, in March and April 2010, we surveyed 96 of the 694 U.S. public transit agencies, selected on the basis of 2008 ridership statistics, on their satisfaction with information-sharing efforts. The 96 public transit agencies surveyed represent about 91 percent of total 2008 ridership. For the purposes of this survey, we defined the six aspects of quality security-related information as (1) relevance (i.e., is the information sufficiently relevant to be of value to a public transit agency?); (2) validity (i.e., is the information accurate?); (3) timeliness (i.e., is information received in a timely manner?); (4) completeness (i.e., does the information contain all the necessary details?); (5) actionability (i.e., would the information allow a public transit agency to change its security posture, if such a change was warranted?); and (6) access/ease of use (i.e., is information available through this mechanism easy to obtain?). To develop the survey instrument, we conducted pretest interviews with four public transit agencies and obtained input from GAO experts. Out of the original population of 96 transit agencies, we received completed questionnaires from 80 respondents—a response rate of 83 percent; however, not all respondents provided answers to every question. 
The final instrument, reproduced in an e-supplement we are issuing concurrent with this report—GAO-10-896SP—displays the counts of responses received for each question. The questionnaire asked those public transit officials responsible for security operations to identify the modes of transportation they provide, the extent to which they house their own law enforcement component, the mechanisms they use to obtain security information, and their satisfaction with each of these mechanisms. While we surveyed 96 of the largest U.S. public transit agencies, and thus our data are not subject to sampling error, the practical difficulties of conducting any survey may introduce other errors in our findings. We took steps to minimize errors of measurement, nonresponse, and data processing. In addition to the questionnaire development and testing activities described above, we made multiple follow-up attempts by e-mail and telephone to reduce the level of nonresponse throughout the survey period. Finally, analysis programs and other data analyses were independently verified. To further address this question, we assessed relevant documentation, including interagency agreements between TSA and FTA, as well as marketing materials on the Transportation Security Information Sharing and Analysis Center (TS-ISAC). We also interviewed American Public Transportation Association (APTA), Public Transportation Information Sharing and Analysis Center (PT-ISAC), TSA, FBI, FTA, and DHS Operations, Coordination, and Planning Directorate officials to discuss efforts to streamline existing information-sharing mechanisms, oversee the results of the PT-ISAC, and conduct outreach on various information-sharing mechanisms. We compared these efforts to internal control standards, as well as our previous work on the need to consolidate redundant information systems and target outreach efforts. 
In addition, we interviewed select public transit agencies and included questions in our Web-based survey of public transit agencies on the various information-sharing mechanisms available to them. To assess the extent to which DHS has identified goals for sharing information with public transit agencies and developed measures to gauge its progress in meeting those goals, we reviewed DHS’s Annual Performance Report, TSA’s Transportation Security Information Sharing Plan (TSISP), and available performance data and measures for fiscal years 2007 through 2010 related to information-sharing efforts with public transit agencies and compared them to leading management practices and our previous work on program assessments. We also interviewed relevant DHS and TSA officials to obtain information on their efforts to revise and develop performance measures and goals for this area of information sharing, as well as their efforts to obtain feedback from public transit agencies on their satisfaction with the security-related information they receive. In addition, we compared TSA’s efforts to evaluate its information-sharing activities with guidance on performance measurement contained in our previous reports. We conducted this performance audit from August 2009 through September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since the terrorist attacks on September 11, 2001, the federal government has developed strategies to enhance the sharing of terrorism-related information among federal, state, local, and tribal agencies, and the private sector. 
These strategies include the following: National Strategy for Information Sharing: Issued in October 2007, this strategy identifies the federal government’s information sharing responsibilities. These responsibilities include gathering and documenting the information that state, local, and tribal agencies need to enhance their situational awareness of terrorist threats. The strategy also calls for authorities at all levels of government to work together to obtain a common understanding of the information needed to prevent, deter, and respond to terrorist attacks. Specifically, the strategy discusses the need to improve the two-way sharing of terrorism-related information on incidents, threats, consequences, and vulnerabilities, including enhancing the quantity and quality of specific, timely, and actionable information provided by the federal government to critical infrastructure sectors. DHS Information Sharing Strategy: Issued in April 2008, this strategy describes the guiding principles for DHS’s efforts to share information within the department, across the federal government, and with state, local, tribal, territorial, private sector, and international partners. Among other things, the strategy notes that DHS must take steps to ensure that the right information gets to the right people at the right time. The strategy also discusses the department’s need to institute performance measures to provide an accurate assessment of the department’s progress towards meeting its information-sharing goals. The National Infrastructure Protection Plan (NIPP): Updated in 2009, the NIPP is intended to provide the framework for a coordinated national approach to address the full range of physical, cyber, and human threats and vulnerabilities that pose risks to the nation’s critical infrastructure. 
Among other things, the NIPP names TSA as the primary federal agency responsible for coordinating critical infrastructure protection efforts within the transportation sector and emphasizes the importance and benefits of sharing security-related information with critical sector partners. Transportation Security Information Sharing Plan (TSISP): Established by TSA in July 2008 pursuant to the 9/11 Commission Act and subsequently updated in December 2009. The stated purpose of the TSISP is to establish a foundation for sharing transportation security information among all entities that have a stake in protecting the transportation system, including federal, state, local, and tribal agencies and governments, the private sector, and foreign partners. Surface Transportation Security Priority Assessment: Issued in March 2010 by the Administration’s Transborder Security Interagency Policy Committee, Surface Transportation Subcommittee. The study identified 10 issue areas to examine, obtained input from surface transportation sector stakeholders, and analyzed the responses to reach a consensus set of priorities and recommendations related to surface transportation. Among other things, the assessment included a recommendation that TSA collaborate with DHS and the Department of Transportation (DOT) to more effectively share transportation security information. The table below illustrates, for the public transit agencies we surveyed, the general satisfaction along 6 quality dimensions with the 12 most frequently cited information-sharing mechanisms. 
The quality dimensions rated for level of satisfaction were: relevance (i.e., is the information sufficiently relevant to be of value to a public transit agency?); validity (i.e., is the information accurate?); timeliness (i.e., is information received in a timely manner?); completeness (i.e., does the information contain all the necessary details?); actionability (i.e., would the information allow a public transit agency to change its security posture, if such a change was warranted?); and access/ease of use (i.e., is information available through this mechanism easy to obtain?). The numbers in parentheses below each mechanism represent the number of agencies in our survey that indicated they use this mechanism to receive security-related information. For each mechanism and quality dimension, the table indicates (1) the number of agencies that indicated they were either “very satisfied” or “somewhat satisfied” with the information they receive through the mechanism (or, in the case of “access / ease of use,” the mechanism itself); (2) the total number of agencies that provided a response to the question; and (3) the percentage of responding agencies that were generally satisfied. The mechanisms are organized in the order they were presented in the survey. In addition to the contact named above, Jessica Lucas-Judy, Assistant Director, managed this assignment. Vanessa Dillard, Jeff C. Jensen, Nancy Meyer, Octavia Parks, and Meg Ullengren made significant contributions to the work. Tracey King provided significant legal support and analysis. Stanley J. Kostyla assisted with design and methodology. Carl Ramirez and Joanna Chan assisted with the survey design, implementation, and data analysis. Christopher Currie, Lara Miklozek, and Debbie Sebastian provided assistance in report preparation. Tina Cheng and Robert Robinson developed the report graphic.
The Transportation Security Administration (TSA), in the Department of Homeland Security (DHS), is committed to sharing information with public transit agencies. The Implementing Recommendations of the 9/11 Commission Act directed GAO to report on public transit information sharing. This report describes (1) the primary mechanisms used to share security information with public transit agencies; and evaluates (2) public transit agencies' satisfaction with federal efforts to share security-related information (e.g., security threats) and opportunities to improve these efforts; and (3) the extent to which DHS has identified goals and measures for sharing information. GAO surveyed 96 of the 694 U.S. public transit agencies based on 2008 ridership and received 80 responses. The 96 public transit agencies surveyed represent about 91 percent of total 2008 ridership. GAO also reviewed documents, such as DHS's Information Sharing Strategy, and interviewed agency officials. According to the American Public Transportation Association (APTA)--which represents the public transit industry--and TSA officials, the Public Transportation Information Sharing and Analysis Center (PT-ISAC) and the public transit subportal on DHS's Homeland Security Information Network (HSIN-PT) were established as primary mechanisms for sharing security-related information with public transit agencies. The public transit agencies GAO surveyed also cited additional mechanisms for obtaining such information, including other public transit agencies. Further, in March 2010 TSA introduced the Transportation Security Information Sharing and Analysis Center (TS-ISAC), which is a subportal on HSIN focused on sharing security-related information with transportation stakeholders. Seventy-five percent of the public transit agencies GAO surveyed reported being generally satisfied with the security-related information they received; however, federal efforts to share security-related information could be improved. 
Specifically, three-fourths of public transit agencies reported being either very satisfied or somewhat satisfied with the information they received. Public transit agencies also reported that among the 12 most frequently cited mechanisms, they were the least satisfied with HSIN in terms of general satisfaction (19 of 33) and for each of six dimensions of quality--relevance, validity, timeliness, completeness, actionability, and ease of use. Twenty-four survey respondents also cited the need to streamline the information they received. GAO identified the potential for overlap between the PT-ISAC, the HSIN-PT, and the TS-ISAC, which all communicate similar unclassified and security-related information to public transit agencies. Federal and transit industry officials that GAO interviewed reported the need to streamline information sharing. Moreover, a greater proportion of survey respondents who were unaware of the PT-ISAC or HSIN were from midsize agencies, nonrail agencies, and those without their own police department. Federal and industry officials formed a working group to assess the effectiveness of information-sharing mechanisms, including developing options for streamlining these mechanisms. TSA officials stated that these options will also impact future outreach activities; however, no time frame has been established for completing this effort. Establishing such a time frame could help to ensure that this effort is completed. DHS and TSA have established goals and performance measures for some of their information-sharing activities to help gauge the effectiveness of their overall information-sharing efforts; however, they have not developed goals and outcome-oriented measures of results of activities for the mechanisms established as primary information sources for the public transit industry. TSA officials acknowledged the importance of establishing such goals and measures, but were unable to provide time frames for doing so. 
Establishing time frames for developing goals and outcome measures, once the working group effort is complete, could assist TSA in gauging the effectiveness of its efforts to share information with public transit agencies. GAO recommends that DHS, among other things, (1) establish time frames for its working group to develop options for improving information sharing, including assessing opportunities to streamline mechanisms and conducting targeted outreach; and (2) establish time frames for developing goals and outcome-oriented measures of results. DHS concurred. GAO is issuing an electronic supplement with this report--GAO-10-896SP--which provides survey results.
WMATA was created in 1967 by an interstate compact that resulted from the enactment of identical legislation by Virginia, Maryland, and the District of Columbia, with the concurrence of the U.S. Congress. Since then, WMATA has been responsible for planning, financing, constructing, and operating a comprehensive mass transit system for the Washington metropolitan area. WMATA began building its Metrorail system in 1969, acquired four regional bus systems in 1973, and began operating the first phase of Metrorail operations in 1976. In January 2001, WMATA completed the originally planned 103-mile Metrorail system that now includes 83 rail stations on 5 rail lines. WMATA operates in a complex environment, with many organizations influencing its decision-making and funding and providing oversight. WMATA is governed by a Board of Directors, which sets policies and oversees all of WMATA’s activities, including budgeting, operations, development and expansion, safety, procurement, and other activities. In addition, a number of local, regional, and federal external organizations affect WMATA’s decision-making, including: (1) state and local governments, which subject WMATA to a range of laws and requirements; (2) the Tri-State Oversight Committee, which oversees WMATA’s safety activities and conducts safety reviews; (3) the National Capital Region Transportation Planning Board (TPB) of the Metropolitan Washington Council of Governments, which develops the short- and long-range plans that guide WMATA’s capital investments; (4) the Federal Transit Administration (FTA), which provides oversight of WMATA in many areas; and (5) the National Transportation Safety Board, which investigates accidents on transit systems as well as other transportation modes. 
WMATA estimates that its combined rail and bus ridership will total 324.8 million passenger trips in fiscal year 2001, making it the second largest heavy rail rapid transit system and the sixth largest bus system in the United States, according to WMATA officials. WMATA’s proposed fiscal year 2002 budget totals nearly $1.9 billion. Of the total amount, about 56 percent, or $1.06 billion, is for capital improvements; 42 percent, or $796.6 million, is for operations and maintenance activities; and the remaining 2 percent, or $37 million, is for debt service and other projects. WMATA’s funding comes from a variety of federal, state, and local sources. Unlike most other major urban transit systems, WMATA does not have dedicated sources of revenues, such as local sales tax revenues, that are automatically directed to the transit authority. WMATA receives grants from the federal government and annual contributions by each of the local jurisdictions that WMATA serves, including the District of Columbia and the respective local jurisdictions in Maryland and Virginia. For example, in its fiscal year 2002 proposed operating budget totaling $796.6 million (for rail, bus, and paratransit services), WMATA projects that approximately 55 percent of its revenues will come from passenger fares and other internally generated revenues, and 45 percent will come from the local jurisdictions served by WMATA. With regard to its capital program for infrastructure renewal, WMATA projects that about 47 percent of its proposed 2002 budget will come from federal government grants, 38 percent from federally guaranteed financing, and 15 percent from the local jurisdictions and other sources. WMATA has also received funding directly through the congressional appropriations process over the past 30 years— totaling about $6.9 billion—for construction of the originally planned subway system. 
WMATA did not have to compete against other transit agencies for this funding, which ended in fiscal year 1999. Metrorail’s expenses and revenues represent the largest portion of WMATA’s operating budget. For example, in fiscal year 2000—the latest year for which final actual figures are available—Metrorail’s operating expenses accounted for 56 percent, or $392.1 million, of WMATA’s overall operating costs of $704.8 million. At the same time, Metrorail’s passenger fares and other revenues accounted for about 76 percent, or $292.5 million, of WMATA’s overall internally generated revenues of $384.9 million. As a measure of financial performance, Metrorail’s cost recovery ratio (revenues divided by expenses) represents one of the highest of any rail transit system in the nation, according to FTA. For example, during fiscal years 1996 through 2000, Metrorail recovered, on average, 73 cents for every dollar that WMATA spent to operate and maintain the rail system. With regard to capital investment issues, GAO issued a report in December 1998 that identified capital decision-making principles and practices used by outstanding state and local governments and private sector organizations. In order to evaluate the extent to which WMATA followed best practices in planning, selecting, and budgeting for its capital investments, we compared WMATA’s practices with those of leading public and private organizations that we studied in 1998. Accordingly, in this report, we assess the extent to which WMATA (1) integrates its organizational goals into the capital decision-making process through structured strategic planning and needs determination processes, (2) uses an investment approach to evaluate and select capital assets, and (3) maintains budgetary control over its capital investments. One of the key operating challenges facing Metrorail has been the increasing problems caused by the advancing age of its existing infrastructure. 
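As a purely illustrative aside, the cost recovery ratio described above (revenues divided by expenses) can be computed directly from the fiscal year 2000 figures reported in this letter; the following Python sketch assumes only those numbers and is not part of WMATA's or FTA's methodology:

```python
# Purely illustrative: Metrorail's cost recovery ratio (revenues / expenses),
# using the fiscal year 2000 figures reported above, in millions of dollars.
revenues = 292.5   # Metrorail passenger fares and other revenues, FY2000
expenses = 392.1   # Metrorail operating expenses, FY2000
ratio = revenues / expenses
print(f"FY2000 cost recovery ratio: {ratio:.2f}")
```

For fiscal year 2000 alone this yields roughly 0.75; the 73-cent figure cited above is an average over fiscal years 1996 through 2000.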
Metrorail has experienced vehicle, escalator, elevator, and other system equipment and infrastructure problems over the past several years. These problems have resulted in, among other things, an increasing number of train delays. For example, the number of train delays due to system problems increased from 865 in fiscal year 1996 to 1,417 in fiscal year 2000, or by about 64 percent. WMATA attributes these problems primarily to its aging rail equipment and infrastructure. Forty-five percent of Metrorail’s 103-mile system is from 17 to 25 years old, and another 33 percent is from 9 to 16 years old. Similarly, 39 percent of Metrorail’s 762-car fleet has been operating since 1976; another 48 percent went into service during the 1980s. WMATA has estimated that the expected useful life of a rail car is 40 years if a major renovation is performed at the midpoint of the car’s life cycle. WMATA is addressing Metrorail’s equipment and infrastructure problems through a number of projects in its capital-funded Infrastructure Renewal Program (IRP), described in detail later in this letter. One key IRP project—the Emergency Rail Rehabilitation Program—is focused on improving Metrorail’s service reliability problems. Through this program, now in its second year, WMATA has made significant progress in implementing many rail system improvement projects. For example, by August 2000, WMATA had completed almost all of the program’s accelerated car maintenance projects on such critical components as brakes and doors on over 600 rail cars. In addition, WMATA’s statistics show that for the period covering July 2000 through January 2001, the number of passenger offloads had decreased by 15 percent, compared with the same period in the previous year. In particular, WMATA officials noted that offloads during the spring “Cherry Blossom Season” in the metropolitan Washington, D.C., area, decreased, on average, from 9 per weekday in 1999 to 4.8 per weekday in 2001. 
Furthermore, by June 2000, work was under way to maintain and rehabilitate 170 station escalators. IRP includes other key projects, such as the rail car rehabilitation project, which will enhance the reliability of 364 cars that were built in the 1980s. These cars will be overhauled and rehabilitated under a 5-year contract awarded in December 2000. WMATA expects to take delivery of the first rehabilitated cars in August 2002. Metrorail also faces another significant operating challenge brought about by ever-increasing ridership. Metrorail is now operating at near capacity during peak demand periods, causing some uncomfortably crowded trains. WMATA’s recent studies on crowding found that demand has reached and, in some cases, exceeded scheduled capacity—an average of 140 passengers per car—during the peak morning and afternoon hours. For example, of the more than 200 peak morning trips that WMATA observed over a recent 6-month period, on average, 15 percent were considered “uncomfortably crowded” (125 to 149 passengers per car), and 8 percent had “crush loads” (150 or more passengers per car). Metrorail’s overcrowded conditions are primarily the result of the substantial growth in ridership it has experienced over the last several years, an insufficient number of rail cars to operate more and longer trains on a regular basis, and system and other constraints on expanding rush-hour trains from six cars to eight cars—the maximum size that station platforms can accommodate. WMATA has several actions under way to ease Metrorail’s overcrowded conditions. Most notably, the agency ordered 192 new rail cars that it had expected to begin deploying in the summer of 2001. We note, however, that WMATA suffered a setback in June 2001 when it took action to delay delivery of these cars until the rail car contractor corrects technical problems. As of late June 2001, WMATA officials told us that they expect to begin phasing the first new cars into service by the fall of 2001. 
Over the next year or so, WMATA plans to deploy the majority of these cars where and when the heaviest ridership is occurring, allowing for adjustments to train sizes. For example, on some lines, the train size will change from four cars to six cars. WMATA is also examining Metrorail’s core capacity needs to determine, among other things, what improvements in capacity—cars and power, for example—will be required to operate eight-car trains on a regular basis during peak demand periods. WMATA expects to complete this study in the fall of 2001. Finally, Metrorail’s maintenance and repair shop capacity could be challenged as early as the fall of 2001 with the delivery of the first group of new rail cars. Depending on the number of cars that can be repaired outside of the shops, WMATA could need up to 126 repair shop spaces, or 12 more than the 114 spaces that would be available for scheduled maintenance and unscheduled repairs at that time. Furthermore, Metrorail’s repair shop capacity may be exhausted and could become even more of a problem after the fall of 2002, when delivery of the remaining new cars is expected to be completed. In addition, WMATA plans to acquire a total of at least 94 additional rail cars to accommodate new revenue service on the Largo extension to the Blue Line in Maryland (which is currently under construction); increased demand on the Orange Line in Virginia due to service expansion; and service growth on other existing rail lines, thus adding to the maintenance and repair shop capacity problem. Although WMATA officials believe that the agency’s current shop capacity may not be favorable for the expeditious turnaround of vehicles requiring maintenance and repair, they pointed out that they are taking steps to ease the capacity problem. 
For example, in the near term, WMATA has four “blow down pits”—spaces in its largest shops used to clean the underside of a car prior to its scheduled maintenance—that can also be used for maintenance and repair. In addition, WMATA plans to open a new facility in 2002 that will expand its current shop capacity to accommodate 126 rail cars. At the same time, however, WMATA recognizes that it currently does not have the capacity to maintain and repair the additional cars for the Largo extension. WMATA is taking two actions to address this problem. First, WMATA is surveying its existing shops to determine whether their capacity can be expanded. The agency expects to complete the survey in the fall of 2001, possibly beginning expansion efforts as early as 2002. Second, WMATA plans to build a new repair shop in the Dulles Corridor. However, this facility would not be available until about 2010, when construction of the Dulles Corridor extension is to be completed. WMATA has established programs to address safety and security risks that affect its rail and bus systems. WMATA’s safety program has evolved since the mid-1990s, when a series of rail accidents and incidents led to several independent reviews that cited the need for program improvements. For example, in 1997, FTA reported the results of a safety review it performed of WMATA’s rail activities in response to several serious accidents and incidents that occurred in 1996. The review concluded that WMATA had not adequately maintained a planned approach to safety program tasks or dedicated appropriate financial and personnel resources to accomplish these tasks. In addition, FTA found that WMATA’s safety efforts had been weakened by frequent changes in the organizational reporting level of its safety department and a deemphasis of safety awareness in public and corporate communications. 
The review also found that WMATA’s safety department had been moved from place to place in the organization, making its work difficult, its priorities uncertain, and its status marginal. Under a newly formed state safety oversight program, the leadership of a new General Manager, and a new bus transit safety program, WMATA has responded to these criticisms by upgrading and enhancing its safety activities. For example, the current General Manager made safety a priority by reviewing the transit authority’s safety function and revising its system safety program plan, which contains detailed protocols for identifying and assessing hazards. WMATA’s safety plan also includes requirements for identifying, evaluating, and minimizing safety risks throughout all elements of the WMATA rail and bus systems. The plan also identifies management and technical safety and fire protection activities to be performed during all phases of bus and rail operations. In addition, WMATA’s current General Manager delegated specific safety responsibilities to the transit agency’s Chief Safety Officer who reports directly to the General Manager and is now responsible for (1) managing system safety, occupational safety and health, accident and incident investigation, and fire protection; (2) overseeing construction safety and environmental protection; and (3) monitoring the system safety program plan. By elevating its internal safety organization and increasing its emphasis on safety activities, WMATA has given safety a higher degree of attention and priority. More recently, following a serious tunnel fire in 2000, WMATA created a safety task force to review its operations control center’s handling of the incident. In addition, WMATA’s General Manager asked the American Public Transportation Association (APTA) to conduct a comprehensive peer review of the transit agency’s emergency procedures for handling tunnel fires. 
APTA’s findings and recommendations, in several ways, confirmed the findings identified in WMATA’s internal investigation. For instance, both investigations supported the need for efforts to formalize and strengthen training for operations control center personnel and ensure that emergency procedures are addressed in the training and certification of operations staff. The two reviews made 32 recommendations concerning, among other things, communications policy and training. At the time of our review, WMATA had taken actions to implement 30 of the 32 recommendations, including providing training to its staff on communicating more effectively with fire authorities and opening a fire training center for WMATA employees and local firefighters. WMATA is in the process of addressing the other two recommendations. Despite a recent rise in the number of rail and bus safety incidents, which WMATA attributes to the large increase in rail and bus ridership and the recent hiring of many new bus drivers, APTA and FTA now believe that WMATA has a “very good” safety program as evidenced by the low injury rates on both its rail and bus systems. For example, WMATA has experienced low injury rates in its rail stations over the last 5 years—on average, only 0.37 injuries per 1 million passenger miles. Very few of these injuries were serious or fatal. However, the absolute number of rail station injuries increased from 366 in fiscal year 1999 to 474 in fiscal year 2000, and the rail station injury rate increased from 0.34 to 0.43 for the same 2 years. WMATA documents also show that about 50 percent of all rail injuries occurred on escalators. According to WMATA’s Chief Safety Officer, the majority of these incidents are caused mainly by human factors rather than by equipment failure, employee performance, or unsafe conditions. In fiscal years 1999 and 2000, for example, WMATA’s records show that no escalator incidents were caused by electrical or mechanical failure or unsafe conditions. 
WMATA is promoting escalator safety by conducting public awareness campaigns and adding safety devices. As with his initiatives affecting WMATA’s safety program and plan, WMATA’s General Manager has delegated authority to WMATA’s Chief of Police to plan, direct, coordinate, implement, and evaluate all police and security activities for the transit agency. WMATA’s Chief of Police heads the Metro Transit Police Department, which has an authorized strength of 320 sworn and 103 civilian personnel. The Department has jurisdiction and arrest powers on WMATA property throughout the 1,500-square-mile transit zone that includes Maryland, Virginia, and the District of Columbia. WMATA’s Metro Transit Police Department addresses security through its system security program plan, participates in external security reviews, and collects and evaluates crime statistics. To emphasize the importance of system security, the Department established a set of comprehensive security activities in its system security program plan. The plan is designed to maximize the level of security experienced by passengers, employees, and other individuals who come into contact with the transit system; to minimize the cost associated with the intrusion of vandals and others into the system; and to make the transit system more proactive in preventing and mitigating security problems. WMATA has also participated in FTA’s voluntary transit security audit program, and FTA officials have concluded that WMATA’s overall security program demonstrates a high level of attention to passenger and employee security. WMATA statistics indicate that serious crimes such as homicide and rape occur rarely on the transit system. During the period from 1996 through 2000, no rapes occurred, and there were two murders in the system. Most of the crimes committed in the transit system are far less serious, such as disorderly conduct and trespassing. 
More of the crimes are committed in the system’s parking lots than on the rail and bus system, and more crimes are committed on the rail system than on the buses. Some crimes, such as motor vehicle theft and robbery, increased somewhat from 1999 to 2000. To address those increases and the problem of crime in its parking lots, WMATA has increased undercover patrols of parking lots and rail stations. WMATA operates in a complex environment that makes capital decision-making difficult. For example, unlike most other major urban transit systems, WMATA does not have a dedicated revenue source to fund its capital programs, thus subjecting the agency to the appropriations processes of the federal, state, and local governments that fund its programs. In addition, WMATA’s General Manager and staff must achieve consensus and obtain final approvals for the agency’s capital projects from many organizations and government levels, including its own Board of Directors; numerous local and state jurisdictions within the District of Columbia, Maryland, and Virginia that the transit agency serves; the TPB of the Metropolitan Washington Council of Governments; the Federal Transit Administration; and the U.S. Congress, which has provided WMATA with funding over the years to build its Metrorail system. In spite of these challenges, WMATA has incorporated some of the best capital investment practices followed by leading public and private sector organizations. We believe that WMATA could benefit by building on those practices by formalizing some aspects of its capital decision-making process and expanding its strategic and capital planning efforts. WMATA created a Capital Improvement Program in November 2000 to consolidate its ongoing and planned capital improvement activities. 
This program has three elements to address all aspects of the agency’s capital investments, including (1) an Infrastructure Renewal Program (IRP) for system rehabilitation and replacements, (2) a System Expansion Program (SEP), and (3) a System Access and Capacity Program (SAP). First, IRP is designed to rehabilitate or replace WMATA’s existing assets, including rail cars, buses, maintenance facilities, tracks, and other structures and systems. IRP is estimated to cost $9.8 billion over the next 25 years. Second, SEP is designed to expand fixed guideway services, selectively add stations and entrances to the existing Metrorail system, and improve bus service levels and expand service areas. WMATA has not yet estimated the total costs associated with its planned SEP projects. Third, SAP—which is estimated to cost about $2.5 billion over the next 25 years—was established to improve access to and the capacity of the transit system by providing additional rail cars and buses, parking facilities, and support activities to accommodate ridership growth. It also includes the study to determine the modifications needed to Metrorail’s core capacity to sustain current and future ridership volumes. WMATA expects to complete this study in the fall of 2001. As noted earlier, our December 1998 report identified capital decision-making principles and practices used by outstanding state and local governments and private sector organizations, and we compared WMATA’s practices with those of these leading organizations. 
Accordingly, in this report, we assess the extent to which WMATA (1) integrates its organizational goals into the capital decision-making process through structured strategic planning and needs determination processes, (2) uses an investment approach to evaluate and select capital assets, and (3) maintains budgetary control over its capital investments. Table 1 describes the best practices that were applied within each of these three areas, which the 1998 GAO report categorized as “principles” used by leading organizations to make capital investment decisions. In our December 1998 report, we found that leading organizations begin their capital decision-making process by defining their overall mission in comprehensive terms and multiyear goals and objectives. This enables managers to identify the resources needed to satisfy the organization’s program requirements on the basis of the program’s goals and objectives. To do this, an organization must have identified its mission and goals through a strategic planning process. To assist with identifying any gap between an organization’s resource needs and its existing capital capabilities, leading organizations maintain systems that capture and report information on existing assets and facilities. This information is frequently updated and accessible to decisionmakers when needed. Leading organizations also consider a full range of possible ways to achieve the organization’s goals and objectives, including examining both capital and noncapital alternatives. WMATA has articulated an overall organizational mission statement and a goal of doubling ridership by the year 2025 and is beginning to develop a business planning process. It has not, however, fully developed a strategic planning process that defines multiyear goals and objectives and clearly links its project outcomes—including capital projects—to achieving those goals and objectives. 
In particular, WMATA has not developed a formal strategic plan that defines multiyear goals and objectives for the agency, nor does it have annual performance plans that explain the specific ways in which WMATA will attempt to achieve those goals and objectives. WMATA has completed a comprehensive assessment of its infrastructure renewal requirements, and it is in the process of assessing its system capacity requirements. With regard to its System Expansion Program, however, it has not conducted a comprehensive needs assessment, although it does consider regional transportation needs, costs, and benefits before deciding to support proposed expansion projects. For example, WMATA has established a “Project Development Program” to develop conceptual designs, “order of magnitude” cost estimates, and other information on some of the proposed projects contained in the expansion program. WMATA plays a limited role in analyzing and evaluating alternatives for meeting its system expansion needs. This limited role stems from its relationships with (1) TPB, which plays a key role in developing, coordinating, and approving plans for all regional transportation needs and alternatives including transit, highways, and other transportation modes; and (2) the state and local jurisdictions served by WMATA, which have the lead role in identifying and evaluating transit expansion alternatives within a specific “corridor” or subarea of the Washington metropolitan area. After leading organizations identify their strategic goals and objectives and assess alternative ways of meeting their capital needs, they go through a process of evaluating and selecting capital assets using an investment approach. An investment approach builds on an organization’s assessment of where it should invest its resources for the greatest benefit over the long term. 
Establishing a decision-making framework that encourages the appropriate levels of management review and approval is a critical factor in making sound capital investment decisions. These decisions are supported by the proper financial, technical, and risk analyses. Leading organizations not only establish a framework for reviewing and approving capital decisions, they also have defined processes for ranking and selecting projects. Furthermore, they also develop long-term capital plans that are based on the long-range vision for the organization embodied in its strategic plan. WMATA has incorporated several elements of an investment approach to evaluating and selecting capital improvement projects, but the agency could benefit from a more formal, disciplined decision-making framework. With regard to its program for infrastructure renewal, WMATA officials told us that all appropriate managers were involved in deciding which projects should be selected after a comprehensive needs assessment was performed in March 1999. WMATA also performed a one-time ranking of those projects on the basis of preestablished criteria, including asset function, condition, and other factors. However, WMATA has not established a formal executive-level review group within the agency for making decisions on capital projects, nor does it have formal procedures or a standard decision package for considering the relative merits of its capital projects each year. Also, WMATA officials told us that they play a relatively small role in proposing, evaluating, and selecting system expansion projects. They said that the decisions on such projects are generally driven by the state and local jurisdictions sponsoring the projects. 
WMATA has contacted state and local transportation executives from Maryland, Virginia, and the District of Columbia to explore ways to increase WMATA’s involvement in conducting alternatives analyses for system expansion projects, thereby increasing its influence on those decisions. Furthermore, although WMATA has performed a comprehensive assessment of infrastructure renewal requirements and has taken a first step in outlining system expansion needs, it has not developed a comprehensive long-term capital plan that defines and justifies its internal capital asset decisions for all of the capital projects falling within WMATA’s Capital Improvement Program. Such a plan would allow WMATA to define its strategy and justification for selecting each capital project and would provide baseline information on each project’s life-cycle costs and schedules, performance requirements, benefits, and risks. A more formal long-term capital planning process allows an organization to establish priorities and assist with developing current and future budgets. A well-thought-out review and approval framework can also mean that capital investment decisions are made more efficiently and are supported by better information. Furthermore, were WMATA to develop a more disciplined decision-making framework—with documented support for the alternatives that WMATA favors—the agency would potentially have more influence with the federal government and state and local jurisdictions that ultimately decide whether to provide funding for projects. Finally, officials at leading organizations that GAO studied agreed that good budgeting requires that the full life-cycle costs of a project be considered when an organization is making decisions to provide resources. This practice permits decisionmakers to compare the long-term costs of spending alternatives and to better understand the budgetary and programmatic impact of decisions. 
Most of those organizations make a commitment to the full cost of a project up front and have developed alternative methods for maintaining budgetary control while allowing flexibility in funding. One strategy they use is to budget for and provide advance funding sufficient to complete a useful segment of a project. A useful segment is defined as a component that (1) provides information that allows an agency to fully plan a capital project before proceeding to full acquisition or (2) results in a useful asset for which the benefits exceed the costs even if no further funding is appropriated. Another strategy used by some leading organizations is to use innovative financing techniques that provide new sources of funding or new methods of financial return. WMATA uses many of the funding strategies followed by leading organizations. For example, to comply with requirements imposed by FTA and its predecessor agencies, WMATA completed its Metrorail system by negotiating for funding in useful or “operable” segments. Furthermore, the agency has used a wide variety of innovative capital financing techniques to fund its Capital Improvement Program (CIP) and operations activities and to leverage its capital assets to generate additional income. However, WMATA faces a number of uncertainties in obtaining the funding it believes it needs to meet its capital requirements, and the agency has not developed plans that describe how it would address large anticipated funding shortfalls in its programs for infrastructure renewal and system capacity. For example, WMATA has not developed alternate scenarios of how such funding shortfalls would be absorbed by the various asset categories under the Infrastructure Renewal Program or by the projects identified under the System Access and Capacity Program. The funding shortfalls are anticipated to total $3.7 billion over the next 25 years and represent an average annual shortfall of about $150 million for both programs. 
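The report’s shortfall arithmetic can be checked with a short sketch. The figures are taken directly from the text; the variable names are illustrative, not WMATA’s:

```python
# Illustrative check of the report's figures: a $3.7 billion shortfall
# over 25 years works out to roughly $150 million per year, as stated.
TOTAL_SHORTFALL = 3.7e9   # dollars, combined IRP and System Access/Capacity programs
YEARS = 25

average_annual = TOTAL_SHORTFALL / YEARS
print(f"Average annual shortfall: ${average_annual / 1e6:.0f} million")  # $148 million
```

The exact quotient is $148 million per year, which the report rounds to "about $150 million."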
Furthermore, the budget shortfall could significantly increase when WMATA completes its ongoing assessment of Metrorail’s core capacity in the fall of 2001. Our review showed that WMATA has identified the operational and safety challenges it faces and has established sound policies, programs, and practices to meet those challenges. We note that in the operations and maintenance area, WMATA is in some ways a “victim” of its own success in that its challenges have largely resulted from ever-increasing passenger ridership demands, along with the inevitable aging of its equipment and infrastructure. In the safety and security area, WMATA has established programs to identify, evaluate, and minimize risks throughout its bus and rail systems; and it has upgraded its safety organization in recent years to improve its internal management and oversight of safety problems. WMATA has low incident rates of injury and serious crime on both its rail and bus systems. As a result, WMATA is viewed by outside organizations, such as FTA and APTA, as having very good safety and security programs. To address its long-term capital needs, WMATA has established a Capital Improvement Program that incorporates some of the best capital investment practices followed by leading public and private sector organizations. However, we believe that WMATA could benefit by building on those practices by formalizing some aspects of its capital decision-making process and by expanding its strategic and capital planning efforts. For example, by developing a multiyear strategic plan and annual performance plans, WMATA could more clearly define its vision, direction, strategies, and priorities—not only for capital planning and decision-making, but for all aspects of its activities.
Also, WMATA could benefit from establishing a consolidated capital plan that would allow the agency and its external stakeholders to weigh and balance the need to maintain its existing capital assets against the demand for new assets. A more active role for WMATA in capital planning would provide better information for federal, state, and local decisionmakers that fund WMATA’s projects and would increase WMATA’s influence with them. Similarly, a more formal internal review and approval process for making capital decisions within WMATA—including formal procedures and standard decision packages for considering the relative merits of various capital projects each year—would help WMATA establish priorities, develop budgets, and facilitate periodic reviews of all ongoing and proposed projects. It would also provide greater assurance of continuity within the organization if key managers move to other positions or leave the agency. In addition, WMATA could provide valuable analysis and insights through a more active role in identifying and evaluating alternatives for system expansion projects. Leading organizations consider such alternatives analysis to be a critical factor in making sound capital investment decisions. Because the state and local jurisdictions take the lead in identifying and deciding on expansion projects, WMATA often does not become involved in crucial early decisions about the most appropriate and efficient ways to expand the system. WMATA is exploring ways to increase its involvement in conducting alternatives analyses for system expansion projects, thereby increasing its influence on those decisions. We support WMATA’s efforts in this area. Finally, WMATA has not fully planned how it will address large anticipated funding shortfalls in its programs for infrastructure renewal and system access and capacity. 
WMATA officials expressed concerns that developing such plans could undermine their efforts to obtain what they believe is the required funding amount for the two capital programs. In our view, however, prudent management requires that the agency identify the steps needed to address any anticipated shortfalls and develop alternate plans for carrying out its capital activities, depending on the level of funding obtained from federal, state, and local sources. To improve the agency’s strategic planning and capital investment practices, we recommend that WMATA’s General Manager and Board of Directors take the following actions:

Develop a long-term strategic plan and annual performance plans that clearly define the agency’s multiyear goals and objectives and its specific plans for achieving those goals and objectives.

Develop a long-term capital plan that covers all three programs of its newly formed consolidated Capital Improvement Program (Infrastructure Renewal Program, System Expansion Program, and System Access and Capacity Program). This plan should: document WMATA’s capital decision-making strategy and link it to the agency’s overall goals and objectives; define each project’s justification and its baseline life-cycle costs, schedule, performance requirements, benefits, and risks; include alternate funding strategies and project outcomes, depending on the availability of funding from federal, state, and local sources; and be updated annually or biennially.

Formalize WMATA’s capital decision-making process for the consolidated Capital Improvement Program by establishing and documenting an internal review and approval framework and standard procedures and decision packages for analyzing and deciding on projects.
Develop a process and procedures—in consultation with the TPB and the state and local jurisdictions served by WMATA—for taking a more active role in (1) identifying, analyzing, and evaluating alternatives for expanding WMATA’s transit system; and (2) proposing the most efficient and cost-effective projects for expanding the system. We provided the Department of Transportation and WMATA with draft copies of this report for their review and comment. The Department of Transportation had no comments on the report. WMATA concurred with all of our major recommendations aimed at improving the agency’s strategic planning and capital investment practices and indicated that it has already taken steps to begin implementing some of our recommendations. WMATA did not agree with the subpart of our second recommendation that calls for developing alternative capital funding strategies and project outcomes, depending on the availability of funding from federal, state, and local sources. WMATA cited its concern that developing contingency plans for addressing anticipated budgetary shortfalls would encourage its funding agencies to reduce the level of resources provided to WMATA. We continue to believe, however, that prudent management requires WMATA to plan for such shortfalls, which could be significant after WMATA completes its assessment of Metrorail’s core capacity in the fall of 2001. WMATA’s comments and our response are located in appendix V. Our work was primarily performed at WMATA headquarters (see app. IV for a detailed description of our scope and methodology). We conducted our work from September 2000 through June 2001 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the General Manager, WMATA; the Honorable Norman Y.
Mineta, Secretary of Transportation; Hiram J. Walker, Acting Deputy Administrator, Federal Transit Administration; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. We will make copies available to others on request. If you have any questions about this report, please call me at (202) 512-2834 or Ronald E. Stouffer at (202) 512-4416. GAO contacts and staff acknowledgments are listed in appendix VI. The Washington Metropolitan Area Transit Authority (WMATA) operates and maintains the second largest rail transit system in the United States, as measured by the number of passengers carried per year. In fiscal year 2000 (July 1, 1999, through June 30, 2000), Metrorail carried about 163.3 million passengers. For the 9-month period ending in the third quarter of fiscal year 2001, ridership increased by almost 6 percent compared to the same period in fiscal year 2000. Metrorail’s operations and maintenance activities are extensive, including all activities required to operate and maintain the equipment and entire infrastructure that supports the movement of passengers. The Metrorail system, consisting of 103 miles of track, 83 stations, and 5 separate rail lines, operates 7 days a week, providing 18.5 hours of service each weekday and 18 hours daily on weekends. System maintenance activities include such things as cleaning, scheduled (preventive) maintenance, unscheduled repairs, and some upgrades. These maintenance activities are performed on a broad range of equipment and facilities that includes 762 rail cars; 103 miles of subway, surface, and elevated track; 576 escalators; 180 station elevators; 1,937 fare collection machines; 6 maintenance facilities; and other elements of the system’s infrastructure. Metrorail’s revenues and expenses represent the largest portion of WMATA’s overall operating budget.
For example, in fiscal year 2000, Metrorail’s operating expenses accounted for $392.1 million, or 56 percent of WMATA’s overall operating expenses of $704.8 million. Furthermore, Metrorail brings in the largest portion of WMATA’s internally generated operating revenues. In fiscal year 2000, for example, Metrorail’s passenger fares and other revenues accounted for about $292.5 million, or 76 percent of WMATA’s overall internally generated revenues of $384.9 million. As a measure of financial performance, Metrorail’s cost recovery ratio represents one of the highest of any rail transit system in the nation, according to the Federal Transit Administration (FTA). For example, during fiscal years 1996 through 2000, Metrorail recovered, on average, 73 cents for every dollar that WMATA spent to operate and maintain the rail system. Metrorail has experienced vehicle, escalator, elevator, and other system equipment and infrastructure problems over the past several years. Data provided by WMATA show that vehicle, track, system, and other problems have resulted in, among other things, an increasing number of train delays and passenger “offloads.” For example, the number of train delays due to such problems increased from 865 in fiscal year 1996 to 1,417 in fiscal year 2000, or by about 64 percent. At the same time, the number of passenger offloads increased by about 55 percent—from 783 in fiscal year 1996 to 1,212 in fiscal year 2000. WMATA attributes these problems primarily to its aging rail equipment and infrastructure. Forty-five percent of Metrorail’s 103-mile system is from 17 to 25 years old. Another 33 percent is from 9 to 16 years old. Only 22 percent is relatively new—constructed within the past 8 years. Similarly, 39 percent of Metrorail’s 762 rail car fleet has been operating since 1976. Another 48 percent went into service during the 1980s, and only 13 percent is relatively new—placed into service in the mid-1990s. 
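A cost recovery ratio is simply internally generated revenues divided by operating expenses. The sketch below applies that computation to the FY2000 figures cited above; note that the report’s 73-cent figure is a five-year (FY1996–2000) average, so a single year will differ slightly. Variable names are illustrative:

```python
# Hedged sketch of the cost recovery computation using the report's
# FY2000 Metrorail figures (revenues and operating expenses in dollars).
rail_revenues = 292.5e6   # Metrorail passenger fares and other revenues, FY2000
rail_expenses = 392.1e6   # Metrorail operating expenses, FY2000

recovery_ratio = rail_revenues / rail_expenses
print(f"FY2000 cost recovery: {recovery_ratio:.0%}")  # about 75 percent
```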
Further, an assessment of the condition of Metrorail’s equipment and infrastructure performed in 1998 found that certain assets, such as stations and tunnels, were in a “degraded” condition, suffering from, among other things, deferred maintenance and postponement of rehabilitation due to funding constraints. The general effect of deferring maintenance is a lowering of overall system quality; an increase in the cost of regular and corrective maintenance; and an increase in the cost of rehabilitation work, when it is finally performed. WMATA is addressing Metrorail’s equipment and infrastructure problems through a number of projects in its Infrastructure Renewal Program (IRP). One key project being carried out under IRP is the Emergency Rail Rehabilitation Program, which is focused on reducing the serious service reliability problems—including problems with rail car equipment, train “wayside relays,” and customer communications—highlighted in the spring of 1999. Now in its second year, this program has a number of goals, including reducing train delays and passenger offloads by 50 percent. To achieve these goals, the program provides for, among other things, accelerated maintenance projects to correct performance problems on the fleet’s oldest rail cars, with all work scheduled to be completed by November 2003. The program also provides for additional maintenance efforts on station escalators and improvements in critical facilities and communication equipment, including additional fare gates and upgraded passenger announcement systems. WMATA has made significant progress in carrying out many of the emergency program’s rail system improvement projects. For example, by August 2000, WMATA had completed almost 8 of 12 car maintenance projects on such critical components as brakes and doors on 662 rail cars. 
Furthermore, WMATA’s statistics show that for the period covering July 2000 through January 2001, the number of passenger offloads had decreased by 15 percent, compared with the same period in the previous year. In particular, WMATA officials noted that offloads during the spring “Cherry Blossom Season” in the metropolitan Washington, D.C., area decreased, on average, from 9 per weekday in 1999 to 4.8 per weekday in 2001. Other examples of WMATA’s progress under the emergency program include the award of a contract for maintenance and rehabilitation of 170 station escalators; installation of rail system scanners at all station kiosks for status monitoring by station managers, allowing them to respond to passenger inquiries with real-time information on incidents and delays; installation of electronic display signs on station platforms, showing train arrivals and service delays; and installation of 44 additional fare gates. In addition to the emergency rehabilitation program, IRP includes other key projects that address Metrorail’s aging equipment and infrastructure problems. One of these—the rail car rehabilitation project—will enhance the reliability of 364 cars that were built in the 1980s. These cars will be overhauled and rehabilitated under a contract awarded in December 2000. The work, according to WMATA, will greatly reduce the cars’ energy consumption and maintenance costs and improve their overall reliability. WMATA expects to take delivery of the first rehabilitated cars in August 2002. Work on all of the cars is expected to be completed by summer 2005. Another key IRP project addresses the water infiltration problem that has occurred within the rail system’s tunnels and stations. This problem has resulted in the degradation of critical wayside systems and equipment that are housed in the tunnels and stations, including automatic train control, communications, power equipment, cabling, and lighting. 
WMATA’s leak remediation project will control the infiltration of water while a related project will provide drainage in locations with standing water or extreme water infiltration. As of March 2001, approximately 3,700 leaks had been repaired out of about 4,600 scheduled for repair by the end of June 2001. In addition, WMATA has an ongoing multiyear contract to rehabilitate 14 drainage-pumping stations. By March 2001, the work on one pumping station had been completed and work on two others was beginning. Some of the other IRP projects directed at improving the rail system include the following:

Rail car enhancements: This project is designed to improve the accessibility, safety, maintenance, appearance, and reliability of the rail car fleet by retrofitting or replacing certain rail car equipment such as intercar barriers.

Station enhancements: This project includes the rehabilitation, replacement, and installation of, among other things, concrete structures, sidewalks, stairwells, stairways, and exterior lighting to maintain the integrity of the stations’ structures, prevent additional deterioration, and provide a safe environment for passengers.

Automatic train control (ATC) and power systems rehabilitation: This project addresses the need to rehabilitate the ATC equipment and replace worn-out, obsolete electrical systems with new components that use new technology and save energy.

Track and structures rehabilitation: This project is being carried out to control the corrosion and deterioration of track, tunnels, and elevated structures due to the effects of weather and water infiltration, among other things.

WMATA also faces operating challenges brought about by ever-increasing ridership. Metrorail is now operating at near capacity during peak demand periods, causing some uncomfortably crowded trains.
WMATA has several actions under way to ease Metrorail’s overcrowded conditions, including adding new rail cars to the system, which will allow Metrorail to increase the size of some trains. Metrorail’s current passenger load standards allow for an average of 140 passengers per car on all trains and 155 passengers per car on any single train during peak demand periods. The current Metrorail fleet is composed of two types of cars. One type—the Rohr car—has a full-load capacity of 175, including 80 seated and 95 standing passengers. The other model—the Breda car—can also accommodate 175 passengers, but it has 12 fewer seats. For planning purposes, WMATA considers scheduled capacity—number of trains, cars per train, and intervals between trains—to be meeting ridership demands if the number of passengers in a car during the peak half-hour is, on average, 140 or fewer. An average greater than 140 indicates that demand is exceeding capacity. Demand is also considered to exceed capacity when an individual trip exceeds an average of 155 passengers per car consistently throughout a month. For the purpose of assessing rail service levels during peak demand periods, WMATA defines passenger loads and comfort levels as follows: (1) up to 99 passengers per car as “seated with some standing,” (2) 100 to 124 passengers as “crowded but comfortable,” (3) 125 to 149 passengers as “crowded and uncomfortable,” and (4) 150 or more passengers as “crush load.” In measuring Metrorail’s performance over the 6-month period ending in January 2001, WMATA observed 233 trips during the peak morning hour (7:45 to 8:45); an average of 15 percent of the train cars were uncomfortably crowded, and about 8 percent had crush loads. At the same time, WMATA found that of the 225 trips observed during the peak afternoon hour (5:00 to 6:00), an average of 15 percent of the train cars were uncomfortably crowded, and about 5 percent had crush loads.
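The load standards and comfort bands described above can be sketched as a simple classifier. The thresholds come from the report; the function names and structure are illustrative assumptions, not WMATA’s planning tools:

```python
# Sketch of WMATA's peak-period passenger-load bands as described in the
# report. Function names are illustrative, not WMATA's.
def comfort_level(passengers_per_car: int) -> str:
    if passengers_per_car <= 99:
        return "seated with some standing"
    if passengers_per_car <= 124:
        return "crowded but comfortable"
    if passengers_per_car <= 149:
        return "crowded and uncomfortable"
    return "crush load"  # 150 or more passengers

# Primary planning standard: demand exceeds scheduled capacity when the
# peak half-hour average tops 140 passengers per car.
def demand_exceeds_capacity(avg_peak_load: float) -> bool:
    return avg_peak_load > 140

print(comfort_level(140))            # crowded and uncomfortable
print(demand_exceeds_capacity(141))  # True
```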
Metrorail’s overcrowded conditions are primarily the result of three separate but related factors. First, WMATA’s records show that Metrorail ridership has grown by about 10 percent over the past 4 years—from about 148 million passengers in fiscal year 1997 to 163.3 million in fiscal year 2000. According to WMATA, during fiscal year 2000, on average, 558,000 weekday trips were taken on Metrorail, with several months experiencing daily average trips in the 580,000 to 590,000 range. The number of Metrorail trips that occur in the peak periods has grown at an even greater rate. Morning peak period ridership has increased 16 percent from fiscal year 1997 to fiscal year 2000. During the morning and afternoon peak periods, almost 200,000 people, on average, used the Metrorail system in 2000. The second factor contributing to overcrowding is Metrorail’s lack of a sufficient number of rail cars to operate more and longer trains on a regular basis, without creating service and reliability problems. For example, in order to meet higher-than-expected ridership demands on the Green Line’s new Branch Avenue extension, WMATA had to reduce by 6 the number of cars required for its strategic “gap trains” and by 26 the number of cars in its operating spares inventory. Like gap trains, the operating spares also contribute to service reliability. By reducing the number of operating spares and gap trains, WMATA was able to increase the number and size of the trains operating on the Green Line without reducing service on its other four lines. However, in reducing the number of operating spares and gap trains, WMATA recognizes that it also increased the potential for service disruptions due to mechanical problems. Finally, if Metrorail had a sufficient number of vehicles to expand its rush-hour trains from six cars to eight cars, the trains would have more room to accommodate passengers, with the result that the most crowded trains would become more comfortable.
Although the Metrorail system was originally designed to accommodate eight-car trains, until recently, WMATA had been uncertain about whether longer trains could stop safely inside stations and whether the system had enough electricity to power longer trains on a regular basis. For example, all Metrorail cars are 75 feet long, and all station platforms measure 600 feet in length. This means that an eight-car train must stop precisely at the end of the platform for passengers to exit and enter the cars safely. To address concerns about whether the rail system can operate and accommodate longer trains on a regular basis, Metrorail began testing eight-car trains on each of its lines in December 2000. The results of these tests, presented to the Operations Committee of the Board of Directors in March 2001, indicate that eight-car trains could be operated in limited service only if additional vehicles—besides those currently on order—are purchased and improvements are made to the power system and automatic train control equipment. Further use of eight-car trains would require an even greater investment in these and other elements of the system, such as maintenance and storage capacity. WMATA is examining Metrorail’s core capacity needs to determine, among other things, what improvements in capacity—cars and power, for example—will be required to operate eight-car trains on a regular basis during peak demand periods. WMATA expects to complete this study in the fall of 2001. WMATA has other actions under way to ease Metrorail’s overcrowded conditions. Most notably, the agency has ordered 192 new rail cars that it had planned to phase into service over the 12-month period beginning in the summer of 2001. However, WMATA suffered a setback in June 2001 when it took action to delay delivery of these cars until the rail car manufacturer corrects technical problems.
As of late June 2001, WMATA officials told us that they now expect to begin phasing the first new cars into service by the fall of 2001. The majority of the new cars will be placed into service where the heaviest ridership is occurring and will allow WMATA to adjust train sizes. For example, on some lines, the train size will change from four cars to six cars. WMATA does not anticipate that the additional cars will have a large effect on passenger comfort levels, especially if ridership continues to grow; however, it believes the new cars will reduce the percentage of trips with crush loads. According to WMATA, the new cars were intended to address a 1-percent per year growth in ridership, but Metrorail has been averaging more than that. WMATA has also established goals for improving Metrorail’s passenger load standards and therefore passenger comfort levels. Although no time limit has been established for achieving these goals, they include reducing the primary load standard from 140 to 105 passengers per car on all trains—a 25-percent reduction—and reducing the secondary load standard from 155 to 115 passengers per car on any single train—a 26-percent reduction—during peak demand periods. WMATA’s maintenance and repair shop capacity could be stretched to near maximum levels as early as the fall of 2001 with the expected delivery of the first group of the 192 new rail cars. Furthermore, Metrorail’s repair shop capacity may be exhausted when delivery of the remaining rail cars is completed by the fall of 2002. WMATA is determining whether and how its current shop capacity could be expanded. WMATA’s ability to regularly maintain and repair its rail fleet directly affects the reliability and quality of Metrorail service. Currently, WMATA has 6 facilities with a total capacity to maintain and repair 118 cars daily. These facilities are located throughout the Metrorail system. 
The oldest and largest shop, opened in 1974, is 1 of 2 facilities able to service more than 20 cars each and perform heavy repairs and overhauls in addition to scheduled maintenance and unscheduled repairs. Of the remaining 4 facilities, 3 have the capacity to service 20 cars each; 1 facility has only 8 repair spaces. WMATA plans to open a new facility in 2002 that will expand its current capacity to accommodate 126 cars. As of March 2001, Metrorail’s total available fleet consisted of 762 cars. Given that WMATA has shop spaces for 118 cars and that some cars can be repaired outside of the shop, repair shop capacity in fiscal year 2000 was sufficient, for planning purposes, to support Metrorail’s maintenance and repair requirements. According to WMATA, the number of shop spaces required for maintenance and repairs equals the number of cars needed for revenue service, plus the number of spare cars (20 percent of the available fleet) needed for fleet management, multiplied by a factor of 15 percent (the typical number of cars held out of revenue service daily for maintenance and repairs). WMATA also considers the fact that about 15 percent of “running repairs”—repairs to address problems that occur while vehicles are in service—can be performed safely outside of the repair shop. WMATA typically holds about 112 rail cars out of service daily for maintenance and repair. However, WMATA officials told us that they expect to receive about 80 of the 192 new rail cars by the end of the fall of 2001, which will increase the available fleet size to 842 cars. Of the 80 new cars, 32 are required for service on the Green Line’s Branch Avenue extension. The remainder will be placed into revenue service where required. Thus, by the end of the fall of 2001, WMATA could need 126 repair shop spaces—15 percent of the 842-car fleet—or 8 more than capacity. 
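The shop-space planning rule described above can be sketched as a short computation. The 15-percent factor and fleet figures come from the report; the function and variable names are illustrative assumptions, not WMATA’s planning model:

```python
# Illustrative sketch of the shop-space rule described in the report:
# roughly 15 percent of the available fleet is held out of revenue
# service daily for maintenance and repairs.
MAINTENANCE_FACTOR = 0.15  # typical share of the fleet in the shop daily

def shop_spaces_needed(available_fleet: int) -> int:
    return int(available_fleet * MAINTENANCE_FACTOR)

fleet_fall_2001 = 762 + 80    # existing cars plus the first new-car deliveries
capacity_fall_2001 = 118      # shop spaces before the new facility opens in 2002

needed = shop_spaces_needed(fleet_fall_2001)
print(needed)                        # 126, matching the report's figure
print(needed - capacity_fall_2001)   # 8 spaces more than capacity
```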
Depending on the number of cars that can be repaired outside of the repair shop, shop capacity could be inadequate to meet requirements at that time. Further, because the new cars will require acceptance testing before they are placed into service, WMATA will have to relinquish four shop spaces to the manufacturer. Testing, which could require at least 5 days for each car, will be done at one of the larger facilities, where four shop spaces have been dedicated to the car manufacturer. As the balance of the new cars is delivered—10 cars per month over 11 months following the initial delivery in the fall of 2001—repair shop capacity could become even more of a problem by the fall of 2002. At that time, WMATA will have 126 shop spaces and the number of cars required for revenue service will have increased to 914 (762 existing cars, plus 192 new cars, less the 40 cars scheduled for rehabilitation). Consequently, WMATA could need 136 repair shop spaces—15 percent of the 914-car fleet—or 10 more than capacity. Furthermore, WMATA plans to order a total of at least 94 additional vehicles to accommodate new revenue service on the Largo extension to the Blue Line in Maryland (which is currently under construction), increased demand on the Orange Line in Virginia due to service expansion, and service growth on other existing rail lines. WMATA plans to begin the process for procuring these cars in summer 2001 in order to meet projected passenger demands on the Largo extension by early 2005. Although WMATA officials believe that the agency’s current shop capacity may not allow for the expeditious turnaround of vehicles requiring maintenance and repair, they pointed out that they are taking steps to ease the capacity problem. For example, in the near term, WMATA has four “blow down pits”—spaces in its largest repair shops used to clean the underside of a car prior to its scheduled maintenance—that can also be used for maintenance and repair.
At the same time, however, WMATA recognizes that it has no capacity to maintain and repair the 94 additional cars. According to WMATA’s 1999 rail fleet management plan, the number of cars requiring scheduled maintenance and unscheduled repairs is expected to rise over the next 5 years. This increase in maintenance and repairs will occur because (1) the newer Breda cars will be nearing their midlife; (2) the renovation of the Rohr cars will be 10 years old and the cars will be nearing the end of their service life; and (3) a total of 286 new rail cars will have been added to the fleet, increasing the fleet size by about 37 percent. WMATA is taking two actions to address the maintenance and repair shop capacity problem. First, WMATA is surveying its existing shops to determine whether their capacity can be expanded. The agency expects to complete the survey in the fall of 2001, possibly beginning expansion efforts as early as 2002. The most likely shop to be expanded first is the smallest one, where the number of shop spaces would be increased from 8 to 20. Second, WMATA plans to build a new repair shop within the Dulles Corridor in Virginia. However, this facility will not be available until about 2010, when the Dulles Corridor rail line extension is expected to be completed. At the direction of Congress, the federal government has delegated responsibility for overseeing WMATA and other transit agencies’ rail safety activities to state agencies. In December 1995, FTA issued a state safety oversight rule for rail fixed guideway systems. However, there are no similar federal rules that govern the safety of transit bus systems. In 2000, FTA initiated a voluntary transit bus safety program to promote a better understanding of state safety regulations and disseminate assistance to the industry. In December 1995, FTA issued a state safety oversight rule (49 C.F.R. 
Part 659) requiring states to oversee the safety of rail fixed guideway systems. According to FTA, the rule was designed to reduce all incidents that harm passengers and employees, whether these incidents are the result of unintentional occurrences (safety) or intentional acts (security). The state safety oversight rule includes provisions for passenger and employee security in recognition that safety and security risks are interrelated for rail transit passengers and employees. Under the rule, states are to designate an oversight agency (or agencies) to oversee the safety of the rail transit systems operating within their borders. When the rail system operates within only a single state, that entity must be an agency of the state; when it operates in more than one state, the affected states designate a single entity to oversee the system. The state may not designate the rail transit system as the oversight agency. In March 1997, transportation departments from Maryland, Virginia, and the District of Columbia designated the Tri-State Oversight Committee (TOC) as the state oversight agency for WMATA. As required by the rule, TOC developed a system safety program standard, a document that establishes the relationship between the oversight and transit agencies and specifies the procedures that the transit agency must follow. In addition, the oversight agency requires WMATA to develop and implement system safety and security program plans, report accidents and unacceptable hazard conditions, and conduct safety reviews. WMATA has developed both system safety and security plans to comply with the state safety oversight rule. The plans are companion documents, which together act as a blueprint for providing safety and security for WMATA. Under the state safety oversight rule, FTA has the responsibility to monitor and evaluate the states’ compliance with the rule.
In the fall of 1998, FTA initiated a State Safety Oversight Audit Program to support monitoring activities for the rule. Under this program, FTA audits determine whether state oversight agencies are carrying out the program and examine ways in which the overall program can be improved. In February 2000, FTA completed an audit of TOC, during which FTA identified six deficiencies and three areas of concern. FTA found, among other things, deficiencies in TOC’s (1) process for reviewing the system safety program standard and program plan, (2) hazardous condition investigations and corrective actions, (3) 3-year safety reviews, and (4) oversight agency reporting and certification to FTA. For example, FTA found that TOC had no formal procedures for approving and tracking corrective actions. The purpose of the corrective action plan management process is to document communication between the rail system and the oversight agency regarding the resolution of identified hazards. In response to this deficiency finding, TOC agreed to review and discuss with WMATA its corrective action plans at regularly scheduled meetings, vote to approve or disapprove those measures, and require that additional measures be included in the action plan. According to an FTA official, the agency is satisfied with TOC’s responses to all of its audit findings. There is no overall federal regulation requiring oversight for transit bus safety. Under authority provided by the Motor Carrier Safety Act of 1984, the Federal Highway Administration (FHWA) has exempted passenger carrier operations that were part of federal, state, or quasi-public operations. FHWA has no authority to perform any safety reviews or inspections of transit bus operations. In 1998, the National Transportation Safety Board (NTSB) reported that there were substantial safety deficiencies in, and little federal or state oversight of, the transit bus industry. 
According to NTSB, the federal government was spending, at that time, over $6 billion to subsidize the operation of transit agencies, yet the safety oversight of transit bus operations was essentially nonexistent. FTA had a state safety oversight program but, as described previously, it applied only to those agencies conducting rail transit operations. According to NTSB, FTA has traditionally looked either to state regulation, if it exists, or to self-regulation by the transit industry to safeguard the public’s use of transit systems. However, FTA has only a few methods for assessing the safety of transit bus agencies that receive federal funding, including (1) sharing safety information among transit agencies, (2) performing triennial oversight reviews of all transit functions that may include a few safety-related questions, and (3) conducting investigations of safety hazards under 49 U.S.C. 5329. According to NTSB, however, none of these methods provide a comprehensive assessment of transit bus safety throughout the country or a remedy for any of the problems that may exist. Accordingly, the NTSB report recommended that DOT develop and implement an oversight program to assess and ensure the safety of transit bus operations that receive federal funding. In November 2000, FTA’s Office of Safety and Security initiated a bus transit safety program in response to concerns about the potential for catastrophic bus accidents. According to FTA officials, the overall objective of the program is to foster a better understanding of transit bus safety and disseminate technical assistance to the industry. FTA identified several program tasks, including developing a model transit bus safety program. Ultimately, potential models for a national framework will be presented that could provide transit bus safety practice guidance for bus entities. 
According to FTA, the program is not intended to create a bus oversight program that mirrors the rail fixed guideway safety oversight program; rather, its purpose is to compare and contrast current approaches to bus safety regulation and oversight in the country in hopes of identifying best practices for large and small transit bus systems. According to an FTA safety official, FTA will receive and incorporate comments from industry groups like the American Public Transportation Association (APTA) on program tasks and hopes to have all of the program tasks completed by the summer of 2001. WMATA’s primary mission is to provide safe and reliable public transportation service. Thus, safety considerations encompass all aspects of WMATA’s functions from planning and design to construction and operations. According to WMATA, safety is a major consideration at every stage of all of its rail and bus activities. WMATA addresses safety objectives through its system safety program plan, but it has also responded to outside safety reviews by FTA and others. In addition, the transit agency collects and analyzes safety performance data to determine if safety performance meets established safety objectives. In 1983, WMATA’s Board of Directors approved a system safety policy statement establishing the transit authority’s safety philosophy and objectives. Among other things, the policy statement directed WMATA to develop a comprehensive system safety program plan to eliminate or control safety hazards and reduce accident rates. In response to the Board, WMATA developed a plan that sets forth requirements for identifying, evaluating, and minimizing safety risks through all elements of the Metrorail and Metrobus systems. The plan identifies management and technical safety and fire protection activities performed during all phases of bus and rail operations. 
It also defines formal requirements for, among other things, (1) the implementation of established safety and fire protection criteria; (2) mechanisms for identifying and assessing safety hazards; and (3) methods for conducting investigations of accidents, incidents, or unsafe acts. WMATA’s current General Manager has delegated specific safety responsibilities to the transit agency’s Chief Safety Officer. The Chief Safety Officer has a staff of 26 people and is responsible for managing system safety, occupational safety and health, accident and incident investigation, fire protection, oversight of construction safety and environmental protection, and monitoring of the system safety program plan. Safety personnel investigate accidents involving fatalities, serious injuries, multiple hospitalizations, major fires, and derailments. WMATA is subject to a variety of oversight reviews and audits by federal agencies and private and regional associations, such as APTA, TOC, and FTA. For example, several serious accidents and incidents occurring in the mid-1990s in WMATA’s subway system raised concerns about safety and led to a 1997 report by FTA. Since then, APTA has also conducted safety-related reviews of WMATA’s operations. In September 1997, FTA reported on its review of WMATA’s rail operations and found serious deficiencies in WMATA’s safety-related operating practices. FTA reviewed WMATA following a series of accidents and incidents at WMATA that raised concerns about the transit authority’s commitment to safety as its top priority. For example, in January 1996 a train operator was killed at a station when his train slid on icy tracks into parked railcars. In April of the same year, WMATA disconnected the operating mechanisms for the midcar emergency doors on about 100 rail cars without informing the public.
Later that month, two workers were injured when their tools made contact with a live electrical cable that should have been deactivated while tracks were being repaired. In addition, a delayed response to a fire in May 1996 put firefighters and passengers at risk. FTA’s review concluded that WMATA had not kept abreast of the formal disciplines that constitute system safety, such as having a planned approach to system safety program tasks, and had not provided appropriate financial and personnel resources to accomplish tasks. In addition, FTA found that WMATA’s safety efforts had been weakened by frequent changes in the reporting level of the safety department, staff and budget reductions, and a deemphasis of safety awareness in public and corporate communications. For example, WMATA’s safety department had moved several times within the organization, making its work difficult, its priorities uncertain, and its status marginal. Also, from 1992 to 1996, safety department staff was reduced from 17 to 12 positions, and only 8 positions were filled at the time of FTA’s review. Finally, as a result of the safety department’s movement through the organization, it became responsible for other functions, further reducing its ability to meet its safety responsibilities. According to FTA, these limitations were reflected in, among other things, the absence of strong public and employee safety awareness programs. FTA’s report found that these problems existed before the arrival of the current management team in the fall of 1996 and concluded that WMATA had taken important first steps to change the situation. For example, in 1996, WMATA’s new General Manager made safety a priority and selected a new Chief Safety Officer who would report directly to him. The General Manager also directed a review of the transit authority’s safety function and revised its system safety program plan, which now includes detailed protocols for identifying and assessing hazards. 
According to an FTA safety official, WMATA’s safety program is considered “very good” compared to the safety programs at other transit agencies. Under FTA rules, state oversight agencies must conduct an on-site safety review of the transit agency’s implementation of its system safety program plan at least every 3 years. As WMATA’s state oversight agency, TOC used APTA to conduct a safety review in September 1998. APTA’s audit addressed policies, processes, and procedures as set out in WMATA’s system safety program plan and included a review of supporting documentation, interviews with agency personnel, and field observations. In its subsequent report, APTA found 12 deficiencies in such areas as quality assurance programs, plant maintenance, and engineering and technical support and operations. According to APTA, since issuance of its report, WMATA has developed corrective action plans for each of the deficiencies that demonstrate the transit authority’s commitment to strengthening performance standards and ensuring that the processes are effective and prevalent throughout the agency. APTA also concluded that although it does not comparatively rate transit systems as to how effective they are in managing and implementing their safety programs, WMATA is regarded as one of the industry leaders in transit system safety program development and implementation. More recently, WMATA acted to address problems resulting from a tunnel fire that occurred in April 2000. A power cable shorted out in a tunnel between two subway stations, causing an electrical tunnel fire, extremely lengthy delays in service, and the need to evacuate passengers from a subway train. In response to the incident, WMATA created a safety task force to review its operations control center’s handling of the incident. In addition, WMATA’s General Manager asked APTA to conduct a comprehensive peer review of the transit agency’s emergency procedures for handling tunnel fires. 
Specifically, the General Manager asked APTA to review WMATA’s general agency policies, procedures, rules, and practices; coordination with emergency responders; operations control center policies and practices; and front-line employee response to fires. APTA’s findings and recommendations were, in many ways, consistent with the findings of WMATA’s internal investigation. For instance, APTA and WMATA’s recommendations supported the need for efforts to formalize and strengthen training for operations control center personnel and ensure that emergency procedures are addressed in the training and certification of operations staff. The 2 reviews made 32 recommendations affecting fire safety policy and procedure, related equipment, communications, and training. At the time of our review, WMATA had taken actions to implement 30 of the 32 recommendations. According to WMATA’s Chief Safety Officer, the agency developed a list of corrective actions as a result of the fire, is training its staff to communicate more effectively with fire authorities so everyone understands incidents that are taking place, and plans to open a fire training center to train WMATA employees and local firefighters. According to the Chief Safety Officer, WMATA also started collecting information on fire and smoke incidents in Metrorail and Metrobus operations after the April 2000 tunnel fire. These incidents include cable fires, trash fires, and smoke incidents. Figure 1 shows that 22 of the 47 fire and smoke incidents occurring in the Metrorail system from April 20, 2000, to December 31, 2000, had no impact on service. However, other smoke and fire incidents have caused delays in Metrorail service of as much as 2 hours. WMATA collects and analyzes safety data to determine if safety performance meets established safety objectives. Analysis of this system-specific data can be used to determine trends and patterns in system operation.
WMATA reports information, such as injuries, collisions, and derailments occurring in its Metrobus and Metrorail systems, to its Board of Directors and others on a quarterly and annual basis. Table 2 shows the number and injury rates for rail station and rail on-board injuries for fiscal years 1996 through 2000. Rail station injuries include, among other things, elevator and escalator injuries; injuries on platforms, mezzanines, and free areas; and injuries occurring outside stations. Rail on-board injuries occur inside trains due to doors, defective equipment, and boarding or alighting trains. A WMATA safety official told us that most of these injuries were not serious. Table 2 shows that WMATA has experienced low rail station injury rates over the 5-year period—only 0.37 injuries per 1 million passenger miles. However, the absolute number of rail station injuries increased from 366 in fiscal year 1999 to 474 in fiscal year 2000, and the injury rate increased from 0.34 to 0.43 for the same 2 years. WMATA officials attribute this increase primarily to the crowding resulting from increased ridership. WMATA documents show that over 50 percent of all rail station injuries have occurred on escalators. According to WMATA’s Chief Safety Officer, the root causes of the majority of these incidents are human factors, not equipment failure, employee performance, or unsafe conditions. In fiscal years 1999 and 2000, for example, no escalator incidents were caused by electrical or mechanical failure or unsafe conditions. WMATA is promoting escalator safety by conducting public awareness campaigns (e.g., brochures and community outreach) and adding safety devices, such as shut-off switches and glide stops. Table 2 shows that rail on-board injuries and injury rates have also been low over the 5-year period. However, the number of injuries and the injury rate almost doubled between fiscal years 1999 and 2000. 
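The rates in table 2 are expressed as injuries per 1 million passenger miles. The Python sketch below shows how such a rate is computed; the helper name is illustrative, and the passenger-mile volume is backed out from the reported fiscal year 2000 figures rather than stated in the report.

```python
# Sketch of the injury-rate arithmetic behind table 2 (injuries per
# 1 million passenger miles). The passenger-mile volume below is an
# assumption derived from the reported figures, not a reported number.

def injury_rate(injuries, passenger_miles):
    """Injuries per 1 million passenger miles."""
    return injuries / (passenger_miles / 1_000_000)

# FY2000: 474 rail station injuries at a reported rate of 0.43 implies
# roughly 474 / 0.43, or about 1,102 million passenger miles.
implied_miles_fy2000 = 474 / 0.43 * 1_000_000

print(round(injury_rate(474, implied_miles_fy2000), 2))  # 0.43
```

The same back-of-the-envelope check on fiscal year 1999 (366 injuries at a 0.34 rate) implies roughly 1,076 million passenger miles, suggesting ridership growth as well as the injury growth noted above.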
WMATA documents show that boarding and alighting trains has accounted for, on average, about 45 percent of all rail on-board injuries during the 5-year period. Our review of WMATA documents also shows that rail collisions and derailments occur infrequently. For example, as shown in table 3, WMATA has experienced 18 rail collisions from fiscal year 1996 through fiscal year 2000, with only 1 occurring in fiscal year 2000. WMATA defines rail collisions as collisions of trains in revenue service with other trains, equipment, or objects on tracks that result in damage to equipment or property. According to a WMATA safety official, none of these collisions involved two trains; rather, most incidents involved a train hitting an object that was on or near train tracks. None resulted in a fatality. In addition, there have been only two train derailments involving trains in revenue service that were transporting passengers during the 5-year period, both occurring in fiscal year 1999. A WMATA safety official said that neither of these incidents resulted in injuries. Table 3 shows rail collisions and derailments occurring during fiscal years 1996 through 2000. Table 4 shows that bus passenger injury and bus collision incident rates have remained stable during fiscal years 1996 through 2000, although both total injuries and collisions increased over the last year. WMATA suspects that increases in bus ridership, as well as inexperienced operators driving in increasingly congested traffic areas and on new and extended routes, are the causes of the increase in bus incidents. For example, WMATA recently hired 766 new operators to cover retirements. Nevertheless, WMATA considers more than 60 percent of these incidents to be nonpreventable. WMATA has new and planned programs to reduce bus incidents, such as recognition programs, spot checks, a mentor program, promotional programs, route assessments, and new traffic warning signs to prevent rear-end collisions.
During fiscal years 1996 through 2000, there were a total of 21 fatalities in WMATA’s transit system—11 fatalities in the Metrobus and 10 in the Metrorail systems. Of the 11 bus fatalities, 5 involved bus collisions with other vehicles, 4 involved persons being struck by a bus, 1 person died on board a bus during an accident, and 1 person died while alighting a bus. Of the 10 rail fatalities, 4 were suicides, 2 involved escalator entrapment, 2 occurred boarding or alighting trains, 1 was the WMATA employee killed in the 1996 incident mentioned previously, and 1 was a person killed when struck by a train. WMATA’s Metro Transit Police Department is responsible for the system’s transit security—which has been defined as freedom from intentional danger for passengers, employees, and the transit system. The department has jurisdiction and arrest powers on WMATA property throughout the 1,500-square-mile transit zone that includes Maryland, Virginia, and the District of Columbia and has an authorized strength of 320 sworn and 103 civilian personnel. According to WMATA, its police department, which is the only nonfederal trijurisdictional police agency in the country, is responsible for law enforcement, revenue protection, and security services. As with safety, WMATA’s current General Manager has delegated authority to the Chief of Police to plan, direct, coordinate, implement, and evaluate all police and security activities for the transit agency. WMATA has developed a systemwide security program plan, participates in external security reviews, and collects and evaluates crime statistics. To emphasize the importance of system security, WMATA’s Metro Transit Police Department established a set of comprehensive security activities documented in its system security program plan.
The plan is designed to maximize the level of security experienced by passengers, employees, and other individuals who come into contact with the transit system and to minimize the cost associated with the intrusion of vandals and others into the system. As noted previously, the system security program plan is a companion document to the system safety program plan. One of the security plan’s objectives is to make the transit system more proactive in preventing and mitigating security problems. Many proactive security measures have been in place since the inception and design of the transit system, including station lighting, recessed walls, closed circuit televisions, and alarm systems. WMATA has also developed strategies to reduce crime and provide a secure environment, including strict enforcement of a “zero tolerance” philosophy on crime, emphasis on prevention rather than a response to crime, and crime prevention training for civilians and WMATA employees. The amount of serious transit-related crime has fallen nationwide over the last few years. Nevertheless, according to FTA, the public’s perception about the lack of security continues to have a significant impact on transit ridership nationwide. To combat this perception, FTA initiated a voluntary transit security audit program in 1996. The goal of the program is to assist transit agencies in achieving the highest potential level of a safe and secure transportation environment by encouraging transit systems to develop, implement, and maintain a transit security system that will protect passengers, employees, vehicles, revenue, and property. 
The program has four objectives, including (1) providing assistance to transit agencies in developing and initiating system security program plans; (2) evaluating the level of preparedness of each system; (3) sharing best practices used by transit police, security, and operations personnel to enhance security for passengers and employees; and (4) evaluating the quality of security provided by transit systems for passengers, employees, and system facilities. Since the program started, FTA has completed two security audits of WMATA. In April 1997, FTA conducted its first on-site transit security audit of WMATA. At that time, FTA officials stated that they were impressed with efforts taken by WMATA transit police and the operating and maintenance departments to comply with FTA’s security requirements. Furthermore, FTA found that the comprehensive nature of WMATA’s security program demonstrates a high level of attention to passenger and employee security. FTA recommended that the transit police play a greater role in the development of agency operating procedures and training programs. It also recommended the development of a technology plan to address police radio communications, the crime records system, and the use of mobile data terminals for filing police reports. In its October 1997 follow-up audit, FTA stated that it was pleased with WMATA’s efforts to comply with FTA’s previous recommendations and suggestions. In addition, FTA observed exemplary security practices and found that WMATA’s transit police were very committed and well supported by top management. The audit recommended, among other things, that the transit police increase its involvement in developing and distributing procedures for systemwide security-related issues. FTA will conduct further security reviews of WMATA on a regular basis under its security audit program. 
In everyday practice, WMATA’s transit police and its security management team are faced with the need to allocate and assign available security personnel and other resources to best address crime and incidents and to enhance the public’s perception of the transit system as being a safe environment. WMATA collects and analyzes summary statistics to identify trends in crime, assess performance, and design appropriate countermeasures. WMATA’s reporting system groups crimes into two categories that are similar to, but not entirely consistent with, the Federal Bureau of Investigation’s Uniform Crime Reporting System. Currently, WMATA’s Part I crimes include eight crime categories such as arson, homicide, and robbery. Part II crimes include other “less serious” crimes, such as disorderly conduct, drunkenness, and trespassing. WMATA plans to revise its crime categories by January 2002 to be consistent with the Federal Bureau of Investigation’s reporting system. Part II crimes occur much more frequently than Part I crimes in WMATA’s Metrorail and Metrobus systems. From 1996 through 2000, for example, Part II crimes accounted for 72 percent (13,556 crimes) of the nearly 19,000 total crimes committed in the transit system. Part I crimes accounted for only 28 percent (5,401) of all crimes. Yearly total crime counts for the 5-year period ranged from a high of 4,491 crimes in 1998 to a low of 3,510 in 1996. Table 5 shows Part I and Part II crimes committed in the transit system for the 5-year period. As table 6 shows, Part I crimes are committed more often in the Metrorail system than in the Metrobus system. From 1996 through 2000, for example, Part I crimes were committed, on average, about 7 times per million riders in the rail system. In contrast, Part I crimes occurred less than once per million riders on the bus system. Larceny, motor vehicle theft, and robbery accounted for the majority of all Part I crimes committed in WMATA’s entire transit system. 
From 1996 through 2000, for example, those 3 crime categories accounted for, on average, 93 percent (5,030 crimes) of all Part I crimes. Of those 3 categories, larceny made up, on average, 51 percent of all Part I crimes. Other Part I crimes, such as burglary, homicide, and rape, occurred rarely. Table 6 shows Part I crimes committed in the transit system from 1996 through 2000. WMATA’s crime statistics show that Part I crimes are committed much more frequently in WMATA’s parking lots than on either its Metrobus or Metrorail systems. Part II crimes, however, have been more evenly distributed between parking lots and the Metrorail system over time. From 1996 through 2000, for example, Part I crimes were committed, on average, 64 percent of the time in parking lots and about 31 percent of the time in the Metrorail system. Over the 5-year period, Part II crimes have been committed, on average, about 54 percent of the time in the Metrorail system and 40 percent of the time in parking lots. To address the problem of parking lot crimes, WMATA recently increased its undercover patrols of the system’s parking lots. Metrobus has experienced only about 6 percent of all Part I and 6 percent of all Part II crimes for the 5-year period. Table 7 shows crimes committed by location from 1996 through 2000. In a December 1998 report, GAO identified capital decision-making principles and practices used by outstanding state and local governments and private sector organizations. In this report, we describe WMATA’s Capital Improvement Program and compare WMATA’s practices with those of leading public and private organizations. In particular, we assessed the extent to which WMATA (1) integrates its organizational goals into the capital decision-making process through structured strategic planning and needs determination processes, (2) uses an investment approach to evaluate and select capital assets, and (3) maintains budgetary control over its capital investments.
WMATA created a Capital Improvement Program (CIP) in November 2000 to consolidate its ongoing and planned capital improvement activities. This program contains three elements to address all aspects of the agency’s capital investments, including (1) system rehabilitation and replacements, (2) system expansion, and (3) system access and capacity. Under CIP, WMATA’s Infrastructure Renewal Program (IRP)—created in March 1999—is designed to rehabilitate or replace WMATA’s existing assets, including rail cars, buses, maintenance facilities, tracks, and other structures and systems. This program currently includes 28 projects that are estimated to cost $9.8 billion over a 25-year period from fiscal years 2001 through 2025. Also under CIP, WMATA has initiated programs to expand the original transit system and enhance passengers’ access to Metrorail. For example, WMATA established what is now known as the System Expansion Program (SEP) by issuing a plan in April 1999 to more closely join bus services, rail services, and highway improvements to maximize the effectiveness and efficiency of the regional transportation network. SEP has three major objectives: (1) to expand fixed guideway services; (2) to selectively add stations and entrances to the existing Metrorail system; and (3) to improve bus service levels and expand service areas. A fourth objective of the April 1999 plan—improving access to and capacity of the Metrorail system—is now called the System Access/Capacity Program, as described below. SEP currently includes four approved and proposed projects to expand various components of the rail system. WMATA has not yet estimated the full lifecycle costs for all four projects. The third element of CIP is the System Access and Capacity Program (SAP), formerly part of the April 1999 Transit Service Expansion Plan. 
SAP was established as a separate program in November 2000 to provide additional rail cars and buses, parking facilities, and support activities to accommodate ridership growth. It also includes a study to determine the modifications needed to the Metrorail system’s core capacity to sustain current ridership volumes and increased passenger demands resulting from future expansions. According to WMATA’s proposed fiscal year 2002 budget, SAP currently includes 16 projects with a total expected cost of approximately $2.5 billion over the next 25 years. In successful organizations, strategic planning guides the decision-making process for all spending, including capital spending. Strategic planning can be defined as a structured process through which an organization translates a vision and makes fundamental decisions that shape and guide what the organization is and what it does. A strategic plan defines an organization’s long-term goals and objectives and the strategies for achieving those goals and objectives; annual performance plans describe in greater detail the specific processes, technologies, and types of resources, including capital, that are needed to achieve performance goals in a given year. Leading organizations use their strategic planning process to link the expected outcomes of projects, including capital projects, to the organization’s overall strategic goals and objectives. Strategic planning provides the underpinnings for agencies to develop comprehensive and effective plans for all activities, including capital investments. It can also facilitate communication within the agency itself as well as between the agency and its external clients. WMATA has articulated a mission statement for the agency and an “organizational goal” of doubling transit ridership by the year 2025 to maintain the existing transit market share, enhance mobility and accessibility, improve air quality, reduce congestion, and support regional growth and travel demands. 
WMATA officials have also told us that they are creating a business planning process to address key areas, including (1) ridership retention and growth, (2) customer satisfaction, (3) system quality and safety, (4) service capacity and expansion, and (5) internal capabilities and organizational development. We support WMATA’s efforts in these areas, although they have not yet resulted in plans that include the elements that leading organizations consider essential to the strategic planning process. In particular, WMATA has not developed a long-term strategic plan that defines multiyear goals and objectives for the agency and its strategies for achieving those goals, nor has it developed annual performance plans that explain the specific processes, technologies, and types of resources, including capital, that will be applied during a given year to address the performance goals and objectives. WMATA also does not have a document that links the expected outcomes of all of its capital projects—including IRP, SEP, and SAP projects—to achieving the agency’s strategic goals and objectives. Our 1998 report pointed out that conducting a comprehensive needs assessment of program requirements is an important first step in an organization’s capital decision-making process. A comprehensive needs assessment considers an organization’s overall mission and identifies the resources needed to fulfill both immediate requirements and anticipated future needs on the basis of multiyear goals and objectives that flow from the organization’s mission. Our 1998 report also noted that, to begin the needs assessment process, leading organizations assess the extent to which stated goals and objectives are aligned with the organization’s mission. Multiyear goals and objectives outline how the organization intends to fulfill its mission. 
The goals describe, in general terms, the organization’s policy intent and define its direction; objectives serve to move the organization from broad general goals to specific, quantifiable results and time-based statements of what the organization expects to accomplish. The needs assessment is results-oriented in that it determines what is needed to obtain specific outcomes. The focus placed on results drives the selection of alternative ways to fulfill a program’s requirements. When conducting a needs assessment, leading organizations assess internal and external environments. They examine the organization’s primary role and purpose, the strengths and weaknesses of its current organizational structure, and its current activities and how they are accomplished. They also examine external factors that affect or influence the organization’s operations, such as existing or future mandates and the expectations of its customer groups. Leading organizations also define the period of time a needs assessment should cover and how often it is to be updated. WMATA has performed a comprehensive assessment of capital requirements for infrastructure renewal. The foundation for the current IRP was a needs assessment completed by a contractor (Frederick R. Harris, Inc.) in March 1999 and additional assessments performed by WMATA staff to update and expand the information provided by the Harris report. The overall objectives of the assessments were to (1) develop a comprehensive understanding of the transit system’s assets and their condition, (2) determine what is needed to maintain the overall condition of WMATA’s infrastructure and support transit service enhancements, (3) relate system needs to available funding through a system for prioritizing projects and expenditures, and (4) support the transition of the transit system from a “start-up” to a renewal mode. 
Through these reviews, WMATA obtained a comprehensive inventory of its capital assets, an assessment of the condition of those assets, and recommendations for proposed projects and estimated costs for addressing the agency’s infrastructure renewal requirements over a 25-year period. By comparing its resource needs information with data on its current asset capabilities, WMATA was able to identify the gaps between what it needed to maintain its current infrastructure in good repair and what resources it had available to address infrastructure needs. To improve system access and capacity, WMATA is in the process of identifying current and needed capabilities to determine any performance gaps between them. WMATA is currently assessing the Metrorail system’s core capacity to determine any modifications needed to accommodate current ridership and increased passenger demand generated from future subway expansions. The core capacity assessment is scheduled to be completed by the fall of 2001. WMATA also developed its April 1999 Transit Service Expansion Plan, which identified overall planned expansion efforts given WMATA’s goal of doubling ridership over the next 25 years. The plan states that some of the proposed projects fall into a time frame of 10 to 25 years, and others fall beyond a 25-year horizon. Although the expansion plan outlines a transit vision for the Washington region and represents a positive first step in outlining expansion needs, it does not meet most of the requirements for a comprehensive needs assessment. For example, the plan identifies three overall goals for the role of public transit in the Washington metropolitan area and contains objectives, or elements, to implement these goals. However, the objectives do not always describe specific, quantifiable results or contain time-based statements of what the organization expects to accomplish. 
Also, the plan does not explain how the agency assessed needs to arrive at the specific proposed projects in the plan, and it does not outline the purpose and scope of each proposed project. Furthermore, it does not examine external factors that might affect the agency’s ability to carry out the plan—such as the transit agency’s lack of dedicated funding and the uncertainty caused by its dependence on annual funding decisions by numerous state, local, and federal government sources—nor does it specify how and when the plan will be updated. Finally, with regard to considering the expectations of customer groups, a representative of the Transportation Planning Board of the Metropolitan Washington Council of Governments told us that WMATA did not fully coordinate the plan with that group before it was published. Although WMATA has not performed a comprehensive needs assessment for system expansion, it does consider regional transportation needs, costs, and benefits before deciding to support proposed expansion projects. For example, WMATA has established a “Project Development Program” to develop conceptual designs for some of the proposed projects contained in the Transit Service Expansion Plan. The goal of this program is to develop initial planning and engineering information for proposed projects that can lead to a more detailed alternatives analysis. Under this program, WMATA is considering alternative ways of providing transit services within specific corridors; developing “order of magnitude” costs and preliminary ridership estimates; and evaluating potential land use, economic development, and other issues related to specific proposed projects. Leading organizations consider a wide range of alternatives to satisfy their needs, including noncapital alternatives, before choosing to purchase or construct a capital asset or facility. When it is determined that capital is needed, managers also consider repair and renovation of existing assets. 
For its system expansion program, WMATA has a limited role in identifying and evaluating alternatives before deciding to support specific expansion projects. This limited role stems from WMATA’s relationship to other organizations, including (1) the Transportation Planning Board (TPB) of the Metropolitan Washington Council of Governments and (2) the state and local jurisdictions served by WMATA. WMATA is beginning to explore—with transportation officials in Virginia, Maryland, and the District of Columbia—ways to increase its involvement in identifying and evaluating alternatives before the state and local jurisdictions select expansion projects for detailed planning, development, and implementation. We support WMATA’s efforts in this area and believe that the agency could provide valuable analysis and insights through a more active role in the decision-making process for capital expansion projects. With regard to assessing regional transportation needs and alternatives, TPB plays the key role in determining the overall transportation needs of the Washington region and identifying and evaluating alternatives (including noncapital alternatives) to meet those needs. As the regional forum for transportation planning, TPB prepares plans and programs that the federal government must approve before federal aid transportation funds can flow to the Washington region. TPB develops long- and short-range plans that include alternative transportation modes and methods in the region, including highway projects, WMATA’s bus and rail services, bus services provided by local jurisdictions in the region, ridesharing and telecommuting incentives, bike and pedestrian paths, and pricing strategies to manage transportation demands. WMATA’s General Manager is a member of TPB and provides input on proposed transit projects for infrastructure renewal, system expansion, and system access and capacity for TPB’s approval and inclusion in its long- and short-range plans. 
TPB has also prepared a draft planning document—required by FTA and the Federal Highway Administration—which includes projects for identifying and evaluating transportation requirements and alternatives in the Washington, D.C., metropolitan area, including transit-related projects. The document contains projects to (1) survey workers about their travel patterns and employer-sponsored commuting programs, (2) measure traffic volumes in local jurisdictions, and (3) examine the potential for new and innovative bus services in the Washington metropolitan area. With regard to identifying and evaluating transit expansion alternatives within specific parts of the metropolitan area known as “corridors,” the state and local jurisdictions served by WMATA have the lead role in performing alternatives analyses and proposing projects for detailed planning and federal funding, as required by FTA. According to WMATA officials, the agency’s decisions about which system expansion projects to support are driven by the state and local jurisdictions that sponsor the project and secure a major segment of the proposed project’s funding. For example, the decision to support the project extending Metrorail’s Blue Line to Largo was largely made by the representatives of Maryland’s Department of Transportation, which sponsored the project, and by the members of WMATA’s Board of Directors who represent Maryland jurisdictions. WMATA has had a limited role in identifying and analyzing the corridor-level alternatives required by FTA. After the state and local jurisdictions select a specific expansion project to pursue, they take the lead in preparing the corridor-level alternatives analysis, with limited technical input, if necessary, from WMATA. These analyses range from a “baseline alternative” that may involve little or no investment to making significant capital investments in constructing or expanding a transit system. 
FTA requires that the alternatives analysis provide information on the benefits, costs, and impacts of alternative strategies, ultimately leading to the selection of a locally preferred alternative for meeting the community’s mobility needs. The alternatives analysis is considered complete when a locally preferred alternative is selected by local and regional decisionmakers and adopted by the metropolitan planning organization—in this case, TPB in its financially constrained long-range plan. In addition to SEP, we also reviewed the extent to which WMATA considers alternatives on its two other capital improvement programs—IRP and SAP. With regard to IRP, there are limited opportunities for the agency to consider alternative approaches to meeting requirements, given that this program addresses the WMATA assets that are needed to maintain current transit service levels. Nonetheless, WMATA did consider alternatives for IRP in some cases. For example, WMATA has evaluated the relative costs of extending the useful life of its rail cars, buses, and escalators by performing extensive mid-life overhauls versus purchasing new vehicles or equipment at the end of the shorter expected service life. As a result, WMATA decided to perform the overhauls and extend the life of its vehicles and equipment, resulting in expected savings over time. With regard to SAP, because WMATA is in the process of assessing its requirements, it is not yet at the stage of its capital decision-making process where alternative approaches have been fully identified and evaluated. WMATA expects to identify its requirements in this area by the end of 2001. An investment approach builds upon an organization’s assessment of where it should invest its resources for the greatest benefit over the long term. Establishing a decision-making framework that encourages the appropriate levels of management review and approval is a critical factor in making sound capital investment decisions. 
These decisions are supported by the proper financial, technical, and risk analyses. Leading organizations not only establish a framework for reviewing and approving capital decisions but also have defined processes for ranking and selecting projects. Furthermore, they develop long-term capital plans that are based on the long-range vision for the organization embodied in the strategic plan. WMATA has not established a formal executive-level review group within the agency for making capital decisions, nor does it have formal procedures or a standard decision package for considering the relative merits of various capital projects each year. With regard to IRP, according to WMATA officials, all appropriate mid-level and senior managers at WMATA were involved in deciding which IRP projects should be established after the March 1999 Harris study (and subsequent updates by WMATA staff). Also, a committee of mid-level managers has been formed to review, among other things, the small number of requests for new IRP projects that are generated each year as part of the annual budget process. WMATA officials use briefing slides and other underlying analyses to reach consensus within the agency on IRP issues. In addition, WMATA’s management must obtain approval for IRP-related issues and budgets from its Board of Directors, which has a formal Budget Committee that issues guidance, holds periodic meetings to review IRP and other budget issues, and documents its decisions and their rationale in formal meeting minutes. Although WMATA officials throughout the organization provide input into the IRP decision-making process, a more formal process with standardized procedures and documentation and periodic reviews of all ongoing and proposed IRP projects would provide WMATA with a sound basis for clarifying, justifying, and documenting its capital decisions. 
It would also provide greater continuity within the organization if key managers move to other positions or leave the agency. In response to our review, WMATA officials told us that they plan to establish a new office within the agency that will provide oversight of all established capital projects, including their program scope, schedules, and costs. We view this as a positive step in increasing WMATA’s control over its ongoing projects, and it could provide the basis for a more formal executive review and approval process that promotes a continual evaluation of the merits of all ongoing and proposed capital projects in WMATA’s Capital Improvement Program. Within the System Expansion Program, WMATA officials told us that they play a relatively small role in proposing, evaluating, and selecting projects. According to WMATA officials, system expansion projects are first identified by local jurisdictions, which are also responsible for securing full up-front funding for their respective projects. These officials informed us that WMATA becomes involved in the projects after they have been identified and funding has been secured by the respective jurisdictions. Although WMATA has established priorities for its system expansion program on the basis of the broad need to serve regional travel patterns and sustain the economic vitality of the region, WMATA has not taken the lead in preparing financial, technical, and risk analyses for alternative expansion projects and reviewing various proposed projects on the basis of such analyses. Leading organizations consider this framework to be a critical factor in making sound capital investment decisions. Given that the state and local jurisdictions take the lead in identifying and deciding on expansion projects, WMATA does not become involved in crucial early decisions on pursuing the most appropriate and efficient ways to expand the system and may therefore be limiting its influence on those decisions. 
However, WMATA could influence those decisions were it to have a more disciplined decision-making framework resulting in documented support for the alternatives it favors. Once jurisdictions have identified and secured funding for proposed expansion projects, FTA guidelines require detailed documentation justifying the projects and following them to completion. These documents include an environmental impact statement and a long-range funding plan. However, these documents are prepared only after the respective jurisdictions have identified the projects. Established practices in capital decision-making include the preparation of such documents as part of the overall capital review and approval process, before the projects are ranked and funds are committed to the projects themselves. The documents are used as supporting documentation for decision or investment packages to justify capital project requests. WMATA does not currently prepare such decision or investment packages before deciding on system expansion projects. Our 1998 report points out that leading organizations have defined processes for ranking and selecting projects. The selection of projects is based on preestablished criteria and a relative ranking of investment proposals. The organizations determine the right mix of projects by viewing all proposed investments and existing assets as a portfolio. They generally find it beneficial to rank projects because the number of requested projects exceeds available funding. The criteria such organizations use in ranking projects include linkage to strategic objectives, costs, benefits, risks, safety concerns, customer service significance, and political implications. In particular, it is important to clearly identify the risks of proposed projects, assess the potential impact of the risks, and develop risk mitigation strategies. 
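The ranking practice described above can be illustrated with a minimal weighted-scoring sketch. The criteria, weights, project names, and scores below are hypothetical illustrations of the general technique, not figures drawn from WMATA, FTA, or the Harris study.

```python
# Hypothetical weighted-scoring sketch of the project-ranking practice
# described above. All criteria, weights, and scores are illustrative only.
WEIGHTS = {
    "strategic_linkage": 0.30,  # linkage to strategic objectives
    "cost_benefit": 0.25,       # costs and benefits of the investment
    "safety": 0.25,             # safety concerns addressed
    "risk": 0.20,               # higher score = lower project risk
}

def composite_score(scores):
    """Weighted sum of per-criterion scores (each on a 1-5 scale)."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Three made-up candidate projects, scored 1 (poor) to 5 (strong).
projects = {
    "rail car overhaul": {"strategic_linkage": 5, "cost_benefit": 4, "safety": 5, "risk": 4},
    "parking garage":    {"strategic_linkage": 3, "cost_benefit": 3, "safety": 2, "risk": 5},
    "fiber upgrade":     {"strategic_linkage": 2, "cost_benefit": 5, "safety": 1, "risk": 3},
}

# Rank the portfolio with the highest composite score first.
ranked = sorted(projects, key=lambda name: composite_score(projects[name]), reverse=True)
print(ranked)
```

Under these assumed weights, the overhaul project ranks first because it scores highly on the most heavily weighted criteria; the point of such a sketch is simply that preestablished criteria make the ranking explicit, repeatable, and easy to reassess as conditions change.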
With regard to IRP, WMATA performed a one-time priority ranking of proposed projects on the basis of preestablished criteria as part of the March 1999 study conducted by Frederick R. Harris, Inc. These criteria included how critical the asset’s function was to delivering safe and reliable service; the level of degradation associated with the asset’s current condition; and other factors, such as the costs and benefits of the reinvestment, the income-producing potential of the asset, and the policy implications of various investments. According to WMATA officials, the agency has not periodically updated or reassessed the priority ranking completed in March 1999 because most of the projects in IRP have remained intact, and their priority does not change from year to year. They further noted that any minor changes required in the program from year to year are incorporated through the annual budget process. Although WMATA officials stated that the priority ranking of IRP projects does not need to be periodically reassessed over the years, leading organizations perform such periodic reassessments to help ensure that the organization is fully considering the relative merits, needs, and risks of all projects in light of changing conditions. With regard to its projects for system expansion, access, and capacity, WMATA has not formally ranked its proposed projects on the basis of established criteria. The jurisdictions that WMATA serves identify future expansion and access projects. In April 1999, WMATA established overall priorities for system expansion projects on the basis of the need to serve regional travel patterns and sustain the regional economy; however, WMATA officials told us that individual proposed expansion projects are not in any priority order. In our view, the criteria used by WMATA are not the types of specific criteria that leading organizations use to rank projects. 
Leading organizations use such criteria as linkage to organizational strategies, cost savings, market growth, and project risk to rank capital projects. Leading organizations develop long-term capital plans to guide implementation of organizational goals and objectives and help decisionmakers establish priorities over the long term. Although WMATA has prepared some documents that could serve as the starting point for such a plan, it has not developed a formal long-term capital plan that identifies and justifies all of its capital projects, links those projects to long-term strategic goals and objectives, and is periodically updated to reflect changing circumstances. With regard to IRP, the study conducted by Frederick R. Harris, Inc., in March 1999 contains many of the elements of a capital plan for infrastructure renewal. For example, the study proposed a set of projects after a thorough assessment of requirements. It also prioritized the proposed projects on the basis of established criteria that included how critical the asset’s function was to delivering safe and reliable service and information on the asset’s current condition. The study also estimated the life-cycle costs of carrying out each proposed project over a 20-year period. Although it provides an excellent foundation for capital infrastructure renewal planning, the Harris study does not fully meet the intent of an agency capital plan because it does not contain the ultimate decisions reached on which IRP projects are to be funded. Also, WMATA is not using the proposed project ranking contained in the Harris study as the vehicle for updating its capital decisions on the IRP program annually or biennially, as would be expected with an agency capital plan. Instead, WMATA documents its IRP decisions in a series of briefing slides that it uses to highlight IRP issues and recommendations for the purpose of gaining approval within WMATA and approval from WMATA’s Board of Directors. 
WMATA has also not developed a long-term capital plan that defines capital asset decisions for the system expansion and access programs. In April 1999, WMATA developed its Transit Service Expansion Plan covering a 25-year horizon. Although this plan represents a positive first step in identifying potential capital projects, it does not define the agency’s capital decision-making process or provide sufficient documentation on any of the proposed projects’ justifications, resource requirements, risks, or priorities. Without such information, WMATA and its external stakeholders cannot make informed choices about managing the agency’s capital resources. Finally, WMATA could benefit from preparing a consolidated long-term capital plan that incorporates all of the projects within its Capital Improvement Program for infrastructure renewal, system expansion, and system access and capacity. We recognize that WMATA’s capital funding sources are earmarked for specific categories of capital projects and cannot be interchanged (e.g., IRP funding cannot be used to pay for a system expansion project, or vice versa). However, establishing a consolidated capital plan would nonetheless allow the agency to weigh and balance the need to maintain its existing capital assets against the demand for new assets. Officials at leading organizations that GAO studied agreed that good budgeting requires that the full costs of a project be considered when decisions are made to provide resources. Most of those organizations make a commitment to the full cost of a project up front and have developed alternative methods for maintaining budgetary control while allowing flexibility in funding. One strategy they use is to budget for and provide advance funding sufficient to complete a useful segment of a project. Another strategy used by some leading organizations is to use innovative financing techniques that provide new sources of funding or new methods of financial return. 
WMATA’s originally planned 103-mile Metrorail system was completed with useful segments or, as WMATA refers to them, operable segments. The last project to complete the system was designed to add 13.5 miles of heavy rail line, 9 rail stations, and 110 new heavy rail vehicles and spare parts. The project was broken down into four operable segments for which separate financial agreements were negotiated with FTA. This practice of providing separate funding for segments of Metrorail extensions was begun by WMATA’s predecessor, the National Capital Transportation Agency. According to WMATA officials, funding projects in operable segments has worked well and will continue to be used to expand the Metrorail system. WMATA has used innovative financing techniques to fund its Capital Improvement Program and operations activities. These techniques include obtaining a loan guarantee to fund its program for infrastructure renewal, sponsoring joint development projects with other organizations, establishing a Transit Infrastructure Investment Fund (TIIF), and creating special leasing programs to leverage some of its capital assets. The major innovative financing technique WMATA used has been to seek and receive a Transportation Infrastructure Finance and Innovation Act loan guarantee from the Department of Transportation for $600 million to fund its program for infrastructure renewal. This guarantee allowed WMATA to show that it had funding available and thereby initiate and accelerate its most critical IRP projects. WMATA will soon have to seek a loan to pay for those projects, and that loan will have to be repaid with revenues from the local jurisdictions. Through its Joint Development Program, WMATA seeks partners to foster commercial and residential projects on WMATA-owned or controlled property or on private properties adjacent to Metrorail stations for the purpose of generating revenues for WMATA and the local jurisdictions it serves. 
WMATA currently has 26 joint development projects earning about $6 million each year. WMATA officials project that annual revenues from these projects will eventually reach $10-15 million as additional projects are completed. WMATA has also engaged in leasing programs that allow it to leverage some of its existing assets to generate additional revenue. For example, WMATA entered into tax-advantaged leases of its 680 rail cars in fiscal year 1999. Under this program, WMATA leased its rail cars to equity investors who obtained a tax benefit that they shared with WMATA. WMATA then simultaneously subleased the rail cars from the investors. WMATA raised $80 million in one-time proceeds from this program and is earning interest on those proceeds, resulting in additional income for the agency. In addition, WMATA has a Fiber Optic Leasing Program through which it leases its excess capacity of fiber optics to corporations, along with the right-of-way for installation of fiber optic cables. WMATA earns about $7 million annually from this program. Also, in August 2000, WMATA revised its ongoing TIIF program to allow the agency to retain income and proceeds from the sale or long-term lease of real estate transactions approved under its Joint Development Program. In August 2000, WMATA’s Board of Directors adopted a resolution addressing, among other matters, the use of funds deposited in TIIF. The first priority is to ensure the complete funding of IRP and the anticipated need for additional buses and rail cars to match ridership growth. The second priority is to promote transit-oriented projects, such as those that increase rail system access and ridership. As of February 2001, TIIF contained about $1.6 million. WMATA has estimated that over the 25-year period from fiscal year 2001 through 2025, it will need $9.8 billion to rehabilitate and replace its existing assets under IRP and $2.5 billion to improve access to and capacity of the existing bus and rail systems under SAP. 
However, the agency anticipates that it will be able to fund only 88 percent, or $8.6 billion, of the IRP requirements from federal and local funding sources, resulting in a $1.2 billion budgetary shortfall over the 25-year period, or an average annual shortfall of about $50 million. In addition, the agency had obtained no funding commitments as of April 2001 to address its $2.5 billion in estimated SAP needs. WMATA faces a number of uncertainties in obtaining the full level of funding that the agency believes it needs to meet IRP and SAP needs. First, although WMATA’s Board of Directors has approved a long-range vision of funding these programs at an amount “not to exceed” WMATA’s estimated amounts, the Board approves funding for only a 5-year period through an “Interjurisdictional Funding Agreement,” and it firmly commits to funding IRP projects only 1 year at a time through the budget process. WMATA’s current Interjurisdictional Funding Agreement expires in 2003, so local funding beyond that time is uncertain. Furthermore, WMATA’s estimate of SAP requirements could significantly increase when it completes its assessment of Metrorail’s core capacity in the fall of 2001. WMATA also faces the uncertainty regarding federal funding that every other transit agency faces in light of the need for reauthorization of federal legislation governing transit funding in 2003. WMATA has not developed any plans for addressing the potential budgetary shortfalls in IRP and SAP, nor has it developed alternate scenarios of how funding reductions would be absorbed by the various asset categories under IRP or by the projects identified under SAP. WMATA officials expressed concerns that such plans and alternate scenarios could undermine their efforts to obtain what they believe is the required funding amount for the two capital programs. 
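The shortfall figures cited above follow from straightforward arithmetic on the report’s own numbers; the short sketch below simply reproduces that calculation for the reader.

```python
# Reproduce the IRP funding-shortfall arithmetic reported above.
# Inputs are the report's own figures; dollar amounts are in billions.
irp_need = 9.8        # estimated 25-year IRP requirement, FY 2001-2025
funded_share = 0.88   # share WMATA expects to fund from federal and local sources

funded = round(irp_need * funded_share, 1)   # $8.6 billion expected to be funded
shortfall = round(irp_need - funded, 1)      # $1.2 billion shortfall over 25 years
avg_annual_millions = shortfall / 25 * 1000  # roughly 48, i.e. "about $50 million" a year

print(funded, shortfall, avg_annual_millions)
```

The computed average of roughly $48 million per year is consistent with the report’s rounded figure of about $50 million.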
In our view, however, prudent management requires that the agency identify the steps needed to address any anticipated shortfalls and develop alternate plans for carrying out its capital activities, depending on the level of funding obtained from local and federal sources. Our overall approach in reviewing WMATA’s capital investment, operations and maintenance, and safety and security activities was to determine (1) how WMATA is organized and what policies, procedures, and practices the agency uses to carry out the activities in each of the three areas; (2) the nature and extent of any problems WMATA faces in each area, the factors that have contributed to those problems, and the actions WMATA is taking to address them; and (3) the role of other organizations in influencing WMATA’s decision-making processes and providing oversight of WMATA actions in the three areas. To perform all of our work, we reviewed pertinent documentation, including laws and regulations, and interviewed knowledgeable officials throughout WMATA to document the agency’s policies, programs, and practices for performing its operations and maintenance, safety and security, and capital investment activities and to obtain views on the challenges the agency faces in each of those areas. We also met with officials from WMATA’s Board of Directors, the Transportation Planning Board of the Metropolitan Washington Council of Governments, FTA, and the American Public Transportation Association to determine their respective roles in influencing WMATA’s decision-making processes and providing oversight of WMATA and to obtain their views on key challenges facing the agency. We conducted our work from September 2000 through June 2001 in accordance with generally accepted government auditing standards. 
In reviewing Metrorail’s operations and maintenance activities, we interviewed WMATA’s Deputy General Manager of Operations, Chief Operating Officer of Rail Service, and other officials responsible for planning, directing, and assessing Metrorail’s operations. We also met with WMATA officials responsible for Metrorail’s fleet and facilities maintenance activities. We reviewed Metrorail’s fleet management plan and its operating budget, as well as other key documents related to its operating processes and procedures. In addition, we observed several meetings of the budget and operations committees of WMATA’s Board of Directors, in which issues pertaining to the proposed fiscal year 2002 budget and Metrorail’s ongoing and planned operations were addressed. In reviewing WMATA’s safety and security programs, we interviewed key safety and security staff in WMATA and its oversight agencies and reviewed plans and documents provided to us. In doing our work, we relied upon WMATA’s safety and security statistics. We did not attempt to compare the safety or security of WMATA with other transit systems. Currently, FTA’s National Transit Database is the only comprehensive source of domestic safety and security transit data. According to an FTA report issued in May 2000, however, the database is not adequately comprehensive, timely, or accurate to appropriately assess the state of industrywide or agency-level safety and security. FTA is in the process of redesigning its National Transit Database to enhance its reporting of safety and other data on transit agencies. In reviewing WMATA’s capital investment activities, we compared WMATA’s practices to those of leading public and private sector organizations. In doing so, we assessed the extent to which WMATA (1) integrates its organizational goals into the capital decision-making process, (2) uses an investment approach to evaluate and select capital assets, and (3) maintains budgetary control over its capital investments. 
Our criteria for established best practices were drawn from GAO’s 1998 Executive Guide: Leading Practices in Capital Decision-Making. The following are GAO’s comments on WMATA’s letter dated June 12, 2001. 1. WMATA did not agree with the subpart of our second recommendation that calls for developing alternative capital funding strategies and project outcomes, depending on the availability of funding from federal, state, and local sources. WMATA states that to develop such contingency plans would encourage its funding agencies to reduce WMATA’s resources, thereby becoming a “self-fulfilling prophecy.” We continue to believe, however, that prudent management requires WMATA to plan for budgetary shortfalls that the agency has publicly acknowledged are a major issue in protecting the public’s investment in WMATA’s transit system. We are particularly concerned about the near-term unfunded amounts for WMATA’s System Access and Capacity Program, which could significantly increase when WMATA completes its assessment of Metrorail’s core capacity in the fall of 2001. The TPB has also expressed concerns about the adequacy of WMATA’s capital funding, noting that the funding available from the state and local jurisdictions is less than that requested by WMATA. Therefore, we did not change the report’s recommendation. In addition to the individuals named above, John E. Bagnulo, Christine E. Bonham, Carlos E. Hazera, Michael E. Horton, Susan Michal Smith, Carol A. Ruchala, and Maria J. Santos made key contributions to this report.
In recent years, the Washington Metropolitan Area Transit Authority's (WMATA) public transit system has experienced problems with the safety and reliability of its transit services, including equipment breakdowns, delays in scheduled service, unprecedented crowding on trains, and some accidents and tunnel fires. At the same time, WMATA's ridership is at an all-time high and WMATA managers expect the number of passengers to double during the next 25 years. This report reviews (1) the challenges WMATA faces in operating and maintaining its Metrorail system; (2) efforts WMATA has made to establish and monitor safety and security within its transit system; and (3) the extent to which WMATA follows established best practices in planning, selecting, and budgeting for its capital investments. GAO found that WMATA is addressing significant challenges brought about by the agency's aging equipment and infrastructure and its ever-increasing ridership. WMATA has established programs to identify, evaluate, and minimize safety and security risks throughout its rail and bus systems. WMATA has also adopted several best capital practices used by leading public and private sector organizations, but it could benefit by establishing a more formal, disciplined framework for its capital decision-making process. GAO summarized this report in testimony before Congress; see Mass Transit: WMATA Is Addressing Many Challenges, but Capital Planning Could Be Improved, by JayEtta Z. Hecker, Director of Physical Infrastructure Issues, before the Subcommittee on the District of Columbia, House Committee on Government Reform. GAO-01-1161T, Sept. 21 (17 pages).
The methodological literature provides insight into conducting systematic assessments of evidence for health care interventions that change the delivery or structure of care. Furthermore, the literature on organizational change is pertinent to understanding the key factors that can facilitate or impede implementation and replication of such health care interventions. Applied social science research has developed a core set of methodological questions and approaches for assessing the effect of programs or other interventions on a wide range of organized activities. They address two key issues: how best to determine the independent effect of a program or intervention and how best to generalize from the results obtained from one or more studies to broader populations of interest. A number of organizations have developed more specialized guidance for applying these general methodological principles to health care interventions. For example, the Effective Practice and Organisation of Care Group (EPOC) is a component of the Cochrane Collaboration—an international network of individuals who analyze the effect of health care interventions—which focuses on interventions that change the practice of care and the delivery of health care services. EPOC provides guidance to researchers on how best to prepare systematic reviews of such interventions in order to synthesize the information available in multiple studies. AHRQ’s Effective Health Care Program (EHC) and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group have developed similar guidance, though both of these efforts focus more on assessing alternative medical treatments rather than alternative approaches for organizing health care services. Research on organizational change has identified certain factors as key contributors to successful implementation of health care interventions for quality improvement. 
For example, the literature consistently cites leadership support as essential for successful implementation of quality-of-care interventions within health care settings. Further, it describes the role of leaders in promoting the adoption of interventions by their organizations, in winning acceptance among affected staff members for the changes those interventions entail, and in marshalling sufficient resources for the intervention to succeed. In addition, this literature has shown that organizations vary in their attitudes, beliefs, and values, and that this “organizational culture” can either promote or inhibit change. Organizations tend to achieve quality of care improvement more readily if they have a culture with such characteristics as receptiveness to change, placing high value on ensuring the quality of care provided, and prizing innovation with a willingness to take risks. The literature also cites the role of infrastructure factors, such as the sufficiency and appropriateness of staff resources and the adequacy of existing health information technology (health IT) systems, in the successful implementation of quality improvement interventions. Another factor cited in the literature is the availability of previously developed tools and procedures for standardizing health care processes—such as checklists or guidelines—as well as other types of technical assistance that can facilitate the implementation of a given intervention. Additionally, the literature has pointed to financial factors that affect the implementation of interventions for quality improvement, including both the level of financial resources needed to sustain an intervention and the use of financial incentives to promote quality enhancement activities. Financial incentives represent a particular application of financial resources that involve the contractual or other provisions that determine how much health care providers are paid and for what. 
Such financial incentives affect who benefits from and who pays for the cost of an intervention. This in turn can facilitate or impede the implementation and replication of interventions. About half of the respondents to our questionnaire reported basic information on the effect of their intervention on both quality of care and costs—the two types of data needed to determine whether or to what extent a particular intervention enhanced the value of health care. Overall, the vast majority of our respondents reported at least some information on the observed effect of their intervention on quality of care. Relatively fewer—though still over half—of our respondents reported at least some information on the effect of their intervention on costs. The ability of policymakers to identify interventions that substantially improve quality and reduce costs depends on the availability of basic information on the size of the effect of an intervention on both quality of care and costs. These are the two types of data needed to determine whether or to what extent a particular intervention enhanced the value of health care. Just over half of the respondents to our questionnaire reported such basic information on their interventions. Sixty-four of 127 respondents reported information on both improvements observed in at least one quality measure and a specific amount of cost savings (see table 1). For the remaining interventions, the missing information most often concerned the effect of the intervention on costs. Furthermore, even fewer respondents, 45, reported improvements observed in at least one quality measure and a specific amount of cost savings that accounted for the costs of implementing their intervention—net cost savings. Compared to information on both quality of care and costs, information on the effect of selected interventions on quality of care alone was more frequently reported. 
The vast majority of respondents to our questionnaire reported at least some information on the observed effect of their intervention on quality of care. Specifically, 114 of 127 respondents reported improvements in one or more measures used to assess the effect of their intervention on various aspects of care quality. Of these, 112 respondents reported a specific magnitude of improvement observed in at least one quality measure in terms of a percentage change or other quantitative measurement. Additionally, 2 respondents reported improvement in at least one quality measure, but did not report a specific magnitude of improvement. In contrast, the remaining 13 respondents did not report sufficient information to determine whether their intervention had any effect on quality of care. Six of 127 respondents described one or more measures used to assess the effect of their intervention on different aspects of care quality, but did not report a magnitude of improvement observed in these measures. Seven respondents did not report any information on the measures used to assess the effect of their intervention on aspects of care quality. Respondents reported that the effect of their intervention on quality of care was assessed using a range of measures that generally fell into five broad types reflecting different aspects of care quality (see table 2). Respondents most frequently described one or more quality measures that were used to assess the effect of their intervention on outcomes resulting from care. Specifically, 82 respondents reported that the effect of their intervention on quality of care was assessed using outcome measures such as patient mortality, the overall physical and emotional health of a patient, or the level of stress reported by a patient caregiver. In addition, 56 respondents described one or more measures that assessed the effect of their intervention on the amount of health care services consumed. 
These measures included the length of hospital stay, the number of emergency department visits, and the number of hospital readmissions for a specified population. Forty-four respondents described measures that assessed the effect of their intervention on processes of care. Process-of-care measures assess the extent to which the care provided to a patient was appropriate based on current professional knowledge and the particular circumstances. For example, process-of-care measures could examine whether diabetes patients had received foot exams, eye exams, and regular glucose monitoring at specified intervals. Fewer respondents described measures that assessed quality in terms of the experience of a patient or caregiver or the structure of care. Although the information provided by any one type of quality measure is limited, most of our respondents reported that the effect of their intervention on quality of care was assessed using more than one type of quality measure. Each type of quality measure offers insight into a particular domain of quality such as outcomes of care, processes of care, or experience of care. Just 41 respondents reported that only one type of measure was used to assess the effect of their intervention on quality of care (see fig. 1). For most—79—of the interventions in our review, respondents reported that the effect of their intervention on quality of care was assessed using measures belonging to two or more different types of quality measures, thereby providing a broader perspective on the effect of the intervention on quality of care. Somewhat fewer respondents to our questionnaire reported information on the effect of their interventions on costs than quality of care. Specifically, 72 of 127 respondents reported a specific amount of change in costs—cost savings. Respondents most frequently reported that costs were assessed by calculating the total dollars saved or the average dollars saved per person annually. 
Respondents less frequently reported that costs were assessed by calculating the financial return on investment, percentage change in total health care costs per patient, or an alternative cost metric such as dollars saved per member per month for patients participating in a certain health care plan. In contrast, the remaining 55 respondents did not report sufficient information to determine whether their intervention had any effect on costs. Nine of 127 respondents reported that costs were assessed, but did not report a specific amount of cost savings. Forty-five respondents reported that cost savings were not assessed, and one respondent did not report any information on whether cost savings were assessed. Most, but not all, of the respondents who reported a specific amount of cost savings stated that these cost savings accounted for the costs associated with implementing the intervention. Among the 72 respondents who reported a specific amount of cost savings, 51 respondents reported net cost savings that took account of implementation costs; another 20 respondents reported gross cost savings that did not take implementation costs into account. When asked to provide additional detail on their implementation cost calculations, 35 respondents reported that the cost savings took account of both start-up costs associated with developing and initially implementing the intervention as well as ongoing costs associated with operating and maintaining the intervention over time. Two respondents reported that cost savings took account of start-up costs but not ongoing costs to maintain the intervention, and 19 reported that cost savings took account of ongoing costs but not start-up costs. The interventions we reviewed also varied in the extent to which the reported cost savings attributed to them were based on information directly related to the intervention. 
Forty-nine respondents reported that cost savings were calculated using only data that were collected specifically to assess the effect of their intervention on costs. In contrast, 26 respondents reported that cost savings were calculated using a mix of data that were collected specifically to assess the intervention and data from a secondary source such as published literature or a national database. For example, one respondent reported cost savings attributable to an intervention designed to improve patient self-management of asthma based on data that were collected on changes over time in the actual number of health care encounters for patients enrolled in the program and the estimated costs for those encounters derived from national averages for several types of health care services such as hospital days or emergency department visits. While data from secondary data sources may provide otherwise missing information needed to estimate the cost savings achieved by an intervention, the relevance of such secondary data to that particular intervention may be open to question, which makes the accuracy of the cost savings estimate more uncertain. Policymakers and others can assess the strengths and limitations of available evidence from studies on the effect of health care interventions on quality of care and costs along three dimensions. First, the credibility of evidence on the effect of health care interventions on quality of care and costs depends primarily on whether those studies apply rigorous study designs. Second, the applicability of the results of studies to a broader population depends on the extent to which the study population is representative of that larger population. Finally, the capacity of health care interventions for widespread replication can be examined in terms of the consistency of results obtained by each intervention across diverse organizations. 
Appendix III provides a more detailed explanation of what makes some study-design types more rigorous than others and appendix IV presents a list of key questions that describe the information that policymakers can look for to assess the evidence provided by particular studies along these three dimensions. For policymakers and others, the benefit obtained from basic information on the effect of interventions on quality of care and costs depends in large part on the strength of that evidence. Information based on weak evidence can provide policymakers a misleading indication of an intervention’s potential to enhance value. For example, the direction and magnitude of the changes in quality of care and cost reported for the 127 interventions examined through our questionnaire could deviate substantially from the actual impact of those interventions, depending on the characteristics of the studies that generated that reported information. To determine what information has the kind of evidentiary support that they can rely on, policymakers can assess the strengths and limitations of studies that examine health care interventions of interest along three broad dimensions. The first of these dimensions is the credibility of evidence that attributes any changes in quality of care and costs to those interventions. The methodological experts we consulted uniformly emphasized the primacy of study design in determining the credibility of evidence on the effect of health care interventions on quality of care and costs. Observed changes in quality of care and costs that one might attribute to a health care intervention may in fact be due in large part to the effect of a wide variety of other factors. The choice of study design type is critical because rigorous designs have the capacity to isolate the effects of a health care intervention from other factors that may affect changes in quality of care and costs. 
The methodological literature we reviewed identifies several different study design types that have sufficient rigor to isolate the effect of interventions on quality of care and costs. They include randomized controlled trials (RCTs), interrupted time series studies, and controlled before and after studies. RCTs and controlled before and after studies both use control groups—consisting of study participants who are not exposed to the intervention—to adjust for the effect of other factors besides the intervention. Interrupted time series studies do not use control groups; instead they rely on analyzing data collected at multiple time points both before and after an intervention is implemented to adjust for other factors. (See app. III for more information on how these study design types isolate the effect of an intervention.) In contrast, according to the methodological literature we reviewed, some other types of study designs lack the capacity to isolate the effect of a health care intervention from that of other factors. For example, a simple pre/post study that assesses quality of care and costs once before an intervention is implemented and a second time after implementation of the intervention has no mechanism analogous to a control group to take account of the effect of other factors. The same is true for post-only studies that rely entirely on data collected after an intervention was implemented. With studies using these types of designs, there is no way to determine how much of the difference observed between the pre and post measurements, or among any groups following an intervention, was due to the intervention and not to other factors. Consequently, such studies will not provide policymakers credible information about the extent to which the intervention itself affected both quality of care and costs. Table 3 describes key distinguishing characteristics to help policymakers identify the type of study design employed in a study of an intervention. 
Among studies addressing the effect of health care interventions on quality of care and costs, a range of rigorous to weak design types are used. For example, among the 127 interventions for which we received responses to our questionnaire, we found 22 interventions with studies involving RCTs and another 11 interventions assessed using controlled before and after studies. However, for a substantially larger number of the 127 interventions, the studies we identified employed the types of study designs that do not isolate the effect of the intervention from other factors. Specifically, the results for 67 interventions were based on pre/post studies, and another 19 were based on post-only studies of one kind or another. Within this diverse set of interventions that we reviewed, policymakers could find credible evidence based on rigorous study designs concerning the effects of certain interventions on quality of care and costs; however, for many other interventions such studies were lacking. In addition to study design, the methodological literature we reviewed emphasized the importance of how a study is conducted. Even rigorous study designs can lose their capacity to isolate the effect of an intervention on quality of care and costs if researchers do not adhere to the requirements of those designs. Thus, assessments of the strengths of study results should consider how well the study design was implemented. One component of a study’s implementation that policymakers can examine involves the selection and management of control groups used in the study. In order to isolate the effects of an intervention, the control group has to be equivalent to the treatment group—except for the latter’s exposure to the intervention. According to the methodological literature we reviewed, that equivalence can be compromised in a number of ways. 
In the case of RCTs, for example, allocation to treatment and control groups may not be truly random if there are flaws in the process for assigning study subjects to those groups. Moreover, for both RCTs and controlled before and after studies, losing a disproportionate number of study participants from either treatment or control groups can also undermine their equivalence. Another component of a study’s implementation that policymakers can examine concerns the measures and procedures adopted for data collection. According to the methodological literature we reviewed, a study will produce stronger evidence when it employs measures that are recognized as valid and reliable. For example, central line-associated bloodstream infections can be tracked using a surveillance measure developed by the Centers for Disease Control and Prevention (CDC) or with less labor-intensive measures that draw on administrative data. Clinicians consider the CDC measure to be the most valid and reliable measure for this type of infection because it calls for laboratory confirmation of identified infections and it accounts for varying risks of infection based on the number of days that a central line catheter is in place. In addition, the data for those measures should be collected at the same time and in the same way from all groups in the study. Any systematic inconsistencies in how data are collected for a study can skew the results. If a study produces credible evidence that a health care intervention has a positive effect on both quality of care and costs within the population it examined, a second dimension that policymakers and others can assess concerns the scope of that effect—for what broader populations or groups are the results applicable? Applicability depends on the representativeness of the study population for a broader population of interest. 
The methodological literature identifies two different approaches for establishing representativeness: (1) randomly selecting the study population from a known universe, or (2) examining the degree to which a study population matches a given broader population on characteristics relevant to the intervention. The first approach, random selection, intrinsically makes the study population representative of the particular universe from which it was selected and the study results applicable to that population. The second approach for establishing representativeness—examining the extent of similarity between the study population and a broader population of interest—can be used by policymakers whenever the study population was not chosen randomly or the broader population of interest to policymakers is not the universe from which the study population was selected. Policymakers can assess the degree of similarity between the study population and a broader population through an examination that focuses on two issues: (1) identifying characteristics where the study population and broader population of interest differ and (2) assessing whether any differences found could influence the effect of the intervention on quality of care and costs (see app. IV). Major differences between a nonrandomly selected study population and a broader population of interest to policymakers should raise questions about the applicability of the study’s results for that broader population. For example, an intervention to improve care coordination for patients with diabetes might be implemented and assessed in a few academic medical centers. 
In that situation, the representativeness of the study population for all patients with diabetes could come into question on at least two counts—the kind of care provided in an academic medical center might well differ from that usually provided by community-based providers and the patients treated by academic medical centers might have a higher level of severity than diabetics treated elsewhere. If patients in the study received a different overall set of services, that could affect the impact of the intervention on those patients even if the intervention itself were implemented the same way for the two populations. Similarly, an intervention could have a more pronounced effect on patients with a higher level of severity, or the intervention might work less well for such patients. Thus, to establish the applicability of the study results to a broader population of diabetic patients, studies of the intervention would need to provide evidence that the differences between the study population and the broader population of diabetics would not affect the performance of the intervention. A third dimension on which policymakers and others can assess the strength of evidence for health care interventions concerns the capacity of an intervention for replication across diverse organizations. Because organizations vary across the factors that affect the implementation of health care interventions, including leadership, organizational culture, and staff and financial resources, a particular intervention may work more or less well depending on the organizational environment in which it operates. As a result, some organizations may be more receptive to a particular value-enhancing intervention than others. That, in turn, can make it more difficult to take an intervention that proved successful in a small number of organizations and replicate it widely across many others. 
However, some interventions have produced positive results on quality of care and costs in a range of different organizations, which suggests that they may be less sensitive to varying circumstances across organizations. According to the methodological literature and experts that we consulted, certain information can provide the basis for an assessment of the consistency in an intervention’s effects on quality of care and costs in different organizations. Specifically, this information concerns the number of different organizations where the intervention has been implemented, the degree of diversity exhibited by those organizations, and the consistency in observed changes in quality of care and costs across those organizations. However, such information would not be available for assessing the consistency of results across diverse organizations if an intervention has been implemented in only a few different organizations, or in multiple organizations that are generally quite similar. That is also the case if studies only analyze and report changes in quality of care and costs attributed to an intervention in the aggregate, rather than separately for the different organizations that implemented it. On the other hand, for interventions that have been implemented in multiple, diverse organizations, and their results analyzed separately at the different organizations, it is possible for policymakers to compare the results of the intervention across those organizations to examine the consistency of the intervention’s effect. To the extent that those interventions consistently produce positive effects on quality of care and costs among diverse organizations, that provides evidence of their capacity for widespread replication. For other interventions, if data on the changes in quality of care and costs across the different organizations indicate a lack of consistency in outcomes, that provides evidence of a more restricted capacity for replication. 
Respondents to our questionnaire reported, generally by large margins, that leadership support as well as other factors, such as organizational culture and staff resources, significantly facilitated implementation. However, respondents were more divided when asked about the reported effect that health IT had on implementation, and most respondents reported that financial incentives were not a factor in the implementation of their intervention. A majority of respondents reported that each of these factors, with the exception of financial incentives, was expected to be either very or somewhat important if one were to attempt to replicate their intervention as widely as possible. Taking account of factors that prior research has shown tend to facilitate or impede the implementation and replication of interventions may enhance efforts by policymakers and others to promote the adoption of interventions across varied organizational contexts. In examining the relative impact of seven factors identified in our literature review, we found that respondents to our questionnaire reported, generally by large margins, that five of the seven factors significantly facilitated implementation of their intervention. Health IT and financial incentives were the exceptions. Leadership support was the factor that the largest number of respondents reported as having significantly facilitated implementation of their intervention (see table 4). When asked to describe how leadership support facilitated implementation, respondents frequently explained that implementation was aided by a leader who visibly prioritized and endorsed the intervention, allocated the necessary resources, championed its development and implementation, and drove the necessary organizational or behavioral changes. 
Respondents also explained that having champions, specifically clinicians, was a key factor in encouraging cooperation and participation in the intervention by staff, especially fellow clinicians. The prominent role attributed to leadership in implementing the many different types of interventions in our sample suggests that policymakers will have greater success in implementing and replicating interventions to the extent that they can take steps to ensure that strong leadership is in place before interventions are initiated. Respondents typically reported that a combination of additional factors along with leadership support significantly facilitated implementation of their intervention. The 92 respondents who reported leadership support as having significantly facilitated implementation reported, on average, another three factors as having significantly facilitated implementation. Of the 86 respondents who reported at least one factor in addition to leadership support as significantly facilitating implementation, more than half reported staff resources (60), organizational culture (55), and the availability of other tools (50), respectively, as having significantly facilitated implementation. Nearly half (42) reported that financial resources, in addition to leadership, significantly facilitated implementation. Just six respondents reported leadership support and no other factor as having significantly facilitated implementation. In contrast to the five factors that a clear majority of respondents reported having facilitated implementation of their intervention, respondents were more divided on how health IT affected implementation, as shown in table 4. Compared with the other factors, health IT had the highest number of respondents reporting that it impeded implementation of their intervention. Further, a substantial group of respondents reported that health IT was not a factor.
On the other hand, nearly half of respondents reported that health IT either significantly or somewhat facilitated implementation of their intervention. Respondents frequently explained that health IT facilitated implementation of their intervention by enhancing the exchange of information and communication across providers or organizations, facilitating the collection of data or the evaluation of the intervention, and improving the efficiency and productivity of staff. Of those who reported that health IT impeded implementation, respondents commonly cited the limited functional capacity of existing systems or the lack of interoperability across settings as impediments to successful implementation. Other respondents explained that the general lack of health IT altogether acted as a barrier that impeded implementation. Variation in the role of health IT across different types of interventions does not appear to explain the mixed assessment of this factor, as respondents for each of the intervention types included in our sample—with two exceptions—were similarly divided on how health IT affected implementation. However, proportionately more respondents for care coordination or transitions of care interventions as well as care-process-improvement interventions reported health IT as having facilitated implementation compared to respondents for other types of interventions. This result suggests that as policymakers consider different health care interventions, implementation of some of their options will depend more heavily than others on having appropriately configured health IT in place. Financial incentives were most often reported as not a factor. Slightly more than half of our 127 respondents reported financial incentives—as distinct from the related, but broader, financial resources factor—as having not been a factor in implementation of their intervention.
The exception was for the two types of interventions for which financial incentives were an integral component—provider payment restructuring and insurance redesign—where respondents most often reported financial incentives as having significantly facilitated implementation. When asked to explain how financial incentives facilitated or impeded implementation, most respondents simply provided a description of the incentives they used to implement the intervention, such as payments to providers or patients to participate in the intervention. However, a few respondents explained that the expected cost savings generated by the intervention were an indirect incentive to implement it, while other respondents explained that incentives within existing payment systems, or the lack thereof, affected implementation. While the implementation of many interventions included in our sample may not have been affected by financial incentives, current means of paying for health care, such as fee-for-service payment structures, may have hindered the successful implementation of other interventions. Much as they had reported regarding the implementation of their intervention, nearly all respondents expected that leadership support would be very important if one were to attempt to replicate their intervention as widely as possible (see table 5). Leadership support was reported nearly unanimously by respondents as being very important for widespread replication of their intervention, paralleling respondents’ relatively consistent assessment of the effect of leadership on implementation. In addition, a clear majority of respondents expected that each of the other factors—except for financial incentives—would be either very or somewhat important for replication.
In contrast to the highly divided views health IT evoked from respondents regarding its role in the implementation of their interventions, it was reported by a substantial majority of respondents as either very (48) or somewhat (48) important for widespread replication. This could be an indication that, if health-IT-related impediments experienced when implementing the intervention, such as the lack of interoperability across settings, were ameliorated, health IT could be important to the successful replication of some interventions. As with their views on implementation, respondents for care coordination or transitions of care interventions reported more often than respondents for other types of interventions that health IT would be very important for widespread replication. Financial incentives were the factor that drew the most mixed assessment from respondents with regard to their expected importance for the widespread replication of interventions. Nearly half of respondents indicated that financial incentives were not important for widespread replication, which is similar to the view of most respondents regarding the role of such incentives in the implementation of their interventions. Another substantial group of respondents (30) indicated that financial incentives would be very important for replication. When respondents were asked to explain why factors would be important for widespread replication, respondents discussed financial factors more frequently than any other factor. Respondents’ explanations about these financial factors often concerned a misalignment of financial incentives within existing payment systems that limited the attractiveness of replicating interventions that seek to enhance value.
For example, some respondents noted that it would be difficult to replicate interventions that involved providing additional services, such as care coordination, under existing payment systems that typically do not compensate providers for those services. Our work suggests that progress in achieving greater value in health care in the U.S. will depend, in part, on the availability of information regarding the effect of different interventions on quality of care and costs and on how policymakers and others assess and use that information. Such information can guide the choices of policymakers among multiple interventions vying for support, but those decisions will have a sounder basis if the information meets certain criteria regarding its content and strength of evidence. With respect to content, information on the magnitude of an intervention’s effect on both quality of care and costs is needed to determine if an intervention has enhanced value. In the case of the responses to our questionnaire on 127 diverse interventions, we found that this basic level of information was reported as available about half the time. With respect to the strength of evidence, the most critical indication comes from the types of study designs used to produce that information. There are a range of rigorous study designs which can provide credible support for the attribution of observed changes in quality of care and costs to a particular intervention. Our review of studies associated with the 127 interventions examined by our questionnaire found that while a number of studies employed rigorous study designs, a substantially larger number employed weaker designs that could not isolate the effect of an intervention from other factors. 
To the extent that policymakers find and use information on health care interventions that provides sufficient credible evidence on the effects of those interventions on both quality of care and costs, they will be better equipped to determine which interventions produce greater value in health care. Our work also suggests that successful efforts to encourage the widespread adoption of value-enhancing interventions will need to take into account a complex mix of factors, including leadership support, organizational culture, and staff resources, that facilitate the implementation of health care interventions across a wide range of organizational contexts. We requested comments from the Department of Health and Human Services, but none were provided. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To examine the availability of information on the effect of selected health care interventions on quality of care and costs as well as factors that can facilitate the implementation and replication of these interventions, we studied a diverse set of specific interventions that seek to enhance the value of health care through making changes in the way care is delivered. Specifically, these interventions make changes in who delivers health care services, how care is organized, or where care is delivered for a specified population. 
To identify interventions for our study, we drew upon six distinct sources to select a broad and diverse, though not exhaustive, set of interventions that have been implemented in one or more locations in the U.S. or abroad. These sources allowed us to identify a wide range of value-enhancing strategies implemented in different health care settings, such as hospitals, integrated delivery systems, and physician practices, over more than a 10-year time span, including interventions that have not been described in academic or professional literature. We identified 828 interventions of potential relevance to our study through the following six sources:
• A review of relevant literature on health care interventions that make changes in who delivers health care services, how care is organized, or where care is delivered. We conducted several searches using online databases, including Medline and ProQuest Health, to identify articles on interventions that were published from 1999 to 2009.
• A review of interventions contained in the Agency for Healthcare Research and Quality’s (AHRQ) Health Care Innovations Exchange (HCIE) as of August 20, 2009. The HCIE is a Web site that acts as a repository for information on quality improvement interventions and other innovative strategies to improve health care submitted by their implementers. Many of the interventions contained in the HCIE make changes in the way health care is delivered and include information on cost as well as quality.
• A review of the relevant articles contained in the Tufts Medical Center’s Cost-Effectiveness Analysis (CEA) Registry that were published from 1999 to 2009. The CEA Registry is a comprehensive database of health care cost-utility analyses that examine the health benefits and costs of strategies to improve health care. The CEA Registry contains articles from 45 peer-reviewed publications.
• Interviews with experts on health care interventions associated with organizations such as state governments, integrated delivery systems, employer groups, and other countries.
• Information on interventions that we identified from press reports, select journal articles published after 2009, and presentations at conferences.
• Information on interventions submitted by their innovators or evaluators to either the Senate Budget Committee or GAO.
To select interventions for inclusion in our study, we reviewed source documents for each of the potentially relevant interventions that we identified through our six sources. We selected 239 interventions that met the following seven criteria:
• The intervention made a discrete change in who delivers health care services, how care is organized, or where care is delivered.
• The intervention targeted a population or problem that was relevant to the U.S. health care system.
• The intervention may have included health information technology (health IT) as one of its components of change, but health IT was not the intervention’s only component of change.
• The primary goal of the intervention was not focused on increasing access to care.
• The intervention activities fell within the health care system.
• The source document or documents for the intervention either contained information or indicated that information was available on the effect of the intervention on quality of care and its effect on costs. Moreover, the source documents indicated that the intervention enhanced the value of health care by meeting one of the following three conditions: (1) increases quality of care and reduces costs; (2) maintains quality of care and reduces costs; or (3) increases quality of care and maintains costs.
• The intervention was implemented in at least one health care setting.
Interventions that were studied by examining their potential costs and benefits based on simulated outcomes, rather than by analyzing data from their actual implementation, were excluded. To collect information on the 239 health care interventions that we selected for our study, we developed a Web-based questionnaire that contained 22 open- and closed-ended questions on interventions, their effect on quality of care and costs, and factors that may affect their implementation and replication. We sent our questionnaire to 235 individuals who participated in developing, implementing, or evaluating each intervention. We identified these individuals through the source documents that we used to select interventions for our study. We received usable responses—responses that contained relevant information on the effect of the intervention on quality of care, the effect of the intervention on costs, or key factors that may affect implementation—for 127 interventions. We developed protocols for cleaning and analyzing data that we received from questionnaire respondents. These protocols included identifying usable responses, reviewing source documents to clarify responses, and, if necessary, contacting respondents directly to obtain additional information on their intervention. To determine the availability of information on the effect of selected health care interventions on quality of care, we analyzed data that we collected from respondents through our questionnaire. We asked respondents to describe up to five key measures used to assess the effect of their intervention on quality of care and the magnitude—a percentage change or other quantitative assessment—of change observed in each measure described relative to a control group that did not experience the intervention or a baseline assessment made prior to implementing the intervention.
We conducted a content analysis on questionnaire responses to determine the number of respondents who described one or more key measures used to assess the effect of their intervention on quality of care and the number of respondents who reported improvements in those measures attributable to their intervention. As part of our analysis on the availability of information on the effect of selected health care interventions on quality of care, we examined the types of quality measures respondents reported. We conducted a content analysis on questionnaire responses to determine what aspect of care quality—such as patient mortality, hospital readmissions, or patient satisfaction with care—each measure examined. We categorized each measure that respondents described by type based on the aspect of care quality it examined; for example, we categorized a measure that assessed the effect of an intervention on patient mortality as an outcome measure. We categorized quality measures into types that are largely based on the measure domains laid out by AHRQ in its National Quality Measures Clearinghouse. We did not include all measure domains laid out by AHRQ in our analysis, because some domains, such as access to care, fell outside of the scope of our engagement. Moreover, measures that did not clearly specify which aspect of care quality was assessed were categorized as unspecified measures. We analyzed this information to determine the types of quality measures used to assess the effect of each intervention on quality of care. To determine the availability of information on the effect of selected health care interventions on costs, we analyzed data that we collected from respondents through our questionnaire. We asked respondents to report the type of cost savings, such as total dollars saved or dollars saved per patient, calculated to assess the effect of their intervention on costs and the specific amount saved for each type of cost savings calculated.
We also asked respondents if their reported savings accounted for costs associated with implementing the intervention and what information was used to calculate those savings. We determined the number of respondents who reported calculating each type of savings and a specific amount saved for those savings. Furthermore, we analyzed responses by finding the number of respondents who reported accounting for costs associated with implementing the intervention—net cost savings—and the type of information used to calculate those savings. Additionally, we used this information along with information we obtained through our analysis of quality measures to determine (1) the number of respondents who reported a magnitude of improvement in quality measures and a specific amount saved attributable to their intervention and (2) the number of respondents who also reported net cost savings rather than gross cost savings. To identify key criteria that can be used to assess the strength of available evidence on the capacity of interventions to enhance the value of health care, we interviewed methodological experts and conducted a literature review to identify relevant systems for assessing the strength of evidence. We reviewed methodological literature published by entities that have well-established systems for evaluating health care interventions, including the Cochrane Collaboration; AHRQ’s Effective Health Care Program, which includes the Evidence-based Practice Centers; and, in the United Kingdom, the National Institute for Health and Clinical Excellence and the Centre for Reviews and Dissemination. We focused on those entities with systems for evaluating organizational interventions that change the structure or delivery of health care.
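The distinction drawn above between gross savings and net cost savings can be illustrated with a brief sketch. The figures and the `net_savings` helper below are hypothetical and are not drawn from any intervention in our sample; the point is simply that a net figure subtracts implementation costs before any savings are claimed:

```python
# Illustrative only: gross vs. net cost savings for a hypothetical
# intervention evaluated on a per-patient basis.

def net_savings(gross_savings: float, implementation_cost: float) -> float:
    """Net savings account for the cost of implementing the intervention."""
    return gross_savings - implementation_cost

# Hypothetical figures: $450 saved per patient in avoided utilization,
# against $180 per patient spent on staff, training, and tools.
gross_per_patient = 450.00
implementation_per_patient = 180.00

net_per_patient = net_savings(gross_per_patient, implementation_per_patient)
print(f"Gross savings per patient: ${gross_per_patient:,.2f}")
print(f"Net savings per patient:   ${net_per_patient:,.2f}")
# A positive net figure indicates the intervention saved more than it cost
# to run; reporting only the gross figure would overstate the savings.
```

As the sketch suggests, an intervention with large gross savings can still produce negative net savings once implementation costs are counted, which is why the analysis distinguishes the two.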
This led us to pay particular attention to the guidance developed by Cochrane’s Effective Practice and Organisation of Care (EPOC) Group, a collaborative review group that specializes in conducting systematic reviews of organizational interventions. Our review of this methodological literature and guidance together with our expert interviews led us to develop a set of questions to help decision makers and policy analysts who support them to critically examine the strengths and limitations of evidence about health care interventions that seek to enhance value. These questions target three broad areas: (1) assessing the true effect of the intervention on quality of care and costs, (2) assessing the scope of study results, and (3) assessing an intervention’s capacity for replication. We submitted our initial draft questions to several different experts in assessing the comparative effectiveness of health care interventions and received their feedback on the content and clarity of those questions. Based on that feedback, we made revisions, resulting in the criteria described in our report and the set of questions listed in appendix IV. As part of our efforts to identify key criteria for assessing the strengths and limitations of available evidence on the capacity of interventions to enhance the value of health care, we examined the choice of study design used by evaluators to study the interventions for which we received usable responses to our questionnaire. To determine the type of study design used to assess the effect of interventions, we reviewed source documents and questionnaire responses. (See app. III for more information on study designs.) Some interventions reported results from multiple studies. In these cases, we identified each type of study design used to assess the intervention. 
We used this information to find the number of interventions that were assessed using more rigorous study designs, such as randomized controlled trials, and the number of interventions that were assessed using less rigorous study designs, such as pre/post or cross-sectional studies. Our approach is designed to assist decision makers and policy analysts in assessing the strengths and limitations of evidence provided to them about the effects of health care interventions on quality of care and costs. Our approach does not involve the performance of systematic reviews that could synthesize information about those effects from multiple studies. Nor does it attempt to describe a process for producing a numerical or qualitative rating of the methodological strength of a study along one or more specified dimensions. Rather, our approach emphasizes the questions that decision makers and policy analysts should ask and leaves open the format and content of the answers to those questions. To examine factors that can facilitate the implementation and replication of health care interventions that seek to enhance value, we analyzed data collected from respondents through our questionnaire. We reviewed key literature sources and interviewed experts to identify seven factors that may affect implementation, including leadership support, organizational culture, and resources. Respondents were asked to indicate, from a list of closed-ended categorical options, to what degree each of the seven factors facilitated or impeded implementation and to provide an open-ended explanation of how the factors facilitated or impeded implementation. We asked respondents who were familiar with the replication of their intervention to explain if and how the factors differed from site to site.
Respondents were also asked to indicate the expected degree of importance that each factor could have in attempting to replicate the intervention as widely as possible and to explain why these factors were expected to be important. In addition to the factors identified through our literature review, we asked respondents to identify and describe up to three additional factors that facilitated or impeded implementation of their intervention or that would be important for wide-scale replication. All closed-ended responses were analyzed by assessing the frequency distribution of responses for each factor. We conducted a content analysis on open-ended responses to identify common explanations of how these factors affected implementation and why these factors would be important for widespread replication of the intervention. As part of our analysis of factors that may affect implementation and replication, we examined differences in questionnaire responses by intervention type. To determine the types of interventions for which we received usable questionnaire responses, we reviewed source documents and questionnaire responses for each intervention and assigned them to one of eight categories (see app. II for more information about intervention types). To categorize interventions by type, we assessed key intervention characteristics, including the population targeted for behavior change and the levers or activities used to change the way health care services are delivered. For example, a hospital surgical team that implemented a checklist was categorized as a patient safety improvement intervention. Some interventions exhibited key characteristics of more than one type of intervention. For example, a primary care practice that implemented a nurse case manager to facilitate care transitions and employ disease management strategies exhibits key characteristics of both care coordination or transitions of care programs and chronic condition management interventions.
Interventions that exhibited key characteristics of more than one type of intervention were categorized in all appropriate types. To determine if the effect or expected degree of importance of the factors differed by the type of intervention, we assessed the frequency distribution of responses for each factor across intervention types. Although our efforts to identify relevant interventions for our study were extensive, we could not ensure that every intervention meeting our selection criteria had been identified. Therefore, the results from our questionnaire are limited in scope to the 127 interventions for which we received usable responses and cannot be generalized to all value-enhancing health care interventions.
• Interventions that seek to alter provider behavior by systematically changing the basis for provider payments. Examples: providing a single payment, or bundled payment, for all health care services that are delivered for a defined episode of care or a specified period of time; providing physician group practices performance payments if the practice meets or exceeds performance targets.
• Interventions that seek to alter patient behavior by restructuring health insurance plan provisions or related health care benefits. Examples: insurers offer enrollees a tiered network of providers, and enrollees who choose a provider in the higher-cost tier pay higher premiums or cost sharing than enrollees who choose a provider in a lower-cost tier; enrollees are charged a lower or no copay for specific drugs that are part of a recommended medical regimen for a medical condition.
• Interventions that seek to improve care for patients with chronic conditions. These can be implemented in either inpatient or outpatient settings and can focus on patient or clinician activities. Examples: a nurse-social worker team is introduced into a primary care practice to provide education, help patients improve self-management skills, and develop care plans with patients; a multidisciplinary team holds classes for children with severe asthma and their parents to address physical needs and provides group, individual, and family therapy for psychological needs.
• Interventions that seek to prevent or reduce adverse events caused by medical care. Adverse events include improper prescriptions or administration of medications, health care-associated infections, and pressure sores. Examples: a surgical team implements a checklist that enhances team communication and situational awareness among clinicians to prevent wrong-site surgeries; a program of patient risk assessments, specialist consultations, and new equipment is designed to minimize pressure sores.
• Interventions that facilitate patient transfers from one setting to another; some focus on coordination of patient care provided by multiple providers. Examples: an advanced practice nurse and a trained elder peer provide support to older adults who are discharged home after a heart attack or bypass surgery to encourage compliance with medications and lifestyle changes; a team of nurses and social workers works with patients with multiple chronic conditions to coordinate care from multiple providers and to provide ongoing monitoring and referrals.
• Interventions that seek to change a health care organization as a whole through ongoing and iterative reassessment of health care practices, both to reduce inefficiency or waste and to improve patient outcomes. Example: a hospital creates teams trained in “lean” principles, based on Toyota’s manufacturing approach, to identify where changes in routine procedures could reduce waste and increase efficiency.
• Interventions whose primary goal is to improve health by forestalling the development of illness in the first place, such as programs to promote wellness activities and health screenings or to prevent falls. These interventions do not include programs to prevent adverse events.
• Interventions that seek to ensure that clinical staff adhere to specified treatment protocols or other forms of standardized practices. These interventions seek to modify care processes by changing where care is delivered, how care is organized or structured, or who delivers care. Examples: a multi-site intensive care unit telemedicine program; a team of clinicians uses a four-step mobility protocol to regularly assess the functional and clinical status of intensive care unit patients with respiratory failure.
The methodological literature on assessing the effect of interventions places a major emphasis on study design for identifying those studies that have the capacity to assess an intervention’s effect on an outcome. The key strength of rigorous study designs is that they can take account of other factors that could affect the outcome of interest, and thereby isolate the effect of the intervention itself. Randomized controlled trials (RCTs) are widely considered to be among the most rigorous types of study designs because their basic structure inherently minimizes the potential impact of confounding factors on their results. RCTs accomplish this by randomly allocating study participants to groups that either receive the intervention—generally referred to as intervention or treatment groups—or do not receive the intervention—the control groups. The consequence of random allocation is that the only systematic difference between study participants in the two groups is exposure to the intervention. Thus, the effect of all other factors is the same on the two groups and therefore neutralized in making comparisons between the intervention and control groups. A second design type, known as the controlled before and after study, can be used in situations where the random allocation of study participants between intervention and control groups required for an RCT is not feasible.
Controlled before and after studies use data collected from separate treatment and control groups, both before and after the intervention’s implementation, to help to separate the effect of the intervention from that of other factors at work over that time period. In this design type, the control group is generally chosen in a way that is likely to produce a group that is broadly similar to the treatment group prior to the implementation of the intervention. However, methodologists generally recommend an explicit analysis to compare the intervention and control groups used in controlled before and after studies in order to demonstrate that they were in fact similar before the intervention took place. A third design type, an interrupted time series study, is not based on a comparison of intervention and control groups. Instead, it tracks an outcome of interest over time with measurements taken at many different time points both before and after the intervention. The multiple data points from before the implementation of the intervention enable analysts to take account of the impact of other factors on the outcome and thereby isolate the intervention’s effect on that outcome. The interrupted time series design works best when there are data from a substantial number of different time points, both before and after implementation of the intervention. Other types of study designs cannot isolate the effect of an intervention from that of other factors because they provide no separate information on what would have happened without the intervention. For example, in a simple pre/post study all one has is a measurement of the outcome before implementation of the intervention and a measurement of the outcome after the intervention. The observed difference reflects all the factors (including the intervention) affecting the outcome over that time period. 
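The contrast between these design types can be sketched numerically. In the hypothetical example below (all figures invented for illustration), readmission rates are falling everywhere because of an outside trend; a simple pre/post difference folds that trend into the apparent effect, while a controlled before and after comparison uses the control group's change to net it out:

```python
# Hypothetical readmission rates (%) illustrating why a pre/post difference
# conflates an intervention's effect with a background trend, while a
# controlled before-and-after (difference-in-differences) comparison does not.

# Suppose readmissions are falling everywhere by 2 points over the period,
# and the intervention itself reduces them by a further 3 points.
trend = -2.0
true_effect = -3.0

intervention_pre, control_pre = 20.0, 20.0
intervention_post = intervention_pre + trend + true_effect  # 15.0
control_post = control_pre + trend                          # 18.0

# Simple pre/post estimate: wrongly attributes the trend to the intervention.
pre_post_estimate = intervention_post - intervention_pre    # -5.0

# Controlled before-and-after estimate: the control group's change stands in
# for what would have happened without the intervention.
did_estimate = (intervention_post - intervention_pre) - (
    control_post - control_pre
)  # -3.0

print(f"Pre/post estimate:             {pre_post_estimate:+.1f} points")
print(f"Controlled before/after (DiD): {did_estimate:+.1f} points")
print(f"True intervention effect:      {true_effect:+.1f} points")
```

Here the trend runs in the same direction as the intervention, so the pre/post estimate overstates the effect; had the trend run the other way, the pre/post estimate would have understated it, which is the point made in the surrounding discussion.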
Because confounding factors could potentially affect the outcome in either the same or the opposite direction as the intervention, the actual effect of the intervention itself could be either greater or smaller than the simple pre/post difference. Even the direction of the intervention’s effect, to increase or decrease the outcome, could be the opposite of the overall change from pre to post. That is why the results of a pre/post study generally cannot be relied on to provide even an approximation of a health care intervention’s likely effect on quality of care and costs. The following three tables provide a set of questions that are intended to help policymakers and others find the information needed to assess the strengths and limitations of evidence, drawn from studies of health care interventions that seek to enhance value, relating to their impact on quality of care and costs. The three tables focus on the three broad dimensions described in the body of this report: (1) the credibility of evidence that attributes changes in quality of care and costs to the intervention, (2) the applicability of study results to broader populations of interest, and (3) the intervention’s capacity for widespread replication. Each table lists a series of questions that highlight key information for assessing the evidence produced by relevant studies, along with guidance on how to look for that information in published reports. Answers to most of these questions may be found in relevant sections of those reports; if not, one can ask the investigators who conducted the studies. 
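The limitation of the simple pre/post design discussed above, and the way a controlled before and after design addresses it, can be illustrated with a small simulation. All numbers, variable names, and the difference-in-differences calculation here are hypothetical illustrations, not figures from any study GAO reviewed:

```python
# Hypothetical illustration of the study-design issue discussed above;
# all numbers are invented and do not come from any actual study.

def pre_post_estimate(treat_pre, treat_post):
    """Simple pre/post design: attributes the entire observed change
    to the intervention, including the effect of confounding factors."""
    return treat_post - treat_pre

def controlled_before_after_estimate(treat_pre, treat_post,
                                     ctrl_pre, ctrl_post):
    """Controlled before and after design (a difference-in-differences):
    the control group's change stands in for what would have happened
    without the intervention, so subtracting it isolates the effect."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Suppose a secular trend lowers infection rates by 2 points everywhere,
# and the intervention itself lowers the treated group's rate by 3 more.
treated_before, treated_after = 10.0, 5.0   # treated hospitals: 10 -> 5
control_before, control_after = 10.0, 8.0   # control hospitals: 10 -> 8

print(pre_post_estimate(treated_before, treated_after))         # -5.0
print(controlled_before_after_estimate(treated_before, treated_after,
                                       control_before, control_after))  # -3.0
```

In this constructed example, the naive pre/post estimate (-5.0) overstates the intervention's true effect (-3.0) by folding in the background trend, which is exactly the confounding problem the rigorous designs are meant to neutralize.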
While this set of questions is selective and does not cover every potential methodological issue, the information it calls for should provide policymakers a basis for making an informed assessment of the overall credibility and scope of the available evidence regarding the apparent impact of these interventions on quality of care and costs, as well as the demonstrated capacity of those interventions for widespread replication. In addition to the individual named above, Jessica Farb, Assistant Director; Kristin Ekelund; Krister Friday; Katie Mack; and Eric Peterson made key contributions to this report.
The U.S. has devoted an increasing proportion of its economy and federal budget to the provision of health care services, but high levels of spending do not guarantee good care. Policymakers, health practitioners, and others have implemented numerous health care interventions that make discrete changes in the organization of health care services in order to enhance the value of health care--that is, improve the quality of care while reducing costs. Examples include programs to reduce bloodstream infections and to coordinate patient care following hospital discharges. This report (1) examines the availability of evidence on the effect of selected interventions on quality of care and costs; (2) identifies key dimensions for assessing the strength of such evidence; and (3) examines factors that can facilitate the implementation and replication of health care interventions. GAO identified a broad and diverse set of health care interventions using published and unpublished sources. For 127 of those interventions, GAO analyzed responses to a questionnaire that it sent to persons knowledgeable about available information on the effect of that particular intervention on quality of care and costs. GAO's questionnaire also asked respondents to assess the relative importance of seven factors in the implementation and potential replication of the health care intervention. In addition, GAO consulted the methodological literature and experts on assessing evidence on the effects of health care interventions. About half of the respondents to our questionnaire reported some information on the effect of an intervention on both quality of care and costs--the two types of data needed to determine whether or to what extent a particular intervention enhanced the value of health care. Overall, the vast majority of our respondents reported at least some information on the observed effect of the intervention on quality of care. 
Relatively fewer--though still over half--of our respondents reported at least some information on the effect of the intervention on costs. Whether or not policymakers can rely on information that indicates an intervention enhances value depends on the strength of the underlying evidence about quality and cost effects. From studies on the effect of health care interventions on quality of care and costs, policymakers and others can assess the strength and limitations of available evidence along three dimensions. One, the credibility of evidence on the effect of health care interventions on quality of care and costs depends primarily on whether those studies apply rigorous study designs. Two, the applicability of the results of studies to a broader population depends on the extent to which the study population is representative of that larger population. Finally, the capacity of health care interventions for widespread replication can be examined in terms of the consistency of results obtained by each intervention across diverse health care organizational contexts. Respondents reported, generally by large margins, that leadership support as well as other factors, such as organizational culture and staff resources, significantly facilitated implementation. However, respondents were more divided when asked about the reported effect that health IT had on implementation, and most respondents reported that financial incentives were not a factor in the implementation of the intervention. A majority of respondents reported that each of these factors, with the exception of financial incentives, would be either very or somewhat important if one were to attempt to replicate the intervention as widely as possible. Progress in achieving greater value in the U.S. health care system will depend, in part, on the availability of information regarding the effect of interventions on quality of care and costs and on how policymakers and others assess and use that information. 
Information can guide the choices of policymakers among multiple interventions vying for support, but those decisions will have a sounder basis if the information meets certain criteria regarding its content and strength of evidence. At least some information on both cost and quality effects was available for about half of the interventions GAO examined. However, for many interventions the credibility of this information was put into question by widespread reliance on studies that did not incorporate rigorous designs that could isolate the effect of an intervention from other factors. We requested comments from the Department of Health and Human Services, but none were provided.
Each fiscal year, the Millennium Challenge Act requires MCC to select countries as eligible for MCA assistance by identifying candidate countries, establishing an eligibility methodology, and making eligibility determinations. MCC evaluates eligible countries’ proposals and negotiates compacts, which must be approved by the MCC board. The Threshold Program assists countries that are not deemed eligible but show a commitment to MCA objectives. MCC is governed by a board of directors consisting of U.S. government and other representatives. For fiscal year 2004, the Millennium Challenge Act limited candidates to low-income countries—those with per capita incomes less than or equal to the International Development Association (IDA) cutoff for that year ($1,415)—that also were eligible for IDA assistance. This provision limited candidacy in the MCA’s first year to the poorest low-income countries. For fiscal year 2005, candidates were required only to have incomes less than or equal to the IDA ceiling for that year ($1,465). Additionally, for fiscal years 2004 and 2005, candidates could not be ineligible for U.S. economic assistance under the Foreign Assistance Act of 1961. (See app. II for a list of candidate countries for fiscal years 2004 and 2005.) The Millennium Challenge Act requires that the MCC board base its eligibility decisions, “to the maximum extent possible,” on objective and quantifiable indicators of a country’s demonstrated commitment to the criteria enumerated in the act. MCC selected its indicators based on their relationship to growth and poverty reduction, the number of countries they cover, their transparency and public availability, and their relative soundness and objectivity. For fiscal years 2004 and 2005, MCC’s process for determining country eligibility for MCA assistance had both a quantitative and a discretionary component (see fig. 1). 
MCC first identified candidate countries that performed above the median in relation to their peers on at least half of the quantitative indicators in each of the three policy categories—Ruling Justly, Investing in People, and Encouraging Economic Freedom—and above the median on the indicator for control of corruption. (See app. III for a table describing the indicators, listing their sources, and summarizing the methodologies on which they are based.) In addition, MCC considered other relevant information—in particular, whether countries that scored substantially below the median (at the 25th percentile or lower) on an indicator were addressing any shortcomings related to that indicator. MCC also considered supplemental information to address gaps, lags, or other data weaknesses as well as additional material information. The Millennium Challenge Act requires that, within 5 days of the board’s eligibility determinations, the MCC Chief Executive Officer submit a report to congressional committees containing a list of the eligible countries and “a justification for such eligibility determination” and publish the report in the Federal Register. Eligible countries are invited to submit compact proposals, which are to be developed in consultation with members of civil society, including the private sector and NGOs. However, a country’s eligibility does not guarantee that MCC will sign and then fund a compact with that country. MCC is to sign compacts only with national governments. Under the act, the duration of compacts is limited to a maximum of 5 years; MCC expects to approve compacts with durations of 3 to 5 years. 
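The quantitative screen described at the start of this section lends itself to a simple sketch. The indicator names, category groupings, and scores below are hypothetical placeholders, not MCC's actual data or its full methodology:

```python
# Sketch of the median-based eligibility screen described above. The
# indicators, categories, and scores are illustrative assumptions only.
from statistics import median

def passes_screen(country, peers, categories,
                  corruption="control_of_corruption"):
    """country: dict of indicator -> the candidate's score.
    peers: dict of indicator -> list of peer-group scores.
    categories: dict of policy category -> list of its indicators."""
    medians = {ind: median(scores) for ind, scores in peers.items()}
    # Hard requirement: above the median on control of corruption.
    if country[corruption] <= medians[corruption]:
        return False
    # Above the median on at least half the indicators in each category.
    for indicators in categories.values():
        above = sum(country[i] > medians[i] for i in indicators)
        if above * 2 < len(indicators):
            return False
    return True

# Hypothetical data: three categories, a five-country peer group.
categories = {
    "ruling_justly": ["control_of_corruption", "rule_of_law"],
    "investing_in_people": ["immunization_rate"],
    "economic_freedom": ["inflation_score"],
}
peers = {ind: [1, 2, 3, 4, 5]
         for inds in categories.values() for ind in inds}  # median = 3
candidate = {"control_of_corruption": 4, "rule_of_law": 2,
             "immunization_rate": 5, "inflation_score": 4}
print(passes_screen(candidate, peers, categories))  # True
```

Under this sketch the candidate passes: it clears the corruption hurdle (4 > 3) and beats the peer median on at least half the indicators in every category; lowering its corruption score to the median or below would screen it out regardless of its other scores.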
MCA funds are not earmarked for specific projects or countries, and money not obligated in the fiscal year for which it was appropriated can be used in subsequent fiscal years. For fiscal years 2004 and 2005, Congress has directed that MCC use its existing appropriations to fully fund a compact—that is, obligate the entire amount anticipated for the compact’s duration. Funding for compacts and the Threshold Program must be drawn from the appropriation for the fiscal year in which the country was eligible. MCC aims to be among the largest donors in recipient countries, which, according to MCC officials, creates an incentive for eligible countries to “buy into” MCC’s principles of policy reform, sustainable economic growth, country partnership, and results. The Millennium Challenge Act authorizes a limited amount of assistance to certain candidate countries to help them become eligible for MCA assistance. These candidate countries must (1) meet the fiscal year 2004 or 2005 requirements for MCA candidacy and (2) demonstrate a significant commitment to meeting the act’s eligibility criteria despite failing to meet those criteria. MCC has implemented these legislative provisions as its Threshold Program. Figure 2 compares features of MCC compact and Threshold Program assistance; appendix IV describes the Threshold Program. MCC has broad authority under the Millennium Challenge Act to enter into contracts and business relationships. The act establishes the MCC Board of Directors and assigns it a key decision-making role in the corporation’s activities, including those related to implementing the compact program. The act also makes provisions for the board to consult with Congress and provide general supervision of MCC’s IG. The board consists of the Secretary of State (Board Chair), the Secretary of the Treasury (Vice Chair), the USAID Administrator, and the U.S. Trade Representative, in addition to MCC’s Chief Executive Officer. 
The board has four other positions, which are filled by Presidential appointment with the approval of the Senate. Two of these positions have been filled. (For a timeline of key events and milestones since MCC’s launch, see app. V.) For fiscal years 2004 and 2005, the MCC board based its determinations of countries’ eligibility on its quantitative indicator methodology as well as on discretion. Although MCC published the countries’ indicator scores at its Web site, some of the indicator source data used to generate the scores were not readily available. Finally, we found that reliance on the indicators carried certain inherent limitations. MCC used the 16 quantitative indicators, as well as the discretion implicit in the Millennium Challenge Act, to select 17 countries as eligible for MCA compact assistance for fiscal years 2004 and 2005 (see fig. 3). Fiscal year 2004: In May 2004, the MCC board selected 16 countries as eligible for fiscal year 2004 funding. The countries deemed eligible included 13 that met the quantitative indicator criteria and 3 that did not (Bolivia, Georgia, and Mozambique). Another 6 countries met the criteria but were not deemed eligible. Fiscal year 2005: In October 2004, the MCC board selected 16 countries as eligible for fiscal year 2005 funding. The countries deemed eligible included 14 countries that met the indicator criteria and 2 countries that did not (Georgia and Mozambique). Ten countries met the criteria but were not deemed eligible. Fifteen of the 16 countries also had been deemed eligible for fiscal year 2004; the only new country was Morocco. MCC did not provide Congress its justifications for the 13 countries that met the indicator criteria but were not deemed eligible for fiscal years 2004 and 2005 (one of these countries, Tonga, did not score substantially below the median on any indicator). The act does not explicitly require MCC to include a justification to Congress for why these countries were not deemed eligible. 
In addition, our analysis of countries that met the indicator criteria but were not deemed eligible suggests that, besides requiring that a country score above the median on the indicator for control of corruption, MCC placed particular emphasis on three Ruling Justly indicators (political rights, civil liberties, and voice and accountability) in making its eligibility determinations. In fiscal years 2004 and 2005, 6 of the 13 countries that met the indicator criteria but were not deemed eligible had scores equal to or below the median on these three indicators. On the other hand, the 13 countries that were not deemed eligible performed similarly to the eligible countries on the other three Ruling Justly indicators—government effectiveness, rule of law, and control of corruption—as well as on the indicators for Investing in People and Encouraging Economic Freedom. Although MCC published its country scores for all of the indicators at its Web site, some of the indicator source data used to generate the scores were not readily available to the public. We found that source data for nine of the indicators were accessible via hyperlinks from MCC’s Web site, making it possible to compare those data with MCC’s published country scores. However, for the remaining seven indicators, we encountered obstacles to locating the source data, without which candidate countries and other interested parties would be unable to reproduce and verify MCC’s results. Primary education completion rates: The published indicators were created with data from several sources and years, and not all of these data were available on line. Primary education and health spending (percentage of gross domestic product): When national government data were unavailable, MCC used either country historical data or data from the World Bank to estimate current expenditures. 
Diphtheria and measles immunization rate: The general hyperlink at the MCC Web site did not link to the data files used to create the published indicators. One-year consumer price inflation: The published indicators were created with a mix of data from several data sources and different years. Fiscal policy: The published indicators were created with International Monetary Fund (IMF) data that are not publicly available. Days to start a business: Updated indicators were not published until after the board had made its fiscal year 2004 eligibility decisions. MCC’s use of the quantitative indicator criteria in the country selection process for fiscal years 2004 and 2005 involved the following inherent difficulties: Owing to measurement uncertainty, the scores of 17 countries may have been misclassified as above or below the median. In fiscal years 2004 and 2005, 7 countries did not meet the quantitative indicator criteria because of corruption scores below the median, but given measurement uncertainty their true scores may have been above the median. Likewise, 10 countries met the indicator criteria with corruption scores above the median, but their true scores may have been below the median. Missing data for the days to start a business and trade policy indicators reduced the number of countries that could achieve above-median scores for those indicators. For fiscal years 2004 and 2005, 20 and 22 countries, respectively, lacked data for the indicator for days to start a business, and 18 and 13 countries, respectively, lacked data for the trade policy indicator. Our analysis suggests that missing data for these two indicators may have reduced the number of countries that passed the Encouraging Economic Freedom category. 
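The misclassification risk noted above can be made concrete with a rough check. This sketch assumes approximately normal measurement error and uses hypothetical scores and standard errors; it is not the actual procedure used to produce the indicator ratings:

```python
# Illustrative check of above/below-median misclassification risk,
# assuming roughly normal measurement error (hypothetical numbers only).

def classification_uncertain(score, peer_median, std_error, z=1.96):
    """True if the score's ~95 percent confidence interval straddles the
    peer-group median, so the country could truly lie on either side."""
    return abs(score - peer_median) < z * std_error

# A country 0.1 point above a peer median of 0.0, measured with a 0.15
# standard error, could plausibly be below the median; a country 0.5
# above could not, at this confidence level.
print(classification_uncertain(0.1, 0.0, 0.15))  # True
print(classification_uncertain(0.5, 0.0, 0.15))  # False
```

A check along these lines, applied to scores that fall near the median, is one way to flag the countries whose pass/fail classification is statistically ambiguous.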
The narrow and undifferentiated range of possible scores for the political rights, civil liberties, and trade policy indicators led to clustering—“bunching”—of scores around the median, making the scores less useful in distinguishing among countries’ performances. In fiscal year 2005, for example, 46 countries, or two-thirds of the countries with trade policy data, received a score of 4 (the median) or 5 (the lowest score possible) for trade policy. Our analysis suggests that bunching potentially reduced the number of countries that passed the Ruling Justly and Economic Freedom categories and limited MCC’s ability to determine whether countries performed substantially below their peers in affected indicators. With respect to the indicator for control of corruption, countries deemed eligible for MCA compact assistance represent the best performers among their peers; at the same time, studies have found that, in general, countries with low per capita income also score low on corruption indexes. Of the 17 MCA compact eligible countries, 11 ranked below the 50th percentile among the 195 countries rated by the World Bank Institute for control of corruption; none scored in the top third. MCC has received compact proposals, concept papers, or both, from 16 countries; of these, it has approved a compact with one country and is negotiating with four others. At the same time, MCC continues to refine its process for reviewing and assessing compact proposals. As part of this process, MCC has identified elements of country program implementation and fiscal accountability that can be adapted to eligible countries’ compact objectives and institutional capacities. Between August 2004 and March 2005, MCC received compact proposals, concept papers, or both, from 16 MCA compact-eligible countries, more than half of which submitted revised proposal drafts in response to MCC’s assessments. 
In March 2005, MCC approved a 4-year compact with Madagascar for $110 million to fund rural projects aimed at enhancing land titling and security, increasing financial sector competition, and improving agricultural production technologies and market capacity; MCC and Madagascar signed the compact on April 18, 2005. MCC is negotiating compacts with Cape Verde, Georgia, Honduras, and Nicaragua and is conducting in-depth assessments of proposals from two additional countries. Figure 4 summarizes the types of projects that eligible countries have proposed and that MCC is currently reviewing. The countries’ initial proposals and concept papers requested about $4.8 billion; those that MCC is currently reviewing (see fig. 4) and negotiating request approximately $3 billion over 3 to 5 years. Our analysis—based on MCC’s goal of being a top donor as well as Congress’s requirement that the corporation fund compacts in full—shows that the $2.4 billion available from fiscal year 2004 and 2005 appropriations will allow MCC to fund between 4 and 14 compacts, including Madagascar’s compact, for those years. MCC’s $110 million compact with Madagascar, averaging $27.5 million per year, would make it the country’s fifth largest donor (see app. VI for a list of the largest donors to MCA compact-eligible countries in fiscal years 2002-2003). As of April 2005, MCC is continuing to refine its process for developing compacts. According to MCC officials, the compact development process is open ended and characterized by ongoing discussions with eligible countries. According to a recent IG report, MCC’s negotiating a compact with Madagascar has served as a prototype for completing compacts with other countries. At present, the compact proposal development and assessment process follows four steps (see fig. 5). Step 1: Proposal development. MCC expects eligible countries to propose projects and program implementation structures, building on existing national economic development strategies. 
For instance, the Honduran government’s proposal is based on its Poverty Reduction Strategy Paper (PRSP) and a subsequent June 2004 implementation plan. MCC also requires that eligible countries use a broad-based consultative process to develop their proposals. MCC staff discuss the proposal with country officials during this phase of compact development. Although MCC does not intend to provide funding to countries for proposal development, some countries have received grants from regional organizations for proposal development. Step 2: Proposal submission and initial assessment. Eligible countries submit compact proposals or concept papers. MCC has not specified deadlines for proposal submission or publicly declared the limits or range of available funding for individual compacts. According to MCC officials, the absence of deadlines and funding parameters permits countries to take initiative in developing proposals. However, according to U.S.-based NGOs, the lack of deadlines has caused some uncertainty and confusion among eligible country officials. Honduran officials told us that knowing a range of potential funding would have enhanced their ability to develop a more focused proposal. During this stage, MCC conducts a preliminary assessment of the proposal, drawing on its staff, contractors, and employees of other U.S. government agencies. This assessment examines the potential impact of the proposal’s strategy for economic growth and poverty reduction, the consultative process used to develop the proposal, and the indicators for measuring progress toward the proposed goals. According to MCC, some eligible countries have moved quickly to develop their MCC programs. Others initially were unfamiliar with MCC’s approach and some faced institutional constraints. MCC works with these countries to develop programs that it can support. In addition, MCC is exploring ways—such as providing grants—to facilitate compact development and implementation. 
Once MCC staff determine that they have collected sufficient preliminary information, they seek the approval of MCC’s Investment Committee to conduct a more detailed analysis, known as due diligence. Step 3: Detailed proposal assessment and negotiation. MCC’s due diligence review includes an analysis of the proposed program’s objectives and its costs relative to potential economic benefits. Among other things, the review also examines the proposal’s plans for program implementation, including monitoring and evaluation; for fiscal accountability; and for coordination with USAID and other donors. In addition, the review considers the country’s commitment to MCC eligibility criteria and legal considerations pertaining to the program’s implementation. During their review, MCC staff seek the approval of the Investment Committee to notify Congress that the corporation intends to initiate compact negotiations; following completion of the review, MCC staff request the committee’s approval to enter compact negotiations. When the negotiations have been concluded, the Investment Committee decides whether to approve submission of the compact text to the MCC board. Step 4: Board review and compact signing. The MCC board reviews the compact draft. Before the compact can be signed and funds obligated, the board must approve the draft and MCC must notify appropriate congressional committees of its intention to obligate funds. MCC has identified several broadly defined elements of program implementation and fiscal accountability that it considers essential to ensuring achievement of compact goals and proper use of MCC funds. As signatories to the compact, MCC and the country government will be fundamental elements of this framework. However, MCC and eligible countries can adapt other elements (see fig. 6) by assigning roles and responsibilities to governmental and other entities according to the countries’ compact objectives and institutional capacities. 
Madagascar’s compact incorporates these elements in addition to an advisory council composed of private sector and civil society representatives, as well as local and regional government officials. The compact also requires that MCA-Madagascar, the oversight entity, adopt additional plans and agreements before funds can be disbursed, including plans for fiscal accountability and procurement. In addition, the compact requires the adoption of a monitoring and evaluation plan; provides a description of the plan’s required elements; and establishes performance indicators for each of Madagascar’s three program objectives, which are linked to measures of the program’s expected overall impact on economic growth and poverty reduction. MCC expects to disburse funds in tranches as it approves Madagascar’s completed plans and agreements. According to the IG, MCC officials expect to make the initial disbursements within 2 months after signing the compact. MCC has received advice and support from USAID, State, Treasury, and USTR and has signed agreements with five U.S. agencies for program implementation and technical assistance. In addition, MCC is consulting with other donors in Washington, D.C., and in the field to use existing donor expertise. MCC is also consulting with U.S.-based NGOs as part of its domestic outreach effort; however, some NGOs raised questions about the involvement of civil society groups. (See app. VII for more details of MCC’s coordination efforts.) MCC initially coordinated primarily with U.S. agencies on its board and is expanding its coordination efforts to leverage the expertise of other agencies. USAID and the Department of State in Washington, D.C., and in compact-eligible countries, have facilitated meetings between MCC officials and donors and representatives of the private sector and NGOs in eligible countries. 
In addition, several of the six USAID missions contacted by GAO reported that their staff had provided country-specific information, had observed MCC-related meetings between civil society organizations and governments, or had informed other donors about MCC. MCC has also coordinated with the Department of the Treasury and USTR. For example, according to MCC officials, MCC has regularly briefed these agencies on specific elements of compact proposals and established an interagency working group to discuss compact-related legal issues. Since October 2004, MCC has expanded its coordination through formal agreements with five U.S. agencies, including the Census Bureau, Army Corps of Engineers, and Department of Agriculture, that are not on the MCC board. MCC has obligated more than $6 million for programmatic and technical assistance through these agreements, as shown in figure 7. MCC has received information and expertise from key multilateral and bilateral donors in the United States and eligible countries. For example, World Bank staff have briefed MCC regarding eligible countries, and officials from the Inter-American Development Bank said that they have provided MCC with infrastructure assessments in Honduras. According to MCC, most donor coordination is expected to occur in eligible countries rather than at the headquarters level. In some cases, MCC is directly coordinating its efforts with other donors through existing mechanisms, such as a G-17 donor group in Honduras. In addition to soliciting donor input, MCC officials have encouraged donors not to displace assistance to countries that receive MCA funding. Donors in Honduras told us that MCA funding to that country is unlikely to reduce their investment, because sectors included in the country’s proposal have additional needs that would not be met by MCA. 
According to MCC officials, MCC is holding monthly meetings with a U.S.-based NGO working group and hosted five public meetings in 2004 in Washington, D.C., as part of its domestic outreach efforts. The NGOs have shared expertise in monitoring and evaluation and have offered suggestions that contributed to the modification of 1 of MCC’s 16 quantitative indicators. In addition, MCC has met with local NGOs during country visits. Some U.S.-based NGOs have raised questions about the involvement of NGOs in this country and of civil society groups in compact-eligible countries. Environmental NGOs told us in January 2005 that MCC had not engaged with them since initial outreach meetings; however, MCC subsequently invited NGOs and other interested entities to submit proposals for a quantitative indicator of a country’s natural resources management. Representatives of several NGOs commented that MCC lacks in-house expertise and staff to monitor and assess civil society participation in compact development. In addition, U.S.-based NGOs expressed concern that their peers in MCA countries have not received complete information about the proposal development process. Since starting up operations, MCC has made progress in developing key administrative infrastructures that support its program implementation. MCC has also made progress in establishing corporatewide structures for accountability, governance, internal control, and human capital management, including establishing an audit and review capability through its IG, adopting bylaws, providing ethics training to employees, and expanding its permanent full-time staff. However, MCC has not yet completed plans, strategies, and time frames needed to establish these essential management structures on a corporatewide basis. (See fig. 8 for a detailed summary of MCC’s progress.) 
During its first 15 months, MCC management focused its efforts on establishing essential administrative infrastructures—the basic systems and resources needed to set up and support its operations—which also contribute to developing a culture of accountability and control. In February 2004, MCC acquired temporary offices in Arlington, Virginia, and began working to acquire a permanent location. In addition, consistent with its goal of a lean corporate structure with a limited number of full-time employees, MCC outsourced administrative aspects of its accounting, information technology, travel, and human resource functions. Further, MCC implemented various other administrative policies and procedures to provide operating guidance to staff and enhance MCC’s internal control. MCC management continues to develop other corporate policies and procedures, including policies that will supplement federal travel and acquisition regulations. Accountability requires that a government organization effectively demonstrate, internally and externally, that its resources are managed properly and used in compliance with laws and regulations and that its programs are achieving their intended goals and outcomes and are being provided efficiently and effectively. Important for organizational accountability are effective strategic and performance planning and reporting processes that establish, measure, and report an organization’s progress in fulfilling its mission and meeting its goals. External oversight and audit processes provide another key element of accountability. During its initial 15 months, MCC developed and communicated to the public its mission, the basic tenets of its corporate vision, and key program-related decisions by the MCC board. MCC began its strategic planning process when key staff met in January 2005 to begin setting strategic objectives, and it expects to issue the completed plan in the coming months. 
In addition, MCC arranged with its IG for the audit of its initial year financial statements (completed by an independent public accounting firm) and for two program-related IG reviews. However, to date, MCC has not completed a strategic plan or established specific implementation time frames. In addition, MCC has not yet established annual performance plans, which would facilitate its monitoring of progress toward strategic and annual performance goals and outcomes and its reporting on such progress internally and externally. According to MCC officials, MCC intends to complete its comprehensive strategic and performance plans by the end of fiscal year 2005.

Corporate governance can be viewed as the formation and execution of collective policies and oversight mechanisms to establish and maintain a sustainable and accountable organization while achieving its mission and demonstrating stewardship over its resources. Generally, an organization’s board of directors has a key role in corporate governance through its oversight of executive management, corporate strategies, risk management and audit and assurance processes, and communications with corporate stakeholders. During its initial 15 months, the MCC board adopted bylaws regarding board composition and powers, meetings, voting, fiscal oversight, and the duties and responsibilities of corporate officers and oversaw management’s efforts to design and implement the compact program. According to MCC, during a recent meeting of the board to discuss corporate governance, the Chief Executive Officer solicited feedback from the board regarding defining and improving the governance process. MCC’s board established a compensation committee in March 2005, and a charter for the committee is being drafted. In addition, MCC is preparing, for board consideration, a policy on the board’s corporate governance.
As drafted, the policy identifies the board’s statutory and other responsibilities, elements of board governance, rules and procedures for board decision-making, and guidelines for MCC’s communications with the board. With regard to MCC board membership, seven of the nine board members have been appointed and installed. Through board agency staff, MCC staff have regularly informed board members—four of whom are heads of other agencies or departments—about pending MCC matters. The board has not completed a comprehensive strategy or plan for carrying out its responsibilities—specifically, it has not defined the board’s and management’s respective roles in formulating and executing corporate strategies, developing risk management and audit and assurance processes, and communicating and coordinating with corporate stakeholders. Moreover, although the bylaws permit the board to establish an audit committee—to support the board in accounting and financial reporting matters; determine the adequacy of MCC’s administrative and financial controls; and direct the corporation’s audit function, which is provided by the IG and its external auditor—the board has not yet done so. Finally, two of the MCC board’s four other positions have not yet been filled.

Internal control provides reasonable assurance that key management objectives—efficiency and effectiveness of operations, reliability of financial reporting, and compliance with applicable laws and regulations—are being achieved.
Generally, a corporatewide internal control strategy is designed to create and maintain an environment that sets a positive and supportive attitude toward internal control and conscientious management; assess, on an ongoing basis, the risks facing the corporation and its programs from both external and internal sources; implement efficient control activities and procedures intended to effectively manage and mitigate areas of significant risk; monitor and test control activities and procedures on an ongoing basis; and assess the operating effectiveness of internal control, reporting and addressing any weaknesses. During its first 15 months, MCC took several actions that contributed to establishing effective internal control. Although it did not conduct its own assessment of internal control, MCC management relied on the results of the IG reviews and external financial audit to support its conclusion that key internal controls were valid and reliable. Further, MCC implemented processes for identifying eligible countries and internal controls through its due diligence reviews of proposed compacts, establishment of the Investment Committee to assist MCC staff in negotiating and reviewing compact proposals, and the board’s involvement in approving negotiated compacts. In addition, MCC instituted an Ethics Program, covering employees as well as outside board members, to provide initial ethics orientation training for new hires and regularly scheduled briefings for employees on standards of conduct and statutory rules. In April 2005, MCC officials informed us that they had recently established an internal controls strategy group to identify internal control activities to be implemented over the next year, reflecting their awareness of the need to focus MCC’s efforts on the highest-risk areas.
However, MCC has not completed a comprehensive strategy and related time frames for ensuring the proper design and incorporation of internal control into MCC’s corporatewide program and administrative operations. For example, MCC intends to rely on contractors for a number of operational and administrative services; however, this strategy will require special consideration in its design and implementation of specific internal controls.

Cornerstones of human capital management include leadership; strategic human capital planning; acquiring, developing, and retaining talent; and building a results-oriented culture. In its initial year, MCC human capital efforts focused primarily on establishing an organizational structure and recruiting employees necessary to support program design and implementation and corporate administrative operations (see app. VIII for a diagram of MCC’s organizational structure). MCC set short- and longer-term hiring targets, including assigning about 20 employees—depending on the number and types of compacts that have been signed—to work in MCA compact-eligible countries; it also identified needed positions and future staffing levels through December 2005 based on its initial operations. With the help of an international recruiting firm, MCC expanded its permanent full-time staff from 7 employees in April 2004 to 107 employees in April 2005; it intends to employ no more than 200 permanent full-time employees by December 2005 (see fig. 9). In addition, MCC hired 15 individuals on detail, under personal services contracts, or as temporary hires, as well as a number of consultants. Finally, in January 2005, MCC hired a consultant to design a compensation program to provide employees with pay and performance incentives and competitive benefits, including performance awards and bonuses, retention incentives, and student loan repayments.
MCC officials told us that they intend the program to be comparable with those of federal financial agencies, international financial institutions, and multilateral and private sector organizations. (Fifteen of these positions are administratively determined; Congress authorized 30 such positions for MCC in the Millennium Challenge Act.)

In its first 15 months, MCC took important actions to design and implement the compact program—making eligibility determinations, defining its compact development process, and coordinating and establishing working agreements with key stakeholders. MCC also acted to establish important elements of a corporatewide management structure needed to support its mission and operations, including some key internal controls. However, MCC has not yet fully developed plans that define the comprehensive actions needed to establish key components of an effective management structure. We believe that, to continue to grow into a viable and sustainable entity, MCC needs to approve plans with related time frames that identify the actions required to build a corporatewide foundation for accountability, internal control, and human capital management and begin implementing these plans. In addition, MCC’s board needs to define its responsibilities for corporate governance and oversight of MCC and develop plans or strategies for carrying them out. As MCC moves into its second year of operations, it recognizes the need to develop comprehensive plans and strategies in each of these areas. Implementation of such plans and strategies should enable MCC’s management and board to measure progress in achieving corporate goals and objectives and demonstrate the corporation’s accountability and control to Congress and the public. As part of our ongoing work for your committee, we will continue to monitor MCC’s efforts in these areas.
We recommend that the Chief Executive Officer of the Millennium Challenge Corporation complete the development and implementation of overall plans and related time frames for actions needed to establish

1. Corporatewide accountability, including implementing a strategic plan, establishing annual performance plans and goals, using performance measures to monitor progress in meeting both strategic and annual performance goals, and reporting internally and externally on its progress in meeting those goals.

2. Effective internal control over MCC’s program and administrative operations, including establishing a positive and supportive internal control environment; a process for ongoing risk assessment; control activities and procedures for reducing risk, such as measures to mitigate risk associated with contracted operational and administrative services; ongoing monitoring and periodic testing of control activities; and a process for assessing and reporting on the effectiveness of internal controls and addressing any weaknesses identified.

3. An effective human capital infrastructure, including a thorough and systematic assessment of the staffing requirements and critical skills needed to carry out MCC’s mission; a plan to acquire, develop, and retain talent that is aligned with the corporation’s strategic goals; and a performance management system linking compensation to employee contributions toward the achievement of MCC’s mission and goals.

We recommend that the Secretary of State, in her capacity as Chair of the MCC Board of Directors, ensure that the board considers and defines the scope of its responsibilities with respect to corporate governance and oversight of MCC and develops an overall plan or strategy, with related time frames, for carrying out these responsibilities.
In doing so, the board should consider, in addition to its statutory responsibilities, other corporate governance and oversight responsibilities commonly associated with sound and effective corporate governance practices, including oversight of the formulation and execution of corporate strategies, risk management and audit and assurance processes, and communication and coordination with corporate stakeholders.

MCC provided technical comments on a draft of this statement and agreed to take our recommendations under consideration; we addressed MCC’s comments in the text as appropriate. We also provided the Departments of State and Treasury, the U.S. Agency for International Development, and the Office of the U.S. Trade Representative an opportunity to review a draft of this statement for technical accuracy. State and USAID suggested no changes, and Treasury and USTR provided a few technical comments, which we incorporated as appropriate. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have.

For questions regarding this testimony, please call David Gootnick at (202) 512-4128 or Phillip Herr at (202) 512-8509. Other key contributors to this statement were Todd M. Anderson, Beverly Bendekgey, David Dornisch, Etana Finkler, Ernie Jackson, Debra Johnson, Joy Labez, Reid Lowe, David Merrill, John Reilly, Michael Rohrback, Mona Sehgal, and R.G. Steinman.

We reviewed MCC’s activities in its first 15 months of operations, specifically its (1) process for determining country eligibility for fiscal years 2004 and 2005, (2) progress in developing compacts, (3) coordination with key stakeholders, and (4) establishment of management structures and accountability mechanisms. To examine MCC’s country selection process, we analyzed candidate countries’ scores for the 16 quantitative indicators for fiscal years 2004 and 2005, as well as the selection criteria for the fiscal year 2004 Threshold Program.
We used these data to determine the characteristics of countries that met and did not meet the indicator criteria and to assess the extent to which MCC relied on country scores for eligibility determination. We also reviewed the source data for the indicator scores posted on MCC’s Web site to identify issues related to public access and to determine whether we could reproduce the country scores from the source data. Our review of the source data methodology, as well as the documents of other experts, allowed us to identify some limitations of the indicator criteria used in the country selection process. For these and other data we used in our analyses, we examined, as appropriate, the reliability of the data through interviews with MCC officials responsible for the data, document reviews, and reviews of data collection and methodology made available by the authors. We determined the data to be reliable for the purposes of this study. To describe MCC’s process for developing compacts, including plans for monitoring and evaluation, we reviewed MCC’s draft or finalized documents outlining compact proposal guidance, compact proposal assessment, and fiscal accountability elements. We reviewed eligible countries’ compact proposals and concept papers to identify proposed projects, funding, and institutional frameworks, among other things. To summarize the projects that countries have proposed and that MCC is currently assessing, we developed categories and conducted an analysis of countries’ proposal documents and MCC’s internal summaries. We also reviewed Madagascar’s draft compact to identify projects, funding, and framework for program implementation and fiscal accountability. We met with MCC officials to obtain updates on the compact development process. 
In addition, we interviewed representatives of nongovernmental organizations (NGOs) in Washington, D.C., and Honduras, as well as country officials in Honduras, to obtain their perspectives on MCC’s compact development process. To assess MCC’s coordination with key stakeholders, we reviewed interagency agreements to identify the types of formal assistance that MCC is seeking from U.S. agencies and the funding that MCC has set aside for this purpose. We also reviewed MCC documents to identify the organizations, including other donors, with which MCC has consulted. In addition, we interviewed MCC officials regarding their coordination with various stakeholders. We met with officials from the U.S. agencies on the MCC board (Departments of State and Treasury, USAID, and USTR) to assess the types of assistance that these agencies have provided to MCC. We also contacted six USAID missions in compact-eligible countries to obtain information on MCC coordination with U.S. agencies in the field. To assess MCC’s coordination with NGOs and other donors, we met with several NGOs, including InterAction, the World Wildlife Fund, and the Women’s Edge Coalition in Washington, D.C., and local NGOs in Honduras; we also met with officials from the Inter-American Development Bank in Washington, D.C., and Honduras, as well as officials from the World Bank, Central American Bank for Economic Integration, and several bilateral donors in Honduras. Finally, we attended several MCC public outreach meetings in Washington, D.C. To analyze MCC’s progress in establishing management structures and accountability mechanisms, we interviewed MCC senior management and reviewed available documents to identify the management and accountability plans that MCC had developed or was planning to develop. We reviewed audit reports by the USAID Office of the Inspector General to avoid duplication of efforts. 
We used relevant GAO reports and widely used standards and best practices, as applicable, to determine criteria for assessing MCC’s progress on management issues as well as to suggest best practices to MCC in relevant areas. Although our analysis included gaining an understanding of MCC’s actions related to establishing internal control, we did not evaluate the design and operating effectiveness of internal control at MCC. In January 2005, we conducted fieldwork in Honduras, one of four countries with which MCC had entered into negotiations at that time, to assess MCC’s procedures for conducting compact proposal due diligence and its coordination with U.S. agencies, local NGOs, Honduran government officials, and other donors. In conducting our fieldwork, we met with U.S. mission officials, Honduran government officials, donor representatives, and local NGOs. We also visited some existing USAID projects in the agricultural sector that were similar to projects that Honduras proposed. We provided a draft of this statement to MCC, and we have incorporated technical comments where appropriate. We also provided a draft of this statement to the Departments of State and Treasury, USAID, and USTR; State and USAID suggested no changes, and Treasury and USTR provided technical comments, which we addressed as appropriate. We conducted our work between April 2004 and April 2005, in accordance with generally accepted government auditing standards.

Candidate countries (continued): Lesotho, Madagascar, Malawi, Mali, Mauritania, Moldova, Mongolia, Morocco, Mozambique, Nepal, Nicaragua, Niger, Nigeria, Pakistan, Papua New Guinea, Paraguay, Philippines, Rwanda, São Tomé and Principe, Senegal, Sierra Leone, Solomon Islands, Sri Lanka, Swaziland, Tajikistan, Tanzania, Togo, Tonga*, Turkmenistan, Uganda, Ukraine, Vanuatu, Vietnam, Yemen Republic, Zambia. * Candidate for FY 2004 only. ** Prohibited under Foreign Assistance Act in FY 2004 but not in FY 2005.
Table 1 lists each of the indicators used in the MCA compact and threshold country selection process, along with its source and a brief description of the indicator and the methodology on which it is based. Since announcing the 16 quantitative indicators that it used to determine country eligibility for fiscal year 2004, MCC has made two changes for fiscal year 2005 and is exploring further changes for fiscal year 2006. To better capture the gender concerns specified in the Millennium Challenge Act, MCC substituted “girls’ primary education completion rate” for “primary education completion rate.” It also lowered the ceiling for the inflation rate indicator from 20 to 15 percent. In addition, to satisfy the act’s stipulation that MCC use objective and quantifiable indicators to evaluate a country’s commitment to economic policies that promote sustainable natural resource management, MCC held a public session on February 28, 2005, to launch the process of identifying such an indicator. MCC expects to complete the process by May 2005.

The MCC board used objective criteria (a rules-based methodology) and exercised discretion to select the threshold countries (see fig. 10). For fiscal year 2004, the MCC board relied on objective criteria in selecting as Threshold Program candidates countries that needed to improve in 2 or fewer of the 16 quantitative indicators used to determine MCA eligibility. (That is, by improving in two or fewer indicators, the country would score above the median on half of the indicators in each policy category, would score above the median on the corruption indicator, and would not score substantially below the median on any indicator.) MCC identified 15 countries that met its stated criteria and selected 7 countries to apply for Threshold Program assistance. Our analysis suggests that one of these seven countries did not meet MCC’s stated Threshold Program criteria.
The MCC board also exercised discretion in assessing whether countries that passed this screen also demonstrated a commitment to undertake policy reforms to improve in deficient indicators. For fiscal year 2005, the MCC board did not employ a rules-based methodology for selecting Threshold Program candidates. Instead, the board selected Threshold Program and MCA compact-eligible countries simultaneously. The board selected 12 countries to apply for Threshold Program assistance, including reconfirming the selection of 6 countries that also had qualified for the fiscal year 2004 Threshold Program.

Figure 11 illustrates key events and defining actions relating to MCC since the passage of the Millennium Challenge Act in January 2004.

MCC plans to be among the top donors in MCA compact-eligible countries. Figure 12 shows the total official development assistance net (average for 2002 and 2003) provided by the top three donors as well as the amount of total official development assistance net (average for 2002 and 2003) provided by all donors in each of the MCA compact-eligible countries. As the figure indicates, based on the average for the years 2002-2003, the United States was the top donor in Armenia, Bolivia, Georgia, and Honduras and was among the top five donors in nine additional countries.

MCC is coordinating its program and funding activities with various stakeholders to keep them informed and to utilize their expertise or resources at headquarters and in the field (see fig. 13). In addition, several U.S. agencies have taken steps to coordinate their activities with MCC.

Within each of the eight functional areas shown in figure 14, the actual staffing level as of April 2005 appears in the pie chart in each box and the planned staffing level by December 2005 appears in the right corner of each box.
In January 2004, Congress established the Millennium Challenge Corporation (MCC) to administer the Millennium Challenge Account. MCC's mission is to promote economic growth and reduce extreme poverty in developing countries. The act requires MCC to rely to the maximum extent possible on quantitative criteria in determining countries' eligibility for assistance. MCC will provide assistance primarily through compacts--agreements with country governments. MCC aims to be one of the top donors in countries with which it signs compacts. For fiscal years 2004 and 2005, Congress appropriated nearly $2.5 billion for the Millennium Challenge Corporation; for fiscal year 2006, the President is requesting $3 billion.

GAO was asked to monitor MCC's (1) process for determining country eligibility, (2) progress in developing compacts, (3) coordination with key stakeholders, and (4) establishment of management structures and accountability mechanisms.

For fiscal years 2004 and 2005, the MCC board used the quantitative criteria as well as judgment in determining 17 countries to be eligible for MCA compacts. Although MCC chose the indicators based in part on their public availability, our analysis showed that not all of the source data for the indicators were readily accessible. In addition, we found that reliance on the indicators carried certain inherent limitations, such as measurement uncertainty.

Between August 2004 and March 2005, MCC received compact proposals, concept papers, or both, from 16 eligible countries. It signed a compact with Madagascar in April 2005 and is negotiating compacts with four countries. MCC's 4-year compact with Madagascar for $110 million would make it the country's fifth largest donor. MCC is continuing to refine its compact development process. In addition, MCC has identified elements of program implementation and fiscal accountability that can be adapted to eligible countries' compact objectives and institutional capacities.
MCC is taking steps to coordinate with key stakeholders to use existing expertise and conduct outreach. The U.S. agencies on the MCC Board of Directors--USAID, the Departments of State and Treasury, and the Office of the U.S. Trade Representative--have provided resources and other assistance to MCC, and five U.S. agencies have agreed to provide technical assistance. Bilateral and multilateral donors are providing information and expertise. MCC is also consulting with nongovernmental organizations in the United States and abroad as part of its outreach activities.

MCC has made progress in developing key administrative infrastructures that support its mission and operations. MCC has also made progress in establishing corporatewide structures for accountability, governance, internal control, and human capital management, including establishing an audit capability through its Inspector General, adopting bylaws, providing ethics training to employees, and expanding its permanent full-time staff. However, MCC has not yet completed comprehensive plans, strategies, and related time frames for establishing these essential management structures and accountability mechanisms on a corporatewide basis.
The DRC is a vast, mineral-rich nation with an estimated population of about 75 million people and an area that is roughly one-quarter the size of the United States. Since its independence in 1960, the DRC has undergone political upheavals, including a civil war. Eastern DRC, in particular, has continued to be plagued by violence, including sexual violence against women and children, perpetrated by armed groups and some members of the Congolese national military. Some of the adjoining countries in the region have also experienced recent turmoil, which has led to flows of large numbers of refugees and internally displaced persons into the DRC. For example, the United Nations High Commissioner for Refugees (UNHCR) estimated that as of mid-2013 there were around 2.6 million internally displaced persons living in camps or with host families in the DRC.

Various industries, particularly manufacturing industries, use the four conflict minerals (tin, tantalum, tungsten, and gold) in a wide variety of products. For example, tin is used to solder metal pieces and is also found in food packaging, in steel coatings on automobile parts, and in some plastics. Most tantalum is used to manufacture tantalum capacitors, which enable energy storage in electronic products such as cell phones and computers, and to produce alloy additives, which can be found in turbines in jet engines. Tungsten is used in automobile manufacturing, drill bits and cutting tools, and other industrial manufacturing tools and is the primary component of filaments in light bulbs. Gold is used as a reserve and in jewelry and is used by the electronics industry. As we have previously reported, conflict minerals are mined in various locations around the world.

Over the past decade, Congress has focused on issues related to the DRC. In 2006, Congress passed the Democratic Republic of the Congo Relief, Security, and Democracy Promotion Act of 2006, stating that U.S.
policy is to engage with governments working for peace and security throughout the DRC and holding accountable any individuals, entities, and countries working to destabilize the country. In 2011, State and USAID developed the U.S. Strategy to Address the Linkages between Human Rights Abuses, Armed Groups, Mining of Conflict Minerals and Commercial Products (the strategy).

The SEC conflict minerals disclosure rule outlines a three-step process for companies to follow, as applicable, to comply with the rule: a company must (1) determine whether the rule applies to it; (2) conduct a reasonable country of origin inquiry (RCOI) concerning the origin of conflict minerals used; and (3) exercise due diligence, if appropriate, to determine the source and chain of custody of conflict minerals used. (App. II depicts SEC’s flowchart summary of the rule.)

Of the 1,321 companies that filed conflict minerals disclosures in 2014, the sample of filings that we reviewed indicates that almost all of the companies conducted an RCOI and a majority of them exercised due diligence, but most reported that they were unable to determine the country of origin of conflict minerals they had used in 2013. Company representatives we interviewed cited difficulties in obtaining information from suppliers. According to our analysis, an estimated

- 67 percent reported that they were unable to determine the country of origin of the conflict minerals they used;
- 4 percent reported that conflict minerals came from Covered Countries;
- 24 percent reported that conflict minerals did not originate in Covered Countries;
- 2 percent reported that conflict minerals came from scrap or recycled sources; and
- 3 percent did not provide a clear determination.
According to our estimate, nearly all of the companies that filed conflict minerals disclosures reported that they conducted an RCOI, with 96 percent of them reporting that they conducted a survey of their suppliers to try to obtain information about whether they used conflict minerals, the country of origin of those conflict minerals, and the processor of the conflict minerals. Based on some of the filings that we reviewed and interviews with company representatives, in general, companies used a supplier survey and industry template to conduct their RCOIs. A challenge noted by representatives of some companies we spoke with was that they received incomplete information from suppliers, limiting their ability to determine the source and chain of custody of the conflict minerals they used in 2013. We should note that in a July 2013 report, we found that a company’s supply chain can involve multiple tiers of suppliers. As a result, a request for information from a company could go through many suppliers, as figure 1 illustrates, delaying the communication of information to the company. For example, as we noted in our 2013 report, companies required to report under the rule could submit the inquiries to their first-tier suppliers. Those suppliers could either provide the reporting company with sufficient information or initiate the inquiry process up the supply chain, such as by distributing the inquiries to suppliers at the next tier (tier 2 suppliers). The tier 2 suppliers could inquire up the supply chain to additional suppliers, until the inquiries arrived at the smelter. Smelters could then provide the suppliers with information about the origin of the conflict minerals. Representatives of some companies that we spoke with told us that they were making efforts to address concerns about the lack of information on the country of origin of conflict minerals they had used.
Our analysis shows that the exercise of due diligence on the source and chain of custody of conflict minerals yielded little or no additional information, beyond the RCOI, regarding the country of origin of conflict minerals or whether the conflict minerals that companies used in 2013 in their products benefited or financed armed groups in the Covered Countries. The estimated 4 percent of the companies that determined that the necessary conflict minerals used in their products originated from Covered Countries could not determine whether such conflict minerals financed or benefited armed groups during the reporting period, even though they disclosed that they conducted due diligence on the source and chain of custody of conflict minerals they used.

State and USAID officials reported that they are implementing the U.S. conflict minerals strategy they submitted to Congress in 2011 through specific actions that address the strategy’s five key objectives. Both State and USAID officials in Washington and the region reiterated that the strategy and its five key objectives remain relevant. The following summarizes our findings about each objective:

Promote an Appropriate Role for Security Forces (Objective 1). Some of the reported actions being undertaken by the International Organization for Migration (IOM), a USAID implementing partner, are helping to lessen the involvement of the military and to increase the role of legitimate DRC government stakeholders in mining areas. For example, USAID reported that IOM has assisted with the planning and demilitarization of mine sites in eastern DRC through leading a multi-sector stakeholder process of mine validation to ensure that armed groups and criminal elements of the Congolese military are not active in eastern DRC mines.
As we previously reported, according to UN, U.S., and foreign officials and NGO representatives, some members of the Congolese national military units are consistently and directly involved in human rights abuses against the civilian population in eastern DRC and are involved in the exploitation of conflict minerals and other trades. Enhance Civilian Regulation of the DRC Minerals Trade (Objective 2). USAID reported that it is undertaking a number of actions, through implementing partners, to enhance civilian regulation and traceability of the DRC minerals trade. For example, USAID reported funding TetraTech, a technical services company, to (1) build the capacity for responsible minerals trade in the DRC, (2) strengthen the capacity of key actors in the conflict minerals supply chain, and (3) advance artisanal and mining rights. In addition, USAID indicated that it is funding IOM to support DRC infrastructure and regulatory reform. According to an IOM official we spoke with in the region, IOM also provides the DRC government with information on which mines should be suspended from the conflict-free supply chain based on safety and human rights violations. During our visit to the region, we met with a USAID official and representatives of local human rights organizations who told us that the implementation of traceability schemes is contributing to positive outcomes. For example, in some cases, according to USAID, local miners earn double the price for certified conflict-free minerals compared to non-certified illegal minerals, which is more than they would earn from smuggling (see app. III, figs. 1 and 2). Protect Artisanal Miners and Local Communities (Objective 3). State and USAID reported several programs through their implementing partners, aimed at protecting artisanal miners and local communities and providing alternative livelihoods. 
For example, State reported that it funded an implementing partner for anti-human-trafficking initiatives as well as to promote alternative livelihoods and improve workers’ rights in the artisanal mining sector. According to State, these efforts aimed to reduce the vulnerability of men and women in local communities. In addition, USAID has funded an implementing partner to promote community conflict mitigation and conflict minerals monitoring structures at local levels. According to USAID, artisanal mining provides survival incomes to Congolese throughout the country but it is particularly significant in eastern DRC, where roughly 500,000 people directly depend on artisanal mining for their income. These miners work under very difficult safety, health, and security conditions and almost always within an illicit environment. Moreover, as we observed during our visits to the mines in the region, artisanal mining is a physically demanding activity requiring the use of rudimentary techniques and little or no industrial capacity (see app. III, figs. 3 and 4). Strengthen Regional and International Efforts (Objective 4). U.S. diplomatic and capacity building initiatives have reportedly helped strengthen international efforts. For example, USAID said it is working with TetraTech to build the capacity of the International Conference on the Great Lakes Region (ICGLR), an intergovernmental organization. According to USAID, this effort supports the implementation and coordination of regional countries’ efforts to promote monitoring, certification, and traceability of mine sites. A TetraTech representative we met with in the region told us that TetraTech is also organizing workshops for educating and raising awareness about regional certification in ICGLR countries. According to officials we interviewed from the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO) and the ICGLR, as well as local officials, U.S. 
diplomacy has increased awareness about conflict minerals and improved coordination in the region. Some of these officials described State and USAID actions to strengthen regional and international efforts as the most effective in the region. Promote Due Diligence and Responsible Trade through Public Outreach (Objective 5). State and USAID reported engaging in various efforts to reach out to industry associations, NGOs, international organizations, and regional entities to help promote due diligence and responsible trade in conflict minerals. For example, State and USAID reported that they leveraged private sector interest to establish the Public-Private Alliance for Responsible Minerals Trade to support supply chain solutions to conflict minerals challenges in the region. The alliance includes State, USAID, and representatives from U.S. end-user companies, industry associations, NGOs, and ICGLR, among others. In addition, State is engaged with the Conflict-Free Sourcing Initiative (CFSI) and State and USAID both participate in the biannual Organization for Economic Co-operation and Development, UN Group of Experts (UNGOE), and ICGLR forums. According to State and USAID officials, these efforts promote continued engagement with industry officials and civil society groups and encourage due diligence and strengthening of conflict-free supply chains. A USAID official in the region told us that teams of private sector executives, hosted by State and USAID officials, have visited eastern DRC and Rwanda mining sites on several occasions, reinforcing the executives’ commitment to source minerals responsibly. In addition, a State official noted that some private companies have been active in providing feedback on certification and traceability mechanisms. 
Although State and USAID officials provided some examples of results associated with their actions, the agencies face difficult operating conditions that complicate efforts to address the connection between human rights abuses, armed groups, and the mining of conflict minerals. We have described some of these challenges in our previous reports but, as we observed during our fall 2014 visit to the region, numerous challenges continue to exist. First, the mining areas in eastern DRC continue to be plagued by insecurity because of the presence and activities of illegal armed groups and some corrupt members of the national military. In 2010, we reported extensively on the presence of illegal armed groups, such as the Democratic Forces for the Liberation of Rwanda (Forces Démocratiques de Libération du Rwanda), and some members of the Congolese military and the various ways in which they were involved in the exploitation of the conflict minerals sector in eastern DRC. In 2013, the Peace and Security Cooperation Framework signed by 11 regional countries noted that eastern DRC has continued to suffer from recurring cycles of conflict and persistent violence. Although U.S. agency and Congolese officials informed us during our recent fieldwork in the region that a large number of mines had become free of armed groups (referred to as green mines), MONUSCO officials we met with in the DRC also told us that armed groups and some members of the Congolese military were still active in other mining areas. Specifically, MONUSCO officials described two fundamental ways in which armed groups continued to be involved in conflict minerals activities: directly, by threatening and perpetrating violence against miners to confiscate minerals from them; and indirectly, by setting up checkpoints on trade routes to illegally tax miners and traders. As we noted in our 2010 report, U.S. 
agency and UN officials and others believe that the minerals trade in the DRC cannot be effectively monitored, regulated, or controlled as long as armed groups and some members of the Congolese national military continue to commit human rights violations and exploit the local population at will. As we reported in 2010, U.S. government officials and others indicated that weak governance and lack of state authority in eastern DRC constitute a significant challenge. As we noted then, according to UN officials, if Congolese military units are withdrawn from mine sites, civilian DRC officials will need to monitor, regulate, and control the minerals trade. We also noted that effective oversight of the minerals sector would not occur if civilian officials in eastern DRC continued to be underpaid or not paid at all, as such conditions easily lead to corruption and lack of necessary skills to perform their duties. Evidence shows that this situation has not changed much. U.S. agencies and an implementing partner, as well as some Congolese officials, told us that there are not enough trained civilians to effectively monitor and take control of the mining sector. ICGLR officials we met with highlighted the importance of a regional approach to addressing conflict minerals and indicated that governments’ capacity for and interest in participating in regional certification schemes varies substantially, making it difficult to implement credible, common standards. Corruption continues to be a challenge in the mining sector. For example, a member of the UN Group of Experts told us that smuggling remains prolific and that instances of fraud call into question the integrity of traceability mechanisms. This official stated that tags used to certify minerals as conflict-free are easily obtained and sometimes sold illegally in the black market. 
According to USAID officials, USAID is working to introduce a pilot traceability system to increase transparency, accountability, and competition in the legal artisanal mining sector. According to U.S. government officials and local government and civil society representatives in the region whom we met with, lack of state authority bolsters armed group activity and precludes public trust in the government. Poor infrastructure, including poorly maintained or nonexistent roads, makes it difficult for mining police and other authorities to travel in the region and monitor mines for illegal armed group activity. In our 2010 report, we noted that the minerals trade cannot be effectively monitored, regulated, and controlled unless civilian DRC officials, representatives from international organizations, and others can readily access mining sites to check on the enforcement of laws and regulations and to ensure visibility and transparency at the sites. As shown by the photograph in app. III, fig. 5, during our recent visit to the region, poor road conditions made travel to the mines very challenging. Chairman Huizenga, Ranking Member Moore, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Kimberly Gianopoulos, Director, International Affairs and Trade, at (202) 512-8612 or GianopoulosK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Godwin Agbara (Assistant Director), Marc Castellano (Analyst-in-Charge), Jeffrey Baldwin-Bott, Debbie Chung, Stephanie Heiken, Andrew Kurtzman, Grace Lui, and Jasmine Senior. SEC Conflict Minerals Rule: Initial Disclosures Indicate Most Companies Were Unable to Determine the Source of Their Conflict Minerals. GAO-15-561. 
Washington, D.C.: August 18, 2015. Conflict Minerals: Stakeholder Options for Responsible Sourcing Are Expanding, but More Information on Smelters Is Needed. GAO-14-575. Washington, D.C.: June 26, 2014. SEC Conflict Minerals Rule: Information on Responsible Sourcing and Companies Affected. GAO-13-689. Washington, D.C.: July 18, 2013. Conflict Minerals Disclosure Rule: SEC’s Actions and Stakeholder-Developed Initiatives. GAO-12-763. Washington, D.C.: July 16, 2012. The Democratic Republic of Congo: Information on the Rate of Sexual Violence in War-Torn Eastern DRC and Adjoining Countries. GAO-11-702. Washington, D.C.: July 13, 2011. The Democratic Republic of the Congo: U.S. Agencies Should Take Further Action to Contribute to the Effective Regulation and Control of the Minerals Trade in Eastern Democratic Republic of the Congo. GAO-10-1030. Washington, D.C.: September 30, 2010. We took the following photographs in the Democratic Republic of the Congo, Burundi, and Rwanda during fieldwork for our August 2015 report.
This testimony summarizes the information contained in GAO's August 2015 report, entitled SEC Conflict Minerals Rule: Initial Disclosures Indicate Most Companies Were Unable to Determine the Source of Their Conflict Minerals (GAO-15-561). According to a generalizable sample GAO reviewed, company disclosures filed with the Securities and Exchange Commission (SEC) for the first time in 2014 in response to the SEC conflict minerals disclosure rule indicated that most companies were unable to determine the source of their conflict minerals. Companies that filed disclosures used one or more of the four “conflict minerals”—tantalum, tin, tungsten, and gold—determined by the Secretary of State to be financing conflict in the Democratic Republic of the Congo (DRC) or adjoining countries. Most companies were based in the United States (87 percent). Almost all of the companies (99 percent) reported performing country-of-origin inquiries for conflict minerals used. Companies GAO spoke to cited difficulty obtaining necessary information from suppliers because of delays and other challenges in communication. Most of the companies (94 percent) reported exercising due diligence on the source and chain of custody of conflict minerals used. However, most (67 percent) were unable to determine whether those minerals came from the DRC or adjoining countries (Covered Countries), and none could determine whether the minerals financed or benefited armed groups in those countries. Companies that disclosed that conflict minerals in their products came from Covered Countries (4 percent) indicated that they are or will be taking action to address the risks associated with the use and source of conflict minerals in their supply chains. For example, one company indicated that it would notify suppliers that it intends to cease doing business with suppliers who continue to source conflict minerals from smelters that are not certified as conflict-free. 
Covered Countries: Angola, Burundi, Central African Republic, the Democratic Republic of the Congo, the Republic of the Congo, Rwanda, South Sudan, Tanzania, Uganda, and Zambia. Department of State (State) and U.S. Agency for International Development (USAID) officials reported taking actions to implement the U.S. conflict minerals strategy, but a difficult operating environment complicates this implementation. The agencies reported supporting a range of initiatives including validation of conflict-free mine sites and strengthening traceability mechanisms that minimize the risk that minerals that have been exploited by illegal armed groups will enter the supply chain. As a result, according to the agencies, 140 mine sites have been validated and competition within conflict-free traceability systems has benefited artisanal miners and exporters. Implementation of the U.S. conflict minerals strategy faces multiple obstacles outside the control of the U.S. government. For example, eastern DRC is plagued by insecurity because of the presence of illegal armed groups and some corrupt members of the national military, weak governance, and poor infrastructure.
The draft proposed “Working for America Act” is intended to ensure that agencies are equipped to better manage, develop, and reward employees to better serve the American people. Its purpose is to establish a federal human capital system under which employees have clear performance goals and opportunities for professional growth; managers who help them succeed; and pay increases based on performance rather than the passage of time. In addition, any new flexibilities are to be exercised in accordance with the merit system principles, related core values, and civil service protections, such as those against discrimination, political influence, and personal favoritism. Today I will provide observations on three central areas of the draft proposal as we understand it: pay and performance management; OPM’s new responsibilities to implement the proposed pay reform; and labor management relations and adverse actions and appeals. As I stated earlier, GAO strongly supports the need to expand pay reform in the federal government and believes that implementing more market-based and performance-oriented pay systems is both doable and desirable. The federal government’s current pay system is weighted toward rewarding length of service rather than individual performance and contributions, automatically providing across-the-board annual pay increases even to poor performers. It also compensates employees living in various localities without adequately considering the local labor market rates applicable to the diverse types of occupations in the area. Importantly, the draft proposal, as we understand it, incorporates many of the key practices of more market-based and performance-oriented pay systems and requires that OPM certify that each agency’s pay for performance system meets prescribed criteria. Going forward, OPM should define in regulation what fact-based and data-driven analyses agencies will need to provide to OPM to receive certification. 
Clearly, a competitive compensation system can help organizations attract and retain a quality workforce. To begin to develop such a system, organizations assess the skills and knowledge they need; compare compensation against other public, private, or nonprofit entities competing for the same talent in a given locality; and classify positions along various levels of responsibility. In addition, organizations generally structure their competitive compensation systems to separate base salary from bonuses and other incentives and awards. Under the draft proposal, OPM is to design a new core classification and pay system and agencies, in coordination with OPM, are to establish performance appraisal systems to promote high performance. Specifically, the General Schedule is to be repealed and to replace it, OPM is to establish pay bands for occupational groups based on factors such as mission, competencies, or relevant labor market features. For each pay band, OPM is to establish ranges of basic pay rates that apply in all locations. There are to be market-oriented pay adjustments. The governmentwide national market adjustment is to vary by occupational group and band with the flexibility to make additional local market adjustments. Going forward, more information is needed on what compensation studies are to be conducted in setting these market-based pay rates. Effective performance management systems can be a vital tool for aligning the organization with desired results and creating a “line of sight” showing how team, unit, and individual performance can contribute to overall organizational results. 
Such systems work to achieve three key objectives: (1) they strive to provide candid and constructive feedback to help individuals maximize their contribution and potential in understanding and realizing the goals and objectives of the organization, (2) they seek to provide management with the objective and fact-based information it needs to reward top performers, and (3) they provide the necessary information and documentation to deal with poor performers. The draft proposal incorporates many of the key practices that we have reported have helped agencies implement effective performance management systems. These practices include: Linking Organizational Goals to Individual Performance. Under the draft proposal, agencies are to set performance expectations that support and align with the agencies’ mission and strategic goals, organizational program and policy objectives, annual performance plans, results, and other measures of performance. Further, agencies are to communicate the performance expectations in writing at the beginning of the appraisal period. Making Meaningful Distinctions in Performance. Supervisors and managers are to be held accountable for making meaningful distinctions among employees based on performance, fostering and rewarding excellent performance, and addressing poor performance, among other things. Agencies are not to impose a forced distribution of performance ratings in terms of fixed numeric or percentage limitations on any summary rating levels. Performance appraisal systems are to include at least two summary rating levels, essentially a “pass/fail” system, for employees in an “Entry/Developmental” band and at least three summary rating levels for other employee groups. Pass/fail systems by definition will not provide meaningful distinctions in performance ratings. 
In addition, while a three-level system might be workable, using four or five summary rating levels is preferable since it naturally allows for greater performance rating and pay differentiation. Moreover, this approach is consistent with the new governmentwide performance-based pay system for the members of the Senior Executive Service (SES), which requires agencies to use at least four summary rating levels to provide a clear and direct link between SES performance and pay as well as to make meaningful distinctions based on relative performance. Cascading this approach to other levels of employees can help agencies recognize and reward employee contributions and achieve the highest levels of individual performance. Linking Pay to Performance. Employees must receive at least a “fully successful” rating to receive any pay increase. Those employees who receive less than a fully successful rating are not to receive an increase, including the national and local market adjustments discussed above. Performance pay increases for employees are to be allocated by the “performance shares” of a pay pool. Agencies are to determine the value of one performance share, expressed as a percentage of the employee’s basic pay or as a fixed dollar amount. There are to be a set number of performance shares for each pay pool so that the employees with higher performance ratings are to receive a greater number of shares and thus, a greater payout. At the agency’s discretion, any portion of the employee’s performance pay increase not converted to a basic pay increase may be paid out as a lump-sum payment. Providing Adequate Safeguards to Ensure Fairness and Guard Against Abuse. Agencies are to incorporate effective safeguards to ensure that the management of systems is fair and equitable and based on employee performance in order to receive certification of their pay for performance systems. 
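The "performance shares" mechanics described above can be made concrete with a minimal, illustrative sketch: a pay pool's budget is divided by the total number of shares earned, and each employee's payout is their share count times that value. The employee labels, ratings, and dollar figures below are hypothetical assumptions for illustration only; they are not drawn from the draft proposal, which leaves share values and pool sizes to agency discretion.

```python
# Illustrative sketch of a pay-pool "performance shares" payout, assuming the
# value of one share is derived from the pool budget. Names and numbers are
# hypothetical, not from the draft proposal.

def pool_payouts(pool_budget, share_counts):
    """Split a pay pool's budget across employees in proportion to the
    number of performance shares earned from their ratings."""
    total_shares = sum(share_counts.values())
    share_value = pool_budget / total_shares  # dollar value of one share
    return {emp: n * share_value for emp, n in share_counts.items()}

# Higher ratings earn more shares and thus a larger payout; employees rated
# below "fully successful" earn zero shares and receive no increase.
shares = {
    "A (outstanding)": 3,
    "B (exceeds expectations)": 2,
    "C (fully successful)": 1,
    "D (below fully successful)": 0,
}
payouts = pool_payouts(60_000, shares)
```

Under this sketch, a 60,000-dollar pool with six total shares yields 10,000 dollars per share, so the outstanding performer receives three times the fully successful performer's increase, mirroring the proposal's intent that higher ratings translate into a greater number of shares and a greater payout.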
We have found that a common concern that employees express about any pay for performance system is whether their supervisors have the ability and willingness to assess employees’ performance fairly. Using safeguards, such as having independent reasonableness reviews of performance management decisions before such decisions are final, can help to allay these concerns and build a fair and credible system. This has been our approach at GAO and we have found it works extremely well. In addition, agencies need to assure reasonable transparency and provide appropriate accountability mechanisms in connection with the results of the performance management process. This can include publishing internally the overall results of performance management and individual pay decisions while protecting individual confidentiality. For example, we found that several of OPM’s demonstration projects publish information for employees on internal Web sites that include the overall results of performance appraisal and pay decisions, such as the average performance rating, the average pay increase, and the average award for the organization and for each individual unit. GAO is also publishing aggregate data for all of our pay, promotion, and other important agency-wide human capital actions. As I noted, before implementing any human capital reforms, executive branch agencies should follow a phased approach that meets a “show me” test. That is, each agency should be authorized to implement a reform only after it has shown it has met certain requirements, including an assessment of its institutional infrastructure and an independent certification by OPM of the existence of this infrastructure. 
This institutional infrastructure includes (1) a strategic human capital planning process linked to the agency’s overall strategic plan; (2) capabilities to design and implement a new human capital system effectively; (3) a modern, effective, credible, and validated performance management system that provides a clear linkage between institutional, unit, and individual performance-oriented outcomes, and results in meaningful distinctions in ratings; and (4) adequate internal and external safeguards to ensure the fair, effective, and nondiscriminatory implementation of the system. A positive feature of the draft proposal is that agencies are to show that their pay for performance systems have met prescribed criteria in order to receive certification from OPM to implement their new systems. Among these criteria are having the means for ensuring employee involvement in the design and implementation of the pay for performance system; adequate training and retraining for supervisors, managers, and employees in the implementation and operation of the pay for performance system; a process for ensuring periodic performance feedback and dialogue between supervisors, managers, and employees throughout the appraisal period; and the means for ensuring that adequate agency resources are allocated for the design, implementation, and administration of the pay for performance system. Further, OPM may review an agency’s pay for performance systems periodically to assess whether they continue to meet the certification criteria. If they do not, OPM may rescind the agency’s certification and direct the agency to take actions to implement an appropriate system, which the agency must follow. Going forward, I believe that OPM should define in regulation what it will take in terms of fact-based and data-driven analyses for agencies to demonstrate that they are ready to receive this certification. 
Clearly, the President’s Management Agenda, and its standards for the strategic management of human capital, can inform the certification process. Also, as an example of the analyses that have been required, OPM has outlined in regulations for the SES performance-based pay system the necessary data and information agencies need to provide in order to receive certification and thus raise the pay cap and total compensation limit for their senior executives. Specifically, agencies must provide, among other things, the data on senior executives’ performance ratings, pay, and awards for the last 2 years to demonstrate that their systems, as designed and applied, make meaningful distinctions based on relative performance. Under the SES regulations, agencies that cannot provide these data can request provisional certification of their systems. In our view such provisional certifications should not be an option under any broad-based classification and compensation reform proposal. OPM should play a key leadership and oversight role in helping individual agencies and the government as a whole work towards overcoming a broad range of human capital challenges. Our understanding of the Administration’s draft proposal is that OPM’s leadership and oversight role is to expand in several areas, such as establishing a more market-based and performance-oriented pay system governmentwide and implementing a new core classification system. 
At the request of Chairman Collins and Ranking Member Lieberman, Senate Committee on Homeland Security and Governmental Affairs, along with Chairman Voinovich and Ranking Member Akaka, Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia, and to assist Congress as it considers OPM’s additional responsibilities as outlined in this draft proposal, we are assessing OPM’s current capacity to lead a broad-based governmentwide human capital reform effort, including providing appropriate assistance to federal agencies as they revise their human capital systems and conducting effective monitoring of any related reform implementation efforts. OPM is in the process of its own transformation—from being a rulemaker, enforcer, and independent agent to being more of a consultant, toolmaker, and strategic partner in leading and supporting executive agencies’ human capital reform efforts and management systems. However, it is unclear whether OPM has the current capacity to discharge its new responsibilities. Specifically, OPM reported in its June 2001 workforce analysis that 4.2 percent of its employees (about 123 per year), on average, were projected to retire each year over the next 10 years, and the largest percentage of projected retirements, about 8 percent each year, would come from members of its SES. OPM’s expected retirement rate for its workforce overall is more than the annual retirement rate of 2 percent governmentwide that we identified in a report issued in 2001. Our prior work has shown that when required to implement new legislation, OPM could have done more to accomplish its leadership and oversight mission in a decentralized human capital environment. For example, Congress passed a law in 1990 authorizing agencies to repay, at their discretion, their employees’ student loans as a means to recruit and retain a talented workforce. In 2001, OPM issued final regulations to implement the program. 
The regulations were subsequently changed in 2004 to reflect legislative amendments that increased the ceiling on annual and total loan repayments. In our review of the federal student loan repayment program, we found that while human capital officials recognized OPM’s efforts, they felt they could use more assistance on the technical aspects of operating the program, more coordination in sharing lessons learned in implementing it, and help consolidating some of the program processes. Similarly, we found that while OPM had several initiatives underway to assist federal agencies in using personnel flexibilities currently available to them in managing their workforces, OPM could more fully meet its leadership role to assist agencies in identifying, developing, and applying human capital flexibilities across the federal government. In addition, we reported that in its ongoing internal review of its existing regulations and guidance, OPM could more directly focus on determining the continued relevance and utility of its regulations and guidance by asking whether they provide the flexibility that agencies need in managing their workforces while also incorporating protections for employees. The Administration’s draft proposal would amend some provisions of Title 5 of the U.S. Code covering labor management relations and adverse actions and appeals. Selected federal agencies have been implementing more market-based and performance-oriented pay for some time—some organizations for well over a decade—and thus they have built a body of experience and knowledge about what works well and what does not that allows the sharing of lessons learned. On the other hand, the federal government has had far less experience in changes regarding labor management relations and adverse actions and appeals. 
Congress granted DHS and DOD related new authorities in these areas and may wish to monitor the implementation of those authorities, including lessons learned, before moving forward for the rest of the federal government. Discussion of selected proposed amendments follows. Under Title 5, agencies now have a duty to bargain over conditions of employment, other than those covered by a federal statute; a governmentwide rule or regulation; or an agency rule or regulation for which the agency can demonstrate a compelling need. Under the draft proposal, agencies are to be obligated to bargain with employees only if the effect of the change in policy on the bargaining unit (or the affected part of the unit) is “foreseeable, substantial, and significant in terms of impact and duration.” In addition, an agency now has the right to take any action to carry out the agency’s mission in an emergency, without a duty to bargain. However, what constitutes an emergency can be defined through a collective bargaining agreement. Under the draft proposal, an agency is to have the right to take any action to prepare for, practice for, or prevent an emergency, or to carry out the agency’s mission in an emergency. The draft proposal also adds a new definition of “emergency” as requiring immediate action to carry out critical agency functions, including situations involving an (1) adverse effect on agency resources, (2) increase in workload because of unforeseeable events, (3) externally imposed change in mission requirements, or (4) externally imposed budget exigency. By broadly defining “emergency” without time limits and adding to management’s right an explicit authority to take action to prepare for, practice for, or prevent any emergency, the proposed change as we understand it, could serve to significantly restrict the scope of issues subject to collective bargaining. 
Under Title 5, conduct-based adverse actions are reviewed by the Merit Systems Protection Board (MSPB) under the preponderance of the evidence standard (there is more evidence than not to support the action). Performance-based adverse actions are reviewed under the lower standard of substantial evidence (evidence that a reasonable person would find sufficient to support a conclusion), but agencies must first give employees a reasonable opportunity to demonstrate acceptable performance under a performance improvement plan. Under the draft proposal, MSPB is to apply a single standard of proof—the higher standard of preponderance of the evidence—to review adverse actions taken for either performance or conduct. On the other hand, while due process features, such as advance written notice of a proposed adverse action, are still required, performance improvement plans are no longer required. As we understand the draft proposal, applying the same standard to both types of adverse actions could add more consistency to the appeals process. Also under Title 5, MSPB now reviews penalties during the course of a disciplinary action against an employee to ensure that the agency considered relevant prescribed factors and exercised management discretion within tolerable limits of reasonableness. MSPB may mitigate or modify a penalty if the agency did not consider prescribed factors. Under the draft proposal, MSPB will be able to mitigate a penalty only if it is totally unwarranted in light of all pertinent circumstances. This change would restrict MSPB’s ability to mitigate penalties. To help advance the discussion concerning how governmentwide human capital reform should proceed, GAO and the National Commission on the Public Service Implementation Initiative co-hosted a forum on whether there should be a governmentwide framework for human capital reform and, if so, what this framework should include. 
While there was widespread recognition among the forum participants that a one-size-fits-all approach to human capital management is not appropriate for the challenges and demands government faces, there was equally broad agreement that there should be a governmentwide framework to guide human capital reform. Further, a governmentwide framework should balance the need for consistency across the federal government with the desire for flexibility so that individual agencies can tailor human capital systems to best meet their needs. Striking this balance would not be easy to achieve, but it is necessary to maintain a governmentwide system that is responsive enough to adapt to agencies’ diverse missions, cultures, and workforces. While there were divergent views among the forum participants, there was general agreement on a set of principles, criteria, and processes that would serve as a starting point for further discussion in developing a governmentwide framework for advancing human capital reform, as shown in figure 1. We believe that these principles, criteria, and processes provide an effective framework for Congress and other decision makers to use as they consider and craft governmentwide civil service reform proposals. Moving forward with human capital reform, in the short term, Congress should consider selected and targeted actions to continue to accelerate the momentum to make strategic human capital management the centerpiece of the government’s overall transformation effort. One option may be to provide agencies one-time, targeted investments that are not built into agencies’ bases for future year budget requests. For example, Congress established the Human Capital Performance Fund to reward agencies’ highest performing and most valuable employees. However, the draft proposal would repeal the Human Capital Performance Fund. According to OPM, the provision was never implemented due to a lack of sufficient funding. 
We believe that a central fund has merit and can help agencies build the infrastructure that is necessary in order to implement a more market-based and performance-oriented pay system. To be eligible, agencies would submit plans for approval by OPM that incorporated features such as a link between pay for performance and the agency’s strategic plan, employee involvement, ongoing performance feedback, and effective safeguards to ensure fair management of the system. In the first year of implementation, up to 10 percent of the amount appropriated would be available to train those involved in making meaningful distinctions in performance. These features are similar to those cited in the draft proposal as the basis for OPM’s certification for agencies to implement their new pay and performance management systems. In addition, as agencies develop their pay for performance systems, they will need to consider the appropriate mix between pay awarded as base pay increases versus one-time cash increases, while still maintaining fiscally sustainable compensation systems that reward performance. A key question to consider is how the government can make an increasing percentage of federal compensation dependent on achieving individual and organizational results by, for example, providing more compensation as one-time cash bonuses rather than as permanent salary increases. However, agencies’ use of cash bonuses or other monetary incentives has an impact on employees’ retirement calculations since they are not included in calculating retirement benefits. Congress should consider potential legislative changes to allow cash bonuses that would otherwise be included as base pay increases to be calculated toward retirement and thrift savings benefits by specifically factoring bonuses into the employee’s basic pay for purposes of calculating the employee’s “high-3” for retirement benefits and making contributions to the thrift savings plan. 
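The “high-3” referred to above is the average of an employee’s highest 3 consecutive years of basic pay. A minimal sketch of the arithmetic at issue, using hypothetical salary and bonus figures and simplified to whole-year windows, illustrates why excluding one-time cash bonuses from basic pay lowers the retirement calculation relative to the change Congress is asked to consider:

```python
def high_3_average(annual_basic_pay):
    """Average of the highest 3 consecutive years of basic pay
    (simplified to whole-year granularity for illustration)."""
    best = 0.0
    for i in range(len(annual_basic_pay) - 2):
        best = max(best, sum(annual_basic_pay[i:i + 3]) / 3)
    return best

# Hypothetical salary history and one-time cash awards.
salaries = [80_000, 82_000, 84_000, 86_000]
bonuses = [2_000, 2_000, 3_000, 3_000]

# Today: bonuses are not part of basic pay, so they do not count.
current = high_3_average(salaries)  # → 84000.0

# Under the potential change: bonuses factored into basic pay.
proposed = high_3_average([s + b for s, b in zip(salaries, bonuses)])  # ≈ 86666.67
```

The same gap carries through to thrift savings plan contributions, which are likewise computed on basic pay.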
As we continue to move forward with broader human capital reforms, they should be guided by a framework consisting of principles, criteria, and processes. While the reforms to date have recognized that the “one-size-fits-all” approach is not appropriate to all agencies’ demands, challenges, and missions, a reasonable degree of consistency across the government is still desirable. Striking this balance is not easy to achieve, but it is necessary to maximize the federal government’s performance within available resources and assure accountability for the benefit of the American people. Chairman Porter, Representative Davis, and Members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions that you may have. For further information regarding this statement, please contact Lisa Shames, Acting Director, Strategic Issues, at (202) 512-6806 or shamesl@gao.gov. Individuals making key contributions to this statement include Anne Inserra, Carole Cimitile, Janice Latimer, Belva Martin, Jeffrey McDermott, and Katherine H. Walker. The federal government must have the capacity to plan more strategically, react more expeditiously, and focus on achieving results. Critical to the success of this transformation are the federal government’s people—its human capital. Yet, in many cases the federal government has not transformed how it classifies, compensates, develops, and motivates its employees to achieve maximum results within available resources and existing authorities. A key question is how to update the federal government’s compensation system to be market-based and more performance-oriented. GAO strongly supports the need to expand pay reform in the federal government. While implementing market-based and more performance-oriented pay systems is both doable and desirable, organizations’ experiences in designing and managing their pay systems underscored three key themes that can guide federal agencies’ efforts. 
The shift to market-based and more performance-oriented pay must be part of a broader strategy of change management and performance improvement initiatives. Market-based and more performance-oriented pay cannot be simply overlaid on most organizations’ existing performance management systems. Rather, as a precondition to effective pay reform, individual expectations must be clearly aligned with organizational results, communication on individual contributions to annual goals must be ongoing and two-way, meaningful distinctions in employee performance must be made, and cultural changes must be undertaken. To further the discussion of federal pay reform, GAO partnered with key human capital stakeholders to convene a symposium in March 2005 to discuss public, private, and nonprofit organizations’ successes and challenges in designing and managing market-based and more performance-oriented pay systems. This testimony presents the strategies that organizations considered in designing and managing market-based and more performance-oriented pay systems and describes how they are implementing them. Training and developing new and current staff to fill new roles and work in different ways will play a crucial part in building the capacity of the organizations. Organizations presenting at our symposium considered the following strategies in designing and managing their pay systems. 1. Focus on a set of values and objectives to guide the pay system. 2. Examine the value of employees’ total compensation to remain competitive in the market. 3. Build in safeguards to enhance the transparency and help ensure the fairness of pay decisions. 4. Devolve decision making on pay to appropriate levels. 5. Provide training on leadership, management, and interpersonal skills to facilitate effective communication. 6. Build consensus to gain ownership and acceptance for pay reforms. 7. Monitor and refine the implementation of the pay system. 
www.gao.gov/cgi-bin/getrpt?GAO-05-1048T. To view the full product, including the scope and methodology, click on the link above. For more information, contact Lisa Shames at (202) 512-6806 or shamesl@gao.gov. Moving forward, it is possible to enact broad-based reforms that would enable agencies to move to market-based and more performance-oriented pay systems. However, before implementing reform, each executive branch agency should demonstrate and the Office of Personnel Management should certify that the agency has the institutional infrastructure in place to help ensure that the pay reform is effectively and equitably implemented. At a minimum, this infrastructure includes a modern, effective, credible, and validated performance management system in place that provides a clear linkage between institutional, unit, and individual performance-oriented outcomes; results in meaningful distinctions in ratings; and incorporates adequate safeguards. Critical to the success of the federal government’s transformation are its people—human capital. Yet the government has not transformed, in many cases, how it classifies, compensates, develops, and motivates its employees to achieve maximum results within available resources and existing authorities. One of the questions being addressed as the federal government transforms is how to update its compensation system to be more market based and performance oriented. To further the discussion of federal pay reform, GAO, the U.S. Office of Personnel Management, the U.S. Merit Systems Protection Board, the National Academy of Public Administration, and the Partnership for Public Service convened a symposium on March 9, 2005, to discuss organizations’ experiences with market-based and more performance-oriented pay systems. Representatives from public, private, and nonprofit organizations made presentations on the successes and challenges they experienced in designing and managing their market-based and more performance-oriented pay systems. A cross section of human capital stakeholders was invited to further explore these successes and challenges and engage in open discussion. While participants were asked to review the overall substance and context of the draft summary, GAO did not seek consensus on the key themes and supporting examples. While implementing market-based and more performance-oriented pay systems is both doable and desirable, organizations’ experiences show that the shift to market-based and more performance-oriented pay must be part of a broader strategy of change management and performance improvement initiatives. GAO identified the following key themes that highlight the leadership and management strategies these organizations collectively considered in designing and managing market-based and more performance-oriented pay systems. 1. Focus on a set of values and objectives to guide the pay system. Values represent an organization’s beliefs and boundaries, and objectives articulate the strategy to implement the system. 2. Examine the value of employees’ total compensation to remain competitive in the market. Organizations consider a mix of base pay plus other monetary incentives, benefits, and deferred compensation, such as retirement pay, as part of a competitive compensation system. 3. Build in safeguards to enhance the transparency and ensure the fairness of pay decisions. Safeguards are the precondition to linking pay systems with employee knowledge, skills, and contributions to results. 4. Devolve decision making on pay to appropriate levels. When devolving such decision making, overall core processes help ensure reasonable consistency in how the system is implemented. 5. Provide training on leadership, management, and interpersonal skills to facilitate effective communication. Such skills as setting expectations, linking individual performance to organizational results, and giving and receiving feedback need renewed emphasis to make such systems succeed. 6. Build consensus to gain ownership and acceptance for pay reforms. Employee and stakeholder involvement needs to be meaningful and not pro forma. 7. Monitor and refine the implementation of the pay system. While changes are usually inevitable, listening to employee views and using metrics helps identify and correct problems over time. www.gao.gov/cgi-bin/getrpt?GAO-05-832SP. To view the full product, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov. These organizations found that the key challenge with implementing market-based and more performance-oriented pay is changing the culture. To begin to make this change, organizations need to build up their basic management capacity at every level of the organization. Transitioning to these pay systems is a huge undertaking and will require constant monitoring and refining in order to implement and sustain the reforms. There is a growing understanding that the federal government needs to fundamentally rethink its current approach to pay and to better link pay to individual and organizational performance. Federal agencies have been experimenting with pay for performance through the Office of Personnel Management’s (OPM) personnel demonstration projects. The demonstration projects took a variety of approaches to designing and implementing their pay for performance systems to meet the unique needs of their cultures and organizational structures, as shown in the table below.

Demonstration Project Approaches to Implementing Pay for Performance

Using competencies to evaluate employee performance. High-performing organizations use validated core competencies as a key part of evaluating individual contributions to organizational results. 
To this end, AcqDemo and NRL use core competencies for all positions. Other demonstration projects, such as NIST, DOC, and China Lake, use competencies based on the individual employee’s position.

Translating employee performance ratings into pay increases and awards. Some projects, such as China Lake and NAVSEA’s Newport division, established predetermined pay increases, awards, or both depending on a given performance rating, while others, such as DOC and NIST, delegated the flexibility to individual pay pools to determine how ratings would translate into performance pay increases, awards, or both. The demonstration projects made some distinctions among employees’ performance.

Considering current salary in making performance-based pay decisions. Several of the demonstration projects, such as AcqDemo and NRL, consider an employee’s current salary when making performance pay increases and award decisions to make a better match between an employee’s compensation and contribution to the organization.

Managing costs of the pay for performance system. According to officials, salaries, training, and automation and data systems were the major cost drivers of implementing their pay for performance systems. The demonstration projects used a number of approaches to manage the costs.

Providing information to employees about the results of performance appraisal and pay decisions. To ensure fairness and safeguard against abuse, performance-based pay programs should have adequate safeguards, including reasonable transparency in connection with the results of the performance management process. To this end, several of the demonstration projects publish information, such as the average performance rating, performance pay increase, and award.

www.gao.gov/cgi-bin/getrpt?GAO-04-83. To view the full product, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov. 
GAO strongly supports the need to expand pay for performance in the federal government. How it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. High-performing organizations continuously review and revise their performance management systems. These demonstration projects show an understanding that how to better link pay to performance is very much a work in progress at the federal level. Additional work is needed to strengthen efforts to ensure that performance management systems are tools to help agencies manage on a day-to-day basis. In particular, there are opportunities to (1) use organizationwide competencies to evaluate employee performance that reinforce behaviors and actions supporting the organization’s mission; (2) translate employee performance ratings into pay decisions so that managers make meaningful distinctions between top and poor performers using objective and fact-based information; and (3) provide information to employees about the results of performance appraisals and pay decisions to ensure that reasonable transparency and appropriate accountability mechanisms are in place. The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. High-performing organizations have found that to successfully transform themselves, they must often fundamentally change their cultures so that they are more results-oriented, customer-focused, and collaborative in nature. To foster such cultures, these organizations recognize that an effective performance management system can be a strategic tool to drive internal change and achieve desired results. 
Public sector organizations both in the United States and abroad have implemented a selected, generally consistent set of key practices for effective performance management that collectively create a clear linkage—“line of sight”—between individual performance and organizational success. Based on previously issued reports on public sector organizations’ approaches to reinforce individual accountability for results, GAO identified key practices that federal agencies can consider as they develop modern, effective, and credible performance management systems. These key practices include the following. 1. Align individual performance expectations with organizational goals. An explicit alignment helps individuals see the connection between their daily activities and organizational goals. 2. Connect performance expectations to crosscutting goals. Placing an emphasis on collaboration, interaction, and teamwork across organizational boundaries helps strengthen accountability for results. 3. Provide and routinely use performance information to track organizational priorities. Individuals use performance information to manage during the year, identify performance gaps, and pinpoint improvement opportunities. 4. Require follow-up actions to address organizational priorities. By requiring and tracking follow-up actions on performance gaps, organizations underscore the importance of holding individuals accountable for making progress on their priorities. 5. Use competencies to provide a fuller assessment of performance. Competencies define the skills and supporting behaviors that individuals need to effectively contribute to organizational results. 6. Link pay to individual and organizational performance. Pay, incentive, and reward systems that link employee knowledge, skills, and contributions to organizational results are based on valid, reliable, and transparent performance management systems with adequate safeguards. 7. Make meaningful distinctions in performance. 
Effective performance management systems strive to provide candid and constructive feedback and the necessary objective information and documentation to reward top performers and deal with poor performers. 8. Involve employees and stakeholders to gain ownership of performance management systems. Early and direct involvement helps increase employees’ and stakeholders’ understanding and ownership of the system and belief in its fairness. 9. Maintain continuity during transitions. Because cultural transformations take time, performance management systems reinforce accountability for change management and other organizational goals. www.gao.gov/cgi-bin/getrpt?GAO-03-488. To view the full report, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government must have the capacity to plan more strategically, react more expeditiously, and focus on achieving results. Critical to the success of this transformation are the federal government’s people—its human capital. We have commended the progress that has been made in addressing human capital challenges in the last few years. Still, significant opportunities exist to improve strategic human capital management to respond to current and emerging 21st century challenges. A key question, for example, is how to update the federal government’s classification and compensation systems to be more market-based and performance-oriented. The Administration’s draft proposed “Working for America Act” is intended to ensure that agencies are equipped to better manage, develop, and reward their employees. Under this proposal, the Office of Personnel Management (OPM) is to design a new core classification and pay system, among other things. In addition, the draft proposal amends some provisions of Title 5 covering labor management relations and adverse actions and appeals. This testimony presents preliminary observations on the draft proposal; presents the principles, criteria, and processes for human capital reform; and suggests next steps for selected and targeted actions. GAO supports moving forward with appropriate human capital reforms and believes that implementing more market-based and performance-oriented pay systems is both doable and desirable. Importantly, broad-based human capital reform must be part of a broader strategy of change management and performance improvement initiatives and cannot be simply overlaid on existing ineffective performance management systems. In addition, organizations need to build up their basic management capacity and must have adequate resources to properly design and effectively implement more market-based and performance-oriented systems. 
Before implementing dramatic human capital reforms, executive branch agencies should follow a phased approach that meets a “show me” test. That is, each agency should be authorized to implement a reform only after it has shown it has met certain conditions, including an assessment of its related institutional infrastructure and an independent certification by OPM that such infrastructure meets specified statutory standards. In any event, OPM’s and agencies’ related efforts should be monitored by Congress. Given the above, GAO has the following observations on the draft proposal. Congress should make pay and performance management reforms the first step in governmentwide reforms. The draft proposal incorporates many of the key principles of more market-based and performance-oriented pay systems and requires that OPM certify that each agency’s pay for performance system meets prescribed criteria. Going forward, OPM should define in regulation what it will take in terms of fact-based and data-driven analyses for agencies to demonstrate that they are ready to receive this certification and implement new authorities. OPM should play a key leadership and oversight role in helping individual agencies and the government as a whole work towards overcoming a broad range of human capital challenges. OPM’s role would be expanded in several areas under the draft proposal. It is unclear whether OPM has the current capacity to discharge these new responsibilities. Congress should move more cautiously in connection with labor management relations and adverse actions and appeals reforms. Selected federal agencies have been implementing more market-based and performance-oriented pay systems for some time and thus they have built a body of experience and knowledge about what works well and what does not that allows the sharing of lessons learned. 
On the other hand, the federal government has had far less experience in changes regarding labor management relations and adverse actions and appeals. Congress may wish to monitor the Departments of Homeland Security's and Defense's implementation of related authorities, including lessons learned, before moving forward in these areas for the rest of the federal government.
The U.S. export control system is about managing risk; exports to some countries involve less risk than to other countries and exports of some items involve less risk than others. Under U.S. law, the President has the authority to control and require licenses for the export of items that may pose a national security or foreign policy concern. The President also has the authority to remove or revise those controls as U.S. concerns and interests change. In doing so, the President is not required under U.S. law to conduct a foreign availability analysis. In 1995, as a continuation of changes begun in the 1980s, the executive branch reviewed export controls on computer exports to determine how changes in computer technology and its military applications should affect U.S. export control regulations. In announcing its January 1996 change to HPC controls, the executive branch stated that one goal of the revised export controls was to permit the government to tailor control levels and licensing conditions to the national security or proliferation risk posed at a specific destination. A key element of the executive branch review of HPC export controls was a Stanford University study, jointly commissioned by the Commerce and Defense Departments. Among other things, the study was tasked to provide an assessment of the availability of HPCs in selected countries and the capabilities of those countries to use HPCs for military and other national security applications. The study concluded that (1) U.S.-manufactured computer technology between 4,000 and 5,000 millions of theoretical operations per second (MTOPS) was widely available and uncontrollable worldwide, (2) U.S.-manufactured computer technology up to 7,000 MTOPS would become widely available and uncontrollable worldwide by 1997, and (3) many HPC applications used in U.S. national security programs occur at about 7,000 MTOPS and at or above 10,000 MTOPS. 
The study also concluded that it would be too expensive for government and industry to effectively control the international diffusion of computing systems with performance below 7,000 MTOPS, and that attempts to control computer exports below this level would become increasingly ineffectual, would harm the credibility of export controls, and would unreasonably burden a vital sector of the computer industry. The study also raised concerns about the ability to control HPC exports in the future in light of advances in computing technology. The export control policy implemented in January 1996 removed license requirements for most HPC exports with performance levels up to 2,000 MTOPS—an increase from the previous level of 1,500 MTOPS. The policy also organized countries into four “computer tiers,” with each tier after tier 1 representing a successively higher level of concern to U.S. security interests. The policy placed no license requirements on tier 1 countries, primarily those in Western Europe and Japan. Exports of HPCs above 10,000 MTOPS to tier 2 countries in Asia, Africa, Latin America, and Central and Eastern Europe would continue to require licenses. A dual-control system was established for tier 3 countries, such as Russia and China. For these countries, HPCs up to 7,000 MTOPS could be exported to civilian end users without a license, while exports at and above 2,000 MTOPS to end users of concern for military or proliferation of weapons of mass destruction reasons required a license. Exports of HPCs above 7,000 MTOPS to civilian end users also required a license. HPC exports to terrorist countries in tier 4 were essentially prohibited. (See appendix II for details on the four-tier system of export controls.) The January 1996 regulation also made other changes. 
It specified that exporters would be responsible for (1) determining whether an export license is required, based on the MTOPS level of the computer; (2) screening end users and end uses for military or proliferation concerns; and (3) keeping records and reporting on exports of computers with performance levels above 2,000 MTOPS. In addition to the standard record-keeping requirements, the regulation added requirements for the date of the shipment, the name and address of the end user and of each intermediate consignee, and the end use of each exported computer. The Fiscal Year 1998 National Defense Authorization Act (P.L. 105-85) modified the policy for determining whether an individual license is required and now requires exporters to notify the Commerce Department of any planned sales of computers with performance levels greater than 2,000 MTOPS to tier 3 countries. The government has 10 days to assess and object to a proposed HPC sale. The law also now requires Commerce to perform post-shipment verifications (PSV) on all HPC exports with performance levels over 2,000 MTOPS to tier 3 countries. The Commerce Department promulgated regulations implementing the law on February 3, 1998. The Stanford study, used as a key element by the executive branch in its decision to revise HPC export controls, had significant limitations. It lacked empirical evidence or analysis regarding its conclusion that HPCs were uncontrollable and, although tasked with doing so, it did not assess the capabilities of countries of concern to use HPCs for military and other national security applications. The study itself identified as a major limitation its inability to assess capabilities of countries of concern to use HPCs for their military programs or national security applications, on the basis that such information was not available, and recommended that such an assessment be done. The study noted that trends in HPC technology development could affect U.S. 
security and the ability to control HPC exports in the future and needed further study. Despite the study’s limitations, the executive branch decided to relax HPC export controls. The Stanford study accumulated information from computer companies on U.S. HPC market characteristics and concluded—without empirical evidence or analysis—that computers between 4,000 and 5,000 MTOPS were already available worldwide and uncontrollable and that computers at about 7,000 MTOPS would be widely available and uncontrollable by 1997. Using the findings from the Stanford study, executive branch officials set the computer performance control thresholds for each tier. However, these officials could neither explain nor provide documentation as to how the executive branch arrived at the decision to set the license requirements for exports of HPCs to tier 3 countries for military or proliferation end users at 2,000 MTOPS, even though the study concluded that computing power below 4,000 or 5,000 MTOPS was already “uncontrollable.” The study identified the following six factors as affecting controllability of HPCs: computer power, ease of upgrading, physical size, numbers of units manufactured and sold, sources of sales (direct sales or through resellers), and the cost of entry-level systems. It described uncontrollability as the relationship between the difficulty of controlling computers and the willingness of government and industry to meet the costs of tracking and controlling them. The study asserted that as U.S. HPCs were sold openly for 2 years, their export would become uncontrollable. Part of the study’s rationale was that, as older HPCs are replaced by newer models 2 years after product introduction, original vendors may no longer have information on where replaced HPCs are relocated. The study also presumed a level of “leakage” of computers to countries of concern from U.S. HPC sources and asserted that the costs of controlling such leakage were no longer tolerable.
However, the study did not attempt to calculate or specify those costs. In addition, the study suggested only vague thresholds for these six factors to determine “uncontrollability.” For example, it noted that the threshold at which it becomes difficult to track numbers of units could vary from 200 to several thousand. The study did not provide analysis or empirical evidence to support its assumptions or conclusions. Although the Stanford study was tasked with assessing the capabilities of countries of concern to use HPCs for military and other national security applications, it did not do so. The study discussed only U.S. applications of HPCs for military purposes. According to the study’s principal author, data on other countries’ use of HPCs for military and other national security purposes was insufficient to make such assessments because the U.S. government does not gather such data in a systematic fashion. The report recommended that such an analysis be done. Despite the study’s limitations and recommendations to gather better data in the future on other countries’ use of HPCs for military and other national security purposes, the executive branch raised the MTOPS thresholds for HPC export controls and established the four-tier export control structure. The former Deputy Assistant Secretary of Defense for Counterproliferation Policy explained that because DOD was not tasked to conduct a threat assessment, it did not do so. Instead, the executive branch assessed countries on the basis of six criteria and assigned them to a particular tier. 
The six criteria were (1) evidence of ongoing programs of national security concern, including proliferation of weapons of mass destruction with associated delivery systems and regional stability and conventional threats; (2) membership in or adherence to non-proliferation and export control regimes; (3) an effective export control system, including enforcement and compliance programs and an associated assessment of diversion risks; (4) overall relations with the United States; (5) whether U.N. sanctions had been imposed; and (6) prior licensing history. Prior to the executive branch’s decision to change computer thresholds, scientists at Department of Energy (DOE) national laboratories and other U.S. government officials had accumulated information to show how countries of concern could use HPCs to facilitate the design of nuclear weapons and to improve advanced nuclear weapons in the absence of tests of nuclear explosives. However, this information was not used as part of the decision-making process for revising HPC export controls, according to the Commerce Department. In December 1997, the House Committee on National Security directed DOE and DOD to assess the national security impacts of exporting HPCs with performance levels between 2,000 and 7,000 MTOPS to tier 3 countries. In June 1998, two and a half years after the executive branch revised HPC export controls, DOE concluded its study on how countries like China, India, and Pakistan can use HPCs to improve their nuclear programs. According to the DOE study, the impact of HPC acquisition depends on the complexity of the weapon being developed and, even more importantly, on the availability of high-quality, relevant test data.
The study concluded that “the acquisition and application of HPCs to nuclear weapons development would have the greatest potential impact on the Chinese nuclear program—particularly in the event of a ban on all nuclear weapons testing.” Also, the study indicated that India and Pakistan may now be able to make better use of HPCs in the 1,000 to 4,000 MTOPS range for their nuclear weapons programs because of the testing data they acquired in May 1998 from underground detonations of nuclear devices. The potential contribution to the Russian nuclear program is less significant because of Russia’s robust nuclear testing experience, but HPCs can contribute to Russia’s confidence in the reliability of its nuclear stockpile. An emerging nuclear state is likely to be able to produce only rudimentary nuclear weapons of comparatively simple designs, for which personal computers are adequate. We were told that DOD’s study of national security impacts had not been completed as of September 1, 1998, in part because the Department had not received requested information from the Commerce Department until after July 1. The Stanford study noted that trends in HPC technology development may pose security and export control challenges and recommended further study to determine their implications for national security and export controls. The technology trends of concern include other countries’ ability (1) to upgrade the performance of individual computers and (2) to link individual computers to achieve higher performance levels. The Stanford study team reviewed the computer industry’s technological advances in parallel processing and concluded that such advances as “scalability” and “clustering” contribute to the uncontrollability of high performance computing worldwide and are inevitably reducing the effectiveness of U.S. export controls.
“Scalability” refers to the capability to increase a system’s computer performance level by adding processor boards or by acquiring increasingly powerful microprocessors. “Clustering” refers to connecting many personal computers or workstations to achieve higher computing performance in a network of interconnected systems, working cooperatively and concurrently on one or several tasks. Scalability and clustering offer opportunities to increase computer power without the need to develop the custom-built single processors traditionally used in HPCs. Some types of HPCs are designed today to allow scalability without vendor support or even vendor knowledge. As a result, some HPCs could be exported below MTOPS thresholds without an individual license and, in theory, later be covertly scaled up to levels that exceed the threshold. We asked government agencies for information about diversions and violations of U.S. HPC export controls, but they provided no evidence that countries of concern have increased the computing power of U.S.-exported machines in violation of export restrictions. We found no U.S. government reviews of alternatives to address these security concerns, although authors of the Stanford study and others with whom we spoke identified various options that could be assessed. These include (1) requiring government review and consideration of machines at their highest scalable MTOPS performance levels and (2) requiring that HPCs exported to tier 3 countries be physically modified to prevent upgrades beyond the allowed levels. The executive branch’s January 1996 export control revision (1) increased thresholds for requiring licenses, which resulted in a reduction in the numbers of licensed HPCs; (2) shifted some end-use screening responsibility from the government to the computer industry, until this policy was revised in 1998; and (3) required HPC manufacturers to keep records of the end users of their HPC exports.
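The controllability concern raised by clustering, described above, comes down to simple arithmetic: many individually exportable machines can in aggregate exceed a control threshold. A rough sketch, assuming aggregate performance is approximated by per-node rating times node count discounted for interconnect overhead (the official composite-rating formula is more involved, and the efficiency factor here is our own illustrative assumption):

```python
def cluster_mtops(node_mtops, nodes, efficiency=1.0):
    """Rough aggregate-performance estimate for a cluster of identical
    nodes. The efficiency factor is an assumed discount for interconnect
    overhead; this is not the official composite MTOPS formula."""
    return node_mtops * nodes * efficiency

# Twenty 500-MTOPS workstations, each individually below a 2,000-MTOPS
# licensing threshold, could in aggregate exceed the 7,000-MTOPS level:
# cluster_mtops(500, 20, efficiency=0.8) -> 8000.0
```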
The government continued to have responsibility for post-shipment verifications of HPCs, which, as traditionally conducted, have reduced value. Since the export controls for computers were revised in 1996, HPC export license applications have declined from 459 applications in fiscal year 1995 to 125 applications in fiscal year 1997. In fiscal year 1995, the Commerce Department approved 395 license applications for HPC exports and denied 1. In fiscal year 1997, Commerce approved 42 license applications for HPC exports and denied 6. The remainder of the applications in each year were withdrawn without action. Changes in the numbers of both licensed and unlicensed exports may not be attributable entirely to the change in export controls. However, we did note some characteristics of U.S. HPC exports since the revision. For example, while HPC exports increased to each tier from January 1996 through September 1997, 72 percent of machines were sold to tier 1 countries. Also during this period, 77 HPCs were exported to China and 19 were exported to India, all without individual licenses. Most U.S. HPCs exported in this period (about 85 percent) had performance levels between 2,000 and 5,000 MTOPS. (See appendix III for details on HPC exports.) The executive branch shifted some government oversight responsibility to the computer industry, especially for tier 3 countries. Exporters became responsible for determining whether exports required a license by screening end users and end uses for military or proliferation concerns (end-use screening). However, some industry and government officials concluded that the computer industry lacked the necessary information to distinguish between military and civilian end users in some tier 3 countries—particularly China. Because of concerns about U.S.
HPCs being obtained by countries of proliferation concern for possible use in weapons-related activities, the Congress enacted a provision in Public Law 105-85 that required exporters to notify the Commerce Department of all proposed HPC sales over 2,000 MTOPS to tier 3 countries. The law gives the government an opportunity to assess these exports within 10 days to determine the need for a license, and it can use information that may not be available to the exporter. Pursuant to the Export Administration Regulations, exporters are required to keep accurate records of each licensed and unlicensed export of a computer over 2,000 MTOPS to any destination. These records are to include names and addresses of each end user and each “intermediate consignee” (resellers or distributors). Exporters must also provide quarterly reports to Commerce on license-exempt exports—almost 96 percent of the total HPC exports in the past 2 years. The government relies on the exporters’ data for end-use information, but we found that companies had reported inconsistent and incomplete data, sometimes listing intermediate consignees as end users. For example, one company reported data for only one intermediate consignee, even though company officials told us that the company uses multiple resellers. Company officials noted that the company sells computers to companies in other countries, which then sell the computers to other, unknown end users. A second company provided “end-use statements” from its resellers, rather than the actual end users, and identified computers’ end use for several overseas sales as “resale.” In contrast, a third company shows its resellers as resellers, rather than as end users. Company officials said that the company contractually requires its resellers to identify and provide end-use statements from the ultimate end users.
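The record-keeping requirements described above amount to a simple per-shipment schema. The sketch below is illustrative only; the field names and example values are our own, not regulatory terms of art.

```python
from dataclasses import dataclass, field

@dataclass
class HPCExportRecord:
    """Illustrative record for one export of a computer over 2,000 MTOPS,
    reflecting the fields described in the text (shipment date, end-user
    name and address, intermediate consignees, and end use)."""
    shipment_date: str
    mtops: float
    end_user_name: str
    end_user_address: str
    end_use: str
    intermediate_consignees: list = field(default_factory=list)  # resellers/distributors

# Listing a reseller as the end user, as some companies did, leaves the
# ultimate end user unknown (hypothetical example values):
record = HPCExportRecord(
    shipment_date="1997-03-15",
    mtops=4_000,
    end_user_name="Example Reseller Ltd.",
    end_user_address="(reseller address)",
    end_use="resale",
)
```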
The revision of HPC export controls did not reduce the government’s responsibility for certain safeguards procedures, notably conducting PSVs. Under current law, Commerce is required to conduct PSVs for all HPC exports over 2,000 MTOPS to tier 3 countries. While PSVs are important for detecting and deterring physical diversions of HPCs, PSVs, as traditionally conducted, do not verify computer end use. Also, some countries do not allow the United States to conduct them. China, for example, had not allowed PSVs, but in June 1998, it reportedly agreed to do so. U.S. government officials agreed that the way PSVs of computers have traditionally been conducted has reduced their value because such PSVs establish only the physical presence of an HPC. However, this step assures the U.S. government that the computer has not been physically diverted. According to DOE laboratory officials, it is easy to conceal how a computer is being used. They believed that the U.S. government officials performing the verifications cannot make such a determination, partly because they have received no computer-specific training. Although it is possible to verify how an HPC is being used through such actions as reviewing internal computer data, this would be costly and intrusive and would require experts’ sophisticated computer analysis. Another limitation of PSVs concerns sovereignty issues. Host governments in some countries of greatest concern, notably China, have precluded or restricted the U.S. government’s ability to conduct PSVs. Three European countries that we visited (the United Kingdom, Germany, and France) also do not allow U.S. government officials to conduct PSVs. However, they perform the checks themselves and provide the results to the U.S. government. The government makes limited efforts to monitor exporters’ and end users’ compliance with explicit conditions attached to export licenses.
It relies largely on HPC exporters for end-use monitoring and may require them or the end users to safeguard the exports by limiting access to the computers or inspecting computer logs and outputs. The end user may also be required to agree to on-site inspections, even on short notice, by U.S. government or exporting company officials, who would review programs and software used on the computer, or to remote electronic monitoring of the computer. Commerce officials stated that they may have reviewed computer logs in the past, but do not do so anymore, and said that they have not conducted any short-notice visits and that they do not do remote monitoring. They said that, ultimately, monitoring safeguards plans is the exporter’s responsibility. As requested, we evaluated the current foreign availability of HPCs. Using the EAA’s general description of foreign availability as our criterion, our analysis showed that subsidiaries of U.S. companies dominate the overseas sales of HPCs. These companies primarily compete against one another, with limited competition from foreign suppliers in Japan and Germany. We also obtained information on the capability of certain tier 3 countries to build their own HPCs and found that their capability to produce machines of quantity, quality, and power comparable to those of the major HPC-supplier countries is limited. The EAA describes foreign availability as goods or technology available without restriction to controlled destinations from sources outside the United States in sufficient quantities and of comparable quality to those produced in the United States so as to render the controls ineffective in achieving their purposes. We found that the only global competitors for general computer technology are three Japanese companies, two of which compete primarily for sales of high-end computers—systems sold in small volumes and performing at advanced levels.
Two of the companies reported no HPC exports to tier 3 countries, while the third company reported some exports on a regional, rather than country, basis. One German company sells HPCs primarily in Europe and has reported several sales of its HPCs over 2,000 MTOPS to tier 3 countries. One British company said it is capable of producing HPCs above 2,000 MTOPS, but company officials said it has never sold a system outside the European Union. A 1995 Commerce Department study of the HPC global market showed that American dominance had prevailed at that time as well. The study observed that American HPC manufacturers controlled the market worldwide, followed by Japanese companies. It also found that European companies controlled about 30 percent of the European market and were not competitive outside Europe. The other countries that are HPC suppliers to countries outside Europe also restrict their exports. The United States and Japan have been parties since 1984 to a bilateral arrangement, referred to as the “Supercomputer Regime,” to coordinate their export controls on HPCs. Also, both Japan and Germany, like the United States, are signatories to the Wassenaar Arrangement, whose membership criteria include adherence to non-proliferation regimes and maintenance of effective export controls. Each country also has national regulations that generally appear to afford levels of protection similar to U.S. regulations for their own and for U.S.-licensed HPCs. For example, both countries place export controls on sales of computers over 2,000 MTOPS to specified destinations, according to German and Japanese government officials. However, foreign government officials said that they do not enforce U.S. reexport controls on unlicensed U.S. HPCs. In fact, a study of German export controls noted that regulatory provisions specify that Germany has no special provisions on the reexport of U.S.-origin goods.
According to German government officials, the exporter is responsible for knowing the reexport requirements of the HPC’s country of origin. We could not ascertain whether improper reexports of HPCs occurred from tier 1 countries. Because some U.S. government and HPC industry officials consider indigenous capability to build HPCs a form of foreign availability, we examined such capabilities for tier 3 countries. Available information indicates that the capabilities of China, India, and Russia to build their own HPCs still lag well behind those of the United States, Japan, and European countries. Although details about HPC developments in each of these tier 3 countries are not well known, most officials and studies indicated that each country still produces machines in small quantities and of lower quality and power compared to U.S., Japanese, and European computers. For example, China has produced at least two different types of HPCs, called the Galaxy and Dawning series, based on U.S. technology; these machines are believed to have a performance level of about 2,500 MTOPS. Although China has announced its latest Galaxy at 13,000 MTOPS, U.S. government officials have no confirmation of this report. India has produced a series of computers called Param, which are based on U.S. microprocessors and are believed by U.S. DOE officials to be rated at about 2,000 MTOPS. These officials were denied access to test the computer’s performance. Russia’s efforts over the past three decades to develop commercially viable HPCs have used both indigenously developed and U.S. microprocessors, but have suffered from economic problems and a lack of customers. According to one DOE official, Russia has never built a computer running better than 2,000 MTOPS, and various observers believe Russia to be 3 to 10 years behind the West in developing computers.
A key element in the 1996 decision to revise HPC export controls was the findings of the Stanford study, which lacked adequate analysis of critical issues. In particular, the study used to justify the decision did not assemble empirical data or analysis to support the conclusion that HPCs below specific performance levels were uncontrollable and widely available worldwide. Moreover, the study did not analyze the capabilities of countries of concern to use HPCs to further their military programs or engage in nuclear proliferation, but rather recommended that such data be gathered and such analysis be made. Despite the limitations of the study, the executive branch revised the HPC export controls. Since the executive branch’s stated goals for the revised HPC export controls included tailoring control levels to the security and proliferation risks of specific destinations, it is vital to determine how and at what performance levels specific countries would use HPCs for military and other national security applications and how such uses would threaten U.S. national security interests in specific areas. In addition, the Stanford study identified trends in HPC technology development that may pose challenges for national security and export controls. Some alternatives to address these security challenges have been identified by authors of the Stanford study and others with whom we spoke, and these could be assessed. To complement the studies undertaken by DOD and DOE for the House Committee on National Security, we recommend that the Secretary of Defense assess and report on the national security threat and proliferation impact of U.S. exports of HPCs to countries of national security and proliferation concern. This assessment, at a minimum, should address (1) how and at what performance levels countries of concern use HPCs for military modernization and proliferation activities; (2) the threat of such uses to U.S.
national security interests; and (3) the extent to which such HPCs are controllable. We also recommend that the Secretary of Commerce, with the support of the Secretaries of Defense, Energy, and State, and the Director of the U.S. Arms Control and Disarmament Agency, jointly evaluate and report on options to safeguard U.S. national security interests regarding HPCs. Such options should include, but not be limited to, (1) requiring government review and control of the export of computers at their highest scalable MTOPS performance levels and (2) requiring that HPCs destined for tier 3 countries be physically modified to prevent upgrades beyond the allowed levels. Commerce and DOD each provided one set of general written comments on a draft of this and a companion report, and the Departments of State and Energy and the Arms Control and Disarmament Agency provided oral comments. Commerce, Defense, and State raised issues about various matters discussed in the report. The Department of Energy had no comments on the report but said it deferred to Commerce and Defense to comment on the Stanford study. The Arms Control and Disarmament Agency agreed with the substance of the report. Commerce, State, Energy, and the Arms Control and Disarmament Agency did not comment on our recommendations, but Defense did. Defense said that the assessment of national security threats and proliferation impacts that we recommended had been done in connection with the 1995 decision to revise HPC export controls and that it would consider additional options to safeguard exports of HPCs as part of its ongoing review of export controls. As noted below, we believe the question of how countries of concern could use HPCs to further their military and nuclear programs was not addressed as part of the executive branch’s 1995 decision.
Commerce commented that the President’s decision was intended to change the computer export policy from what it referred to as “a relic of the Cold War to one more in tune with today’s technology and international security environment.” Commerce said the decision was based on (1) rapid technological changes in the computer industry, (2) wide availability, (3) limited controllability, and (4) limited national security applications for HPCs. Commerce provided additional views about each of these factors. Commerce commented that our report focused on how countries might use HPCs for proliferation or military purposes and on what it called an outdated Cold War concept of “foreign availability,” rather than on these factors. Our report specifically addresses the four factors Commerce said it considered in 1995. These four factors are considered in the Stanford University study upon which the executive branch heavily relied in making its decision to revise HPC export controls. Our report agreed with the study’s treatment of technological changes in the computing industry and with its view that advances in computing technology may pose long-term security and controllability challenges. Commerce commented that our analysis of foreign availability as an element of the controllability of HPCs was too narrow, stating that foreign availability is not an adequate measure of the problem. Commerce stated that this “Cold War concept” makes little sense today, given the permeability and increased globalization of markets. We agree that rapid technological advancements in the computer industry have made the controllability of HPC exports a more difficult problem; however, we disagree that foreign availability is an outdated Cold War concept that has no relevance in today’s environment. While threats to U.S. security may have changed, they have not been eliminated.
Commerce itself recognized this in its March 1998 annual report to the Congress, which stated that “the key to effective export controls is setting control levels above foreign availability.” Moreover, the concept of foreign availability, as opposed to Commerce’s notion of “worldwide” availability, is still described in the EAA and the Export Administration Regulations as a factor to be considered in export control policy. Commerce also commented that the need to control the export of HPCs on the basis of their importance for national security applications is limited. It stated that many national security applications can be performed satisfactorily on uncontrollable low-level technology and that computers are not a “choke point” for military production. Commerce said that having access to HPCs alone will not improve a country’s military-industrial capabilities. Commerce asserted that the 1995 decision was based on research leading to the conclusion that computing power is a secondary consideration for many applications of national security concern. We asked Commerce for its research evidence, but none was forthcoming. The only evidence that Commerce cited was contained in the Stanford study. Moreover, Commerce’s position on this matter is not consistent with that of DOD. DOD, in its Militarily Critical Technologies List, has determined that high performance computing is an enabling technology for modern tactical and strategic warfare and is also important in the development, deployment, and use of weapons of mass destruction. High performance computing has also played a major role in the ability of the United States to maintain and increase the technological superiority of its war-fighting support systems. DOD has noted in its High Performance Computing Modernization Program annual plan that the use of HPC technology has led to lower costs for system deployment and improved the effectiveness of complex weapons systems.
DOD further stated that as it transitions its weapons system design and test process to rely more heavily on modeling and simulation, the nation can expect many more examples of the profound effects that HPC capability has on both military and civilian applications. Furthermore, we note that the concept of “choke point” is not a standard established in U.S. law or regulation for reviewing dual-use exports to sensitive end users for proliferation reasons. In its comments, DOD said that the Stanford study was just one of many sources of information and analysis used in the 1996 executive branch decision. We reviewed all four sources of information identified to us by DOD, DOE, State, Commerce, and Arms Control and Disarmament Agency (ACDA) officials as contributing to their assessment of computer export controls. However, the Stanford study was a key analytical study used in the decision-making process and the only source whose findings were consistently and repeatedly cited by the executive branch in official announcements, briefings, congressional testimony, and discussions with us in support of the HPC export control revision. In its comments, DOD stated that our report inaccurately characterized DOD as not considering the threats associated with HPC exports. DOD said that in 1995 it “considered” the security risks associated with the export of HPCs to countries of national security and proliferation concern. What our report actually states is that (1) the Stanford study did not assess the capabilities of countries of concern to use HPCs for military and other national security applications, as required by its tasking, and (2) the executive branch did not undertake a threat analysis of providing HPCs to countries of concern. DOD provided no new documentation to demonstrate how it “considered” these risks.
As the principal author of the Stanford study and DOD officials stated during our review, no threat assessment or assessment of the national security impact of allowing HPCs to go to particular countries of concern and of what military advantages such countries could achieve had been done in 1995. In fact, the April 1998 Stanford study on HPC export controls by the same principal author also noted that identifying which countries could use HPCs to pursue which military applications remained a critical issue on which the executive branch provided little information. In its comments, the Department of State disagreed with our presenting combined data on HPC exports to China and Hong Kong in appendix III because the U.S.-Hong Kong Policy Act of 1992 calls for the U.S. government to treat Hong Kong as a separate territory regarding economic and trade matters. While, in principle, we do not disagree with State, it should be noted that we reported in May 1997 that, given the decision to continue current U.S. policy toward Hong Kong, monitoring various indicators of Hong Kong’s continued autonomy in export controls becomes critical to assessing the risk to U.S. nonproliferation interests. Our presentation of the combined HPC export data for China and Hong Kong is intended to help illustrate a potential risk to U.S. nonproliferation interests regarding HPCs should Hong Kong’s continued autonomy in export controls be weakened. We believe that monitoring data on HPC exports to Hong Kong becomes all the more important since Hong Kong is treated as a tier 2 country, whereas China is a tier 3 country. Commerce also provided technical comments, which we have incorporated as appropriate. Commerce and DOD written comments are reprinted in appendixes IV and V, respectively, along with our evaluation of them. ACDA provided oral comments on this report and generally agreed with it.
However, it disagreed with the statement that “according to the Commerce Department, the key to effective export controls is setting control levels above the level of foreign availability of materials of concern.” ACDA stressed that this is Commerce’s position only and not the view of the entire executive branch. ACDA said that in its view (1) it is difficult to determine the foreign availability of HPCs and (2) the United States helps create foreign availability through the transfer of computers and computer parts. Our scope and methodology are in appendix I. Appendix II contains details on the four-tier system of export controls, and appendix III shows characteristics of HPC exports since the revision. We conducted our review between August 1997 and June 1998 in accordance with generally accepted government auditing standards. We will provide copies of this report to other congressional committees; the Secretaries of Commerce, Defense, Energy, and State; the Director, U.S. Arms Control and Disarmament Agency; and the Director, Office of Management and Budget. Copies will be provided to others upon request. Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. To assess the basis for the U.S. government’s 1996 decision to change HPC controls, we reviewed a 1995 Stanford University study on high performance computing and export control policy commissioned by the Commerce and Defense Departments and evaluated the executive branch’s assessment of national security risks of HPCs. We reviewed several classified charts and briefing slides prepared by the intelligence community and DOD that were identified as important support for the revision of controls. We also talked with the Stanford study’s principal authors to discuss their methodology, evidence, conclusions, and recommendations.
In addition, we met with officials of the Departments of Defense (DOD), Energy (DOE), State, and Commerce to discuss the interagency process leading up to the decision to revise controls on HPCs. We also requested, but were denied access to, information from the National Security Council on data and analyses that were used in the interagency forum to reach the final decision to revise controls. To determine how the government assessed the national security risks of allowing high performance computers (HPC) to be provided to countries of proliferation and military concern as part of the basis for the decision to revise the controls, we reviewed DOD and DOE documents on how HPCs are being used for nuclear and military applications. We discussed high performance computing for both U.S. and foreign nuclear weapons programs with DOE officials in Washington, D.C., and at the Lawrence Livermore, Los Alamos, and Sandia National Laboratories. We also met with officials of the DOD HPC Modernization Office and other officials within the Office of the Under Secretary of Defense for Acquisition and Technology, the Office of the Secretary of Defense, the Joint Chiefs of Staff, and the intelligence community to discuss how HPCs are being used for weapons design, testing and evaluation, and other military applications. Furthermore, to understand the trends occurring in computer technology, we analyzed HPC model descriptions and technical means for increasing computing performance. To identify changes in licensing activities and the implementation of certain U.S. licensing and export enforcement requirements since the revision, we reviewed two sets of data from the Commerce Department, as noted above, to determine trends in American HPC exports since the 1996 revision of controls. We examined all U.S. high performance computer-related license applications worldwide. 
We analyzed the data for trends and changes in MTOPS levels of HPC exports before and after the revision of controls; numbers of licenses approved, denied, and withdrawn without action; and HPC exports by countries and country tiers. We did not review the data for completeness, accuracy, and consistency. To determine licensing changes affecting U.S. HPC exporters since the revision of controls, we reviewed the end user and end use screening systems of major American HPC manufacturers, the Commerce Department’s implementation of the revised regulations, and selected foreign governments’ export controls. We also reviewed applicable U.S. laws and regulations governing HPC export licensing and enforcement and discussed these laws and regulations with Commerce Department officials. We obtained Commerce Department procedures on end use and end user determinations as well as records on HPC vendor inquiries to Commerce on end users. In addition, we reviewed information on intelligence community assessments of foreign end users receiving HPC exports. We also discussed end user and end use screening procedures with officials from major U.S. HPC manufacturers—Digital Equipment Corporation, Hewlett Packard/Convex, International Business Machines, and Sun Microsystems—at their corporate offices in the United States and sales offices overseas. We also visited representatives of these companies’ foreign subsidiary offices in China, Germany, Russia, Singapore, South Korea, and the United Kingdom to review end use screening procedures and documentation for selected exports. In addition, we visited selected HPC sites in China and Russia. However, the Chinese government refused us permission to visit one of three requested sites in Beijing. The Russian government, while not denying us permission to visit one site in-country, required an extended period of notification that went beyond our timeframes. 
Silicon Graphics, Inc./Cray refused to meet with us pending the outcome of an ongoing criminal investigation. To examine the effects of licensing changes on government oversight, we reviewed Commerce Department data on pre-license and post-shipment verification (PSV) checks on HPCs and related technology and on safeguards security plans associated with HPC export licenses. We discussed the implementation and utility of these checks with officials of the U.S. government, American computer companies, and host governments in the countries we visited. To determine foreign availability of HPCs, we reviewed the Export Administration Act (EAA) and Export Administration Regulations for criteria and a description of the meaning of the term. We then reviewed market research data from an independent computer research organization. We also reviewed lists, brochures, and marketing information from major U.S. and foreign HPC manufacturers in France (Bull, SA), Germany (Siemens Nixdorf Informationssysteme AG and Parsytec Computer GmbH), and the United Kingdom (Quadrics Supercomputers World, Limited) and met with them to discuss their existing and projected product lines. We also obtained market data, as available, from three Japanese HPC manufacturers. Furthermore, we met with government officials in China, France, Germany, Singapore, South Korea, and the United Kingdom to discuss each country’s indigenous capability to produce HPCs. We also obtained information from the Japanese government on its export control policies. In addition, we obtained and analyzed data from two Commerce Department databases: (1) worldwide export licensing application data for fiscal years 1994-97 and (2) export data from computer exporters provided to the Department for all American HPC exports between January 1996 and October 1997. We also reviewed a 1995 Commerce Department study on the worldwide computer market to identify foreign competition in the HPC market prior to the export control revision. 
To identify similarities and differences between U.S. and foreign government HPC export controls, we discussed with officials of the U.S. embassies and host governments information on foreign government export controls for HPCs and the extent of cooperation between U.S. and host government authorities on investigations of export control violations and any diversions of HPCs to sensitive end users. We also reviewed foreign government regulations, where available, and both foreign government and independent reports on each country’s export control system. Table II.1 and the description that follows summarize the terms of the revised export controls for HPCs according to their MTOPS levels and destinations. The revised controls announced by the President divide countries into four groups, as follows: Tier 1 (28 countries: Western Europe, Japan, Canada, Mexico, Australia, New Zealand). No prior government review (license exception) for all computers, but companies must keep records on higher performance shipments that will be provided to the U.S. government, as directed. Tier 2 (106 countries: Latin America, South Korea, Association of Southeast Asian Nations or ASEAN, Hungary, Poland, Czech Republic, Slovak Republic, Slovenia, South Africa). No prior government review (license exception) up to 10,000 MTOPS with record-keeping and reporting, as directed; individual license (requiring prior government review) above 10,000 MTOPS. Above 20,000 MTOPS, the government may require certain safeguards at the end-user location. Tier 3 (50 countries: India, Pakistan, all Middle East/Maghreb, the former Soviet Union, China, Vietnam, rest of Eastern Europe). No prior government review (license exception) up to 2,000 MTOPS. Individual license for military and proliferation-related end uses and end users and license exception for civil end users between 2,000 MTOPS and 7,000 MTOPS, with exporter record-keeping and reporting, as directed. 
Individual license for all end users above 7,000 MTOPS. Above 10,000 MTOPS, additional safeguards may be required at the end-user location. Tier 4 (7 countries: Iraq, Iran, Libya, North Korea, Cuba, Sudan, and Syria). Current policies continue to apply (i.e., virtual embargo on computer exports). For all these groups, reexport and retransfer provisions continue to apply. The government continues to implement the Enhanced Proliferation Control Initiative, which provides authority for the government to block exports of computers of any level in cases involving exports to end uses or end users of proliferation concern or risks of diversion to proliferation activities. Criminal as well as civil penalties apply to violators of the Initiative. HPC exports have increased significantly since the 1996 export control revision. Figure III.1 shows the numbers of U.S. HPCs exported to all tiers from fiscal year 1994 through fiscal year 1997. In fiscal year 1996, U.S. computer vendors exported almost twice as many HPCs as they had in fiscal years 1994 and 1995 together. In fiscal year 1997, U.S. exports of HPCs more than quadrupled the fiscal year 1996 level. Figure III.1 also shows that growth in export volume was strong for tier 1 countries. Although tier 2 growth remained ahead of tier 1 for the whole period, the greatest volume of U.S. exports went to the tier 1 countries. Table III.1 shows the largest importers of U.S. HPCs. U.S. allies and friends remained the largest market for U.S. HPC exports, but tier 2 countries were the fastest growing market. Figure III.2 summarizes the share of U.S. HPC exports that each tier took in this period. Figure III.3 shows the top five customers for U.S. HPCs and the portion of the exports they received. Finally, figure III.4 shows that most HPCs exported in the past 2 years were rated between 2,000 and 3,000 MTOPS. Since the January 1996 revision, 68 countries worldwide, out of 193 in the tier system, purchased 3,967 U.S. 
HPCs, as of September 1997. These machines represent a total HPC computing power, as calculated in MTOPS, of over 15 million MTOPS. Twenty-six countries lead the world as the dominant customers for U.S. HPCs. These countries purchased 91 percent of all HPCs sold worldwide. Together they purchased over 14 million MTOPS, representing 93 percent of the HPC computing power exported from the U.S. in the period. Table III.1 ranks the countries by the quantities of MTOPS they purchased and also shows the number of HPCs they purchased. The countries that purchased the most machines also purchased relatively more powerful machines as rated by MTOPS. As table III.1 shows, tier 1 countries, mainly U.S. friends and allies, were by far the largest market for U.S. HPCs. Figure III.2 summarizes the share of U.S. HPC exports that each tier received in the past 2 years. Since the export controls were revised, HPCs have been sold to more countries, but 26 countries account for 91 percent of all U.S. HPCs sold worldwide. Not only have tier 1 countries dominated as U.S. HPC customers, but five U.S. allies were the largest individual customers: Germany, the United Kingdom, Japan, South Korea, and France. As figure III.3 shows, these five countries together received over 52 percent of the machines exported. These countries also bought the most powerful machines, purchasing 58.36 percent of the MTOPS exported in HPCs. The large majority of U.S. HPCs exported since the revision and the largest number of the most powerful computers were sent to tier 1 and 2 countries. For example, 50, 5, and 1 HPCs with computing power greater than 13,000 MTOPS went to tiers 1, 2, and 3, respectively. Of the 50 countries in tier 3, five—China, Israel, Russia, India, and Saudi Arabia—account for about 84 percent of the computers exported to tier 3. Table III.2 shows the numbers of computers each country has received. HPCs exported to China and India required no individual licenses. 
Russia and Saudi Arabia each received 1 licensed HPC, while Israel received 18 licensed machines. China, which ranks first in the number of HPCs received by a tier 3 country, would have received even higher numbers of HPCs if its HPC totals were combined with those of its Hong Kong Special Administrative Region. Hong Kong and China rank 13th and 14th, respectively, on the HPC purchasers’ list. (See table III.1.) If Hong Kong and China were treated as one for purposes of U.S. export controls and statistics, the combined region would have purchased more machines than Italy, which ranked seventh in U.S. machines exported, and almost as many machines as Switzerland, which ranked sixth. The largest numbers of U.S. HPCs exported were less powerful HPCs. HPCs at the 2,000 to 3,000 MTOPS level made up the bulk of machines exported, about 58 percent of all HPC exports. HPCs at the 2,000 to 7,000 MTOPS level constitute the large majority of U.S. HPC exports, about 92 percent, or 3,638 machines. The remaining 8 percent of HPC exports, 329 machines, were above 7,000 MTOPS. Figure III.4 shows these relationships. The following are GAO’s comments on the Department of Commerce letter dated August 7, 1998. Commerce provided one set of written comments for this report. We addressed Commerce’s general comments relevant to this report on page 15 and its specific comments below. 1. We have made the suggested changes, as appropriate. 2. Commerce also commented that a number of foreign manufacturers indigenously produce HPCs that compete with those of the United States. 
Evidence cited by Commerce concerning particular countries with HPC manufacturing capabilities came from studies that were conducted in 1995 and that did not address or use criteria related to “foreign availability.” As stated in our report, we gathered data from multiple government and computer industry sources to find companies in other countries that met the terms of foreign availability. We met with major U.S. HPC companies in the United States, as well as with their overseas subsidiaries in a number of countries we visited in 1998, to discuss foreign HPC manufacturers that the U.S. companies considered to provide foreign availability and competition. We found few. Throughout Europe and Asia, U.S. computer subsidiary officials stated that their competition is primarily other U.S. computer subsidiaries and, to a lesser extent, Japanese companies. Our information does not support Commerce’s position on all of these manufacturers. For example, our visit to government and commercial sources in Singapore indicated that the country does not now have the capabilities to produce HPCs. We asked Commerce to provide data to support its assertion on foreign manufacturers, but we received no documentary support. In addition, although requested, Commerce did not provide documentary evidence to confirm its assertions about the capabilities and uses of India’s HPCs. 3. Commerce stated that policy makers did not receive DOE information prior to the revision of the HPC controls in 1995 and, further, that there is current disagreement within DOE over the contribution that HPCs make to nuclear programs in countries of concern. We agree that Commerce did not obtain available information on this issue from DOE laboratories, although such information was available and was provided to us upon request. In addition, we found no dissent or qualification of views identified in DOE’s official study on this matter. 4. 
Commerce stated that worldwide availability of computers indicates that there is a large installed base of systems in the tens of thousands or even millions. Commerce further stated that license requirements will not prevent diversion of HPCs unless realistic control levels are set that can be enforced effectively. While we agree, in principle, that increasing numbers of HPCs make controllability more difficult, a realistic assessment of when an item is “uncontrollable” would require an analysis of (1) actual data, (2) estimated costs of enforcing controls, and (3) the pros and cons of alternatives—such as revised regulatory procedures—that might be considered to extend controls. The executive branch did not perform such an analysis before its 1995 decision. In addition, Commerce provided no documentary evidence for its statement that there is a large installed base of HPCs in the millions. 5. Commerce stated that most European governments do not enforce U.S. export control restrictions on reexport of U.S.-supplied HPCs. We agree that at least those European governments that we visited (Germany and the United Kingdom) hold this position. However, although requested, Commerce provided no evidence to support its statement that the government of the United Kingdom has instructed its exporters to ignore U.S. reexport controls. The following are GAO’s comments on the Department of Defense letter dated July 16, 1998. DOD provided one set of written comments for this report. We addressed DOD’s general comments relevant to this report on page 17. We address DOD’s specific comments below. 1. DOD stated that the Stanford study was only one of many inputs considered by the executive branch in its 1995 assessment of computer export controls. We agree, and our report states, that there were other inputs to the decision. 
However, officials at Commerce, DOD, State, DOE, and ACDA referred us to the Stanford study in explaining the basis for the executive branch decision to revise the controls. Moreover, in announcing the 1996 HPC export control changes, the executive branch highlighted two conclusions of its review: (1) U.S.-manufactured computer technology up to 7,000 MTOPS would become widely available worldwide by 1997 and (2) many HPC applications used in U.S. national security programs occur at or above 10,000 MTOPS. Both conclusions were based on information provided only in the Stanford study. Also, DOD provided briefing slides on the HPC export control revision to the House Committee on National Security dated October 17, 1995, using information drawn almost exclusively from the Stanford study. Finally, a March 1998 Commerce Department report on foreign policy export controls noted only one source—a new Stanford study—as part of a 1998 review of HPC export controls. 2. DOD stated that it identified numerous national security applications used by the United States that require various levels of computing power, which helped to establish the revised licensing policies. We agree, and our report discusses the fact that DOD identified how the U.S. government uses HPCs for national security applications. However, this misses the point because these applications did not refer to particular countries of concern. As we noted in our report, the principal author of the Stanford study and DOD officials said that they had not performed a threat assessment or analysis of other countries’ use of HPCs for military and other national security purposes. The current DOD analysis of how countries of concern can use HPCs is being done at the request of the House National Security Committee and might provide the information needed to perform our recommended assessment. 3. We disagree that the executive branch fulfilled the intent of our recommendations. 
Specifically, it did not have information on how and at what performance levels countries of concern, such as China, India, and Pakistan, use HPCs for military modernization and nonnuclear proliferation activities. Regarding the degree of controllability of computers, neither the Stanford study nor any of the other inputs used in the 1995 computer export control review provided any empirical evidence or analysis to support assertions that HPCs with certain performance levels are widely available and uncontrollable. In fact, the 1998 Stanford study recommends procedural export licensing changes that would make such HPCs controllable again. Hai Tran
Pursuant to a legislative requirement and a congressional request, GAO reviewed concerns that U.S. national security interests may have been compromised by sales of unlicensed high performance computers (HPC) to China and Russia, focusing on: (1) the basis for the executive branch's revision of HPC export controls; (2) changes in licensing activities and the implementation of certain U.S. licensing and export enforcement requirements since the revision; and (3) the current foreign availability of HPCs, particularly for certain countries of national security concern. GAO noted that: (1) a Stanford University study on foreign availability of HPCs was a key element in the decision to revise HPC export controls; (2) however, GAO's analysis of the study showed that it had 2 significant limitations; (3) first, the study lacked empirical evidence or analysis to support its conclusion that HPCs were uncontrollable based on worldwide availability and insufficient resources to control them; (4) second, the study did not assess the capabilities of countries of concern to use HPCs for military and other national security applications; (5) the study's principal author said that U.S. government data were insufficient to make such an assessment, and the study recommended that better data be gathered so that such an analysis could be done in the future; (6) the executive branch did not undertake a threat analysis of providing HPCs to countries of concern, but raised the computing power thresholds for HPC export controls and established a four-tier control structure; (7) the 1996 revision to HPC export controls had three key consequences; (8) the number of computer export licenses issued declined from 395 in fiscal year 1995 to 42 in 1997; (9) U.S. 
HPC exporters were charged with responsibilities previously conducted by the government, including screening and reporting on the end use and end user of HPCs; (10) the regulation required HPC manufacturers to keep records of the end users of all their HPC exports over 2,000 million theoretical operations per second (MTOPS); (11) to date, information on these exports reported to the government has been incomplete; (12) responsibility for postshipment verification (PSV) checks remained with the government; (13) however, because of how PSVs for computers are implemented, their value is reduced because they verify the physical location of a HPC, but not how it is used; (14) subsidiaries of U.S. computer manufacturers dominate the overseas HPC market, and they must comply with U.S. controls; (15) three Japanese companies are global competitors of U.S. manufacturers, two of which told GAO that they had no sales to tier 3 countries such as Russia and China; (16) Japan, Germany and the United Kingdom each have export controls on HPCs similar to those of the United States, according to foreign government officials; (17) Russia, China, and India have developed HPCs, but the capabilities of their computers are believed to be limited; and (18) thus, GAO's analysis suggests that HPCs over 2,000 MTOPS are not readily available to tier 3 countries from foreign sources without restrictions.
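Purely as an illustration of the four-tier control structure summarized in appendix II, the licensing treatment can be sketched as a simple lookup. This sketch is ours, not the regulation's: the function name is hypothetical, and it collapses end-use screening to a single civil/military flag, omitting safeguards determinations, the Enhanced Proliferation Control Initiative, and reexport provisions.

```python
# Illustrative sketch (not official policy logic) of the revised 1996 HPC
# export controls: tier of destination plus MTOPS rating yields the
# approximate license treatment. Thresholds are from appendix II.

def license_requirement(tier: int, mtops: float, civil_end_use: bool = True) -> str:
    """Return the approximate license treatment for an HPC export."""
    if tier == 1:
        # License exception for all computers; record-keeping on
        # higher-performance shipments.
        return "license exception (record-keeping on higher-performance shipments)"
    if tier == 2:
        if mtops <= 10_000:
            return "license exception (record-keeping and reporting)"
        return "individual license (safeguards possible above 20,000 MTOPS)"
    if tier == 3:
        if mtops <= 2_000:
            return "license exception"
        if mtops <= 7_000:
            # Between 2,000 and 7,000 MTOPS the end use matters.
            return ("license exception for civil end users" if civil_end_use
                    else "individual license (military/proliferation end use)")
        return "individual license (safeguards possible above 10,000 MTOPS)"
    if tier == 4:
        return "virtual embargo"
    raise ValueError("tier must be 1-4")

# Example: a 5,000-MTOPS machine to a tier 3 civil end user.
print(license_requirement(3, 5_000))  # license exception for civil end users
```

The sketch makes the report's point concrete: most machines actually exported (2,000 to 7,000 MTOPS, to tiers 1-3 civil end users) required no individual license at all.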
To obtain a full funding grant agreement, a project must first progress through a local or regional review of alternatives, develop preliminary engineering plans, and obtain FTA’s approval for final design. TEA-21 requires that FTA evaluate projects against “project justification” and “local financial commitment” criteria contained in the act (see fig. 1). FTA assesses the project justification and technical merits of a project proposal by reviewing the project’s mobility improvements, environmental benefits, cost-effectiveness, and operating efficiencies. In assessing a project’s local financial commitment, FTA assesses the project’s finance plan for evidence of stable and dependable financing sources to construct, maintain, and operate the proposed system or extension. Although FTA’s evaluation requirements existed prior to TEA-21, the act requires FTA to (1) develop a rating for each criterion as well as an overall rating of “highly recommended,” “recommended,” or “not recommended” and use these evaluations and ratings in approving projects’ advancement toward obtaining grant agreements; and (2) issue regulations on the evaluation and rating process. TEA-21 also directs FTA to use these evaluations and ratings to decide which projects to recommend to the Congress for funding in a report due each February. These funding recommendations are also reflected in DOT’s annual budget proposal. In the annual appropriations act for DOT, the Congress specifies the amounts of funding for individual New Starts projects. Historically, federal capital funding for transit systems, including the New Starts program, has largely supported rail systems. Under TEA-21 the FTA Capital Program has been split 40 percent/40 percent/20 percent among New Starts, Rail Modernization, and Bus Capital grants. Although fixed-guideway bus projects are eligible under the New Starts program, relatively few bus-related projects are now being funded under this program. 
Although FTA has been faced with an impending transit budget crunch for several years, the agency is likely to end the TEA-21 authorization period with about $310 million in unused commitment authority if its proposed fiscal year 2003 budget is enacted. This will occur for several reasons. First, in fiscal year 2001, the Congress substantially increased FTA’s authority to commit future federal funding (referred to as contingent commitment authority). This allowed FTA to make an additional $500 million in future funding commitments. Without this action, FTA would have had insufficient commitment authority to fund all of the projects ready for a grant agreement. Second, to preserve commitment authority for future projects, FTA did not request any funding for preliminary engineering activities in the fiscal year 2002 and 2003 budget proposals. According to FTA, it had provided an average of $150 million a year for fiscal years 1998 through 2001 for projects’ preliminary engineering activities. Third, FTA took the following actions that had the effect of slowing the commitment of funds or making funds available for reallocation: FTA tightened its review of projects’ readiness and technical capacity. As a result, FTA recommended fewer projects for funding than expected for fiscal years 2002 and 2003. For example, only 2 of the 14 projects that FTA officials estimated last year would be ready for grant agreements are being proposed for funding commitments in fiscal year 2003. FTA increased its available commitment authority by $157 million by releasing amounts associated with a project in Los Angeles for which the federal funding commitment had been withdrawn. Although the New Starts program will likely have unused commitment authority through fiscal year 2003, the carry-over commitments from existing grant agreements that will need to be funded during the next authorization period are substantial. 
FTA expects to enter the period likely covered by the next authorization (fiscal years 2004 through 2009) with over $3 billion in outstanding New Starts grant commitments. In addition, FTA has identified five projects estimated to cost $2.8 billion that will likely be ready for grant agreements in the next 2 years. If these projects receive grant agreements and the total authorization for the next program is $6.1 billion—the level authorized under TEA-21—most of those funds will be committed early in the authorization period, leaving numerous New Starts projects in the pipeline facing bleak federal funding possibilities. Some of the projects anticipated for the next authorization are so large they could have considerable impact on the overall New Starts program. For example, the New York Long Island Railroad East Side Access project may extend through multiple authorization periods. The current cost estimate for the East Side Access project is $4.4 billion, including a requested $2.2 billion in New Starts funds. By way of comparison, the East Side Access project would require about three times the total and three times the federal funding of the Bay Area Rapid Transit District Airport Extension project, which at about $1.5 billion was one of the largest projects under TEA-21. In order to manage the increasing demand for New Starts funding, several proposals have been made to limit the amount of New Starts funds that could be applied to a project, allowing more projects to receive funding. For instance, the President’s fiscal year 2002 budget recommended that federal New Starts funding be limited to 50 percent of project costs starting in fiscal year 2004. (Currently, New Starts funding—and all federal funding—is capped at 80 percent.) A 50 percent New Starts cap would, in part, reflect a pattern that has emerged in the program. 
Currently, few projects are asking for the maximum 80 percent federal New Starts share, and many have already significantly increased the local share in order to be competitive under the New Starts program. In the last 10 years, the New Starts share for projects with grant agreements has been averaging about 50 percent. In April 2002, we estimated that a 50 percent cap on the New Starts share for projects with signed full funding grant agreements would have reduced the federal commitments to these projects by $650 million. Federal highway funds such as Congestion Mitigation and Air Quality funds can still be used to bring the total federal funding up to 80 percent. However, because federal highway funds are controlled by the states, using these funds for transit projects necessarily requires state-transit district cooperation. The potential effect of changing the federal share is not known. Whether a larger local match for transit projects could discourage local planners from supporting transit is unknown, but local planners have expressed this concern. According to transit officials, some projects could accommodate a higher local match, but others would have to be modified, or even terminated. Another possibility is that transit agencies may look more aggressively for ways to contain project costs or search for lower cost transit options. With demand high for New Starts funds, a greater emphasis on lower cost options may help expand the benefits of federal funding for mass transit; Bus Rapid Transit shows promise in this area. Bus Rapid Transit involves coordinated improvements in a transit system’s infrastructure, equipment, operations, and technology that give preferential treatment to buses on urban roadways. 
Bus Rapid Transit is not a single type of transit system; rather, it encompasses a variety of approaches, including (1) using buses on exclusive busways, (2) operating buses on HOV lanes shared with other vehicles, and (3) improving bus service on city arterial streets. Busways—special roadways designed for the exclusive use of buses—can be totally separate roadways or operate within highway rights-of-way separated from other traffic by barriers. Buses on HOV lanes operate on limited-access highways designed for long-distance commuters. Bus Rapid Transit on busways or HOV lanes is sometimes characterized by the addition of extensive park and ride facilities along with entrance and exit access for these lanes. Bus Rapid Transit systems using arterial streets may include lanes reserved for the exclusive use of buses and street enhancements that speed buses and improve service. During the review of Bus Rapid Transit systems that we completed last year, we found that at least 17 cities in the United States were planning to incorporate aspects of Bus Rapid Transit into their operations. FTA has begun to support the Bus Rapid Transit concept and expand awareness of new ways to design and operate high capacity Bus Rapid Transit systems as an alternative to building Light Rail systems. Because Light Rail systems operate in both exclusive and shared right-of-way environments, the limits on their length and frequency of service are stricter than those for heavy rail systems. Light Rail systems have gained popularity as a lower-cost option to heavy rail systems, and since 1980, Light Rail systems have opened in 13 cities. Our September 2001 report showed that all three types of Bus Rapid Transit systems generally had lower capital costs than Light Rail systems. On a per mile basis, the Bus Rapid Transit projects that we reviewed cost less on average to build than the Light Rail projects. 
We examined 20 Bus Rapid Transit lines and 18 Light Rail lines and found Bus Rapid Transit capital costs averaged $13.5 million per mile for busways, $9.0 million per mile for buses on HOV lanes, and $680,000 per mile for buses on city streets, when adjusted to 2000 dollars. For the 18 Light Rail lines, capital costs averaged about $34.8 million per mile, ranging from $12.4 million to $118.8 million per mile, when adjusted to 2000 dollars. On a capital cost per mile basis, the three different types of Bus Rapid Transit systems have average capital costs that are 39 percent, 26 percent, and 2 percent of the average cost of the Light Rail systems we reviewed. The higher capital costs per mile for Light Rail systems are attributable to several factors. First, the Light Rail systems contain elements not required in the Bus Rapid Transit systems, such as train signal, communications, and electrical power systems with overhead wires to deliver power to trains. Light Rail also requires additional materials needed for the guideway—rail, ties, and track ballast. In addition, if a Light Rail maintenance facility does not exist, one must be built and equipped. Finally, Light Rail vehicles, while having higher carrying capacity than most buses, also cost more—about $2.5 million each. In contrast, according to transit industry consultants, a typical 40-foot transit bus costs about $283,000, and a higher-capacity bus costs about $420,000. However, buses that incorporate newer technologies for low emissions or that run on more than one fuel can cost more than $1 million each. We also analyzed operating costs for six cities that operated both Light Rail and some form of Bus Rapid Transit service. Whether Bus Rapid Transit or Light Rail had lower operating costs varied considerably from city to city and depended on what cost measure was used. In general, we did not find a systematic advantage for one mode over the other on operating costs. 
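The 39, 26, and 2 percent figures above are simple ratios of the average per-mile capital costs. A minimal sketch, using the dollar figures reported in the text, reproduces them:

```python
# Reproducing the capital-cost comparison from the averages above
# (2000 dollars per mile, as reported in the text).
light_rail_avg = 34.8e6  # average Light Rail capital cost per mile

brt_avgs = {
    "busways": 13.5e6,
    "buses on HOV lanes": 9.0e6,
    "buses on city streets": 0.68e6,
}

for mode, cost in brt_avgs.items():
    # Each Bus Rapid Transit average as a share of the Light Rail average.
    print(f"{mode}: {cost / light_rail_avg:.0%} of the Light Rail average")
# busways: 39%, buses on HOV lanes: 26%, buses on city streets: 2%
```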
The performance of the Bus Rapid Transit and Light Rail systems can be comparable. For example, in the six cities we reviewed that had both types of service, Bus Rapid Transit generally operated at higher speeds. In addition, the capacity of Bus Rapid Transit systems can be substantial; we did not see Light Rail having a significant capacity advantage over Bus Rapid Transit. For example, the highest ridership we found on a Light Rail line was on the Los Angeles Blue Line, with 57,000 riders per day. The highest Bus Rapid Transit ridership was also in Los Angeles, on the Wilshire-Whittier line, with 56,000 riders per day. Most Light Rail lines in the United States carry about half the Los Angeles Blue Line ridership. Bus Rapid Transit and Light Rail each have a variety of other advantages and disadvantages. Bus Rapid Transit generally has the advantages of (1) being more flexible than Light Rail, (2) being able to phase in service rather than having to wait for an entire system to be built, and (3) being usable as an interim system until Light Rail is built. Light Rail has advantages, according to transit officials, associated with increased economic development and improved community image, which they believe justify higher capital costs. However, building a Light Rail system can create a bias toward building additional rail lines in the future. Transit operators with experience in Bus Rapid Transit systems told us that one of the challenges faced by Bus Rapid Transit is the negative stigma potential riders attach to buses. Officials from FTA, academia, and private consulting firms also stated that bus service has a negative image, particularly when compared with rail service. Communities may prefer Light Rail systems in part because the public sees rail as faster, quieter, and less polluting than bus service, even though Bus Rapid Transit is designed to overcome those problems.
FTA officials said that the poor image of buses was probably the result of a history of slow bus service due to congested streets, slow boarding and fare collection, and traffic lights. FTA believes that this negative image can be improved over time through bus service that incorporates Bus Rapid Transit features. A number of barriers exist to funding improved bus systems such as Bus Rapid Transit. First, an extensive pipeline of projects already exists for the New Starts Program. Bus Rapid Transit is a relatively new concept, and many potential projects have not reached the point of being ready for funding consideration because many other rail projects are further along in development. As of March 2002, only 1 of the 29 New Starts projects with existing, pending, or proposed grant agreements uses Bus Rapid Transit, and 1 of the 5 other projects near approval plans to use Bus Rapid Transit. Some Bus Rapid Transit projects do not fit the exclusive right-of-way requirements of the New Starts Program and thus would not be eligible for funding consideration. FTA also administers a Bus Capital Program with half the funding level of the New Starts Program; however, the existing Bus Capital Program is made up of small grants to a large number of recipients, which limits the program's usefulness for funding major projects. Although FTA is encouraging Bus Rapid Transit through a Demonstration Program, this program does not provide funding for construction but rather focuses on obtaining and sharing information on projects being pursued by local transit agencies. Eleven Bus Rapid Transit projects are associated with this demonstration program.
The Federal Transit Administration's (FTA) New Starts Program helps pay for designing and constructing rail, bus, and trolley projects through full funding grant agreements. The Transportation Equity Act for the 21st Century (TEA-21) authorized $6.1 billion in "guaranteed" funding for the New Starts program through fiscal year 2003. Although the level of New Starts funding is higher than ever, the demand for these resources is also extremely high. Given this high demand for new and expanded transit facilities across the nation, communities need to examine approaches that stretch the federal and local dollar yet still provide high quality transit services. Although FTA has been faced with an impending transit budget crunch for several years, it is likely to end the TEA-21 authorization period with $310 million in unused New Starts commitment authority if its proposed fiscal year 2003 budget is enacted. Bus Rapid Transit is designed to provide major improvements in the speed and reliability of bus service through barrier-separated busways, buses on High Occupancy Vehicle lanes, or improved service on arterial streets. GAO found that Bus Rapid Transit was a less expensive and more flexible approach than Light Rail service because buses can be rerouted more easily to accommodate changing travel patterns. However, transit officials also noted that buses have a poor public image. As a result, many transit planners are designing Bus Rapid Transit systems that offer service that will be an improvement over standard bus service (see GAO-02-603).
From fiscal years 2003 through 2011, ORR cared for fewer than 10,000 unaccompanied children per year. Beginning in fiscal year 2012, the number of unaccompanied children apprehended at the southwest border by DHS and transferred to ORR custody rose to unprecedented levels and peaked in fiscal year 2014 at nearly 57,500 (see fig. 1). While the number of children served by ORR in fiscal year 2015 was less than the number served in fiscal year 2014, it was still higher than in previous years. Further, DHS data show that the number of unaccompanied children apprehended at the southwest border in fiscal year 2016 through January is more than double the number apprehended during the same time period in fiscal year 2015. In response to the increased number of unaccompanied children in recent years, particularly in fiscal year 2014, ORR increased its shelter capacity (the number of beds it has available). We found that ORR was initially unprepared to care for the rapid increase in children needing services; however, ORR solicited new grantees to provide shelter services in both 2013 and 2014 and awarded additional cooperative agreements. From fiscal year 2011 through June of fiscal year 2015, the number of ORR grantees increased from 27 that operated 59 facilities to 57 that operated 140 facilities, and the number of beds available to serve unaccompanied children increased from almost 1,900 to nearly 7,800. The number of beds ORR needs depends on the number of unaccompanied children in its custody and how long these children stay in grantee facilities before they can be placed with sponsors. To further manage its capacity to care for the increased number of children, ORR updated policies and procedures to reduce the number of days children spend in its custody and expedite their release to sponsors.
Specifically, ORR simplified documentation requirements for sponsors by eliminating notarization requirements and allowing photocopies (rather than original copies) of supporting documentation, such as birth certificates. ORR also removed the fingerprinting component of background checks for parents and legal guardians with no criminal or child abuse history, reduced the maximum number of days between approval of a child’s release and actual discharge, and in some cases paid for a child’s travel to the sponsor. According to shelter staff, these changes were feasible, in part, because most children come with contact information for a relative who can serve as a sponsor. Agency officials also noted that they can now more quickly release children to their parents or other relatives. We also found that ORR is taking other actions to ensure it has the capacity to meet demand caused by increases in the number of unaccompanied children and to minimize the risks of not being able to provide care and services to these children. Specifically, ORR developed a framework for fiscal year 2015 that included plans and steps to manage its capacity, based in part on the record levels of children needing care in 2014. This framework outlines its plans to continually monitor data on the referrals of unaccompanied children and other indicators, such as apprehensions and releases, to help it assess its capacity needs. It also includes key information ORR should have and mechanisms that should be in place to meet its needs, such as an inventory of available beds, timelines and decision points for determining if and when bed capacity should be increased, and ways to operationalize these decisions. ORR’s bed capacity framework for fiscal year 2015 was based on the number of children served in fiscal year 2014. The number of children referred to ORR through most of fiscal year 2015, while high by historical standards, was less than expected, and ORR grantees had many unoccupied beds. 
However, the number of referrals began increasing toward the end of the summer and has remained relatively high through the beginning of fiscal year 2016. While developing the framework was a positive step and ORR officials said they continue to use the capacity framework as a “roadmap,” we found that they have not updated this framework for fiscal year 2016 and have not established a systematic approach to update the framework on an annual basis to account for new information so that it remains current and relevant to changing conditions. According to federal standards for internal control, an agency’s processes for decision making should be relevant to changing conditions and completely and accurately documented. We concluded that not having a documented and continually updated process for capacity planning may hinder ORR’s ability to be prepared for an increase in unaccompanied children while at the same time minimizing excess capacity to conserve federal resources. We recommended in our February 2016 report that the Secretary of HHS direct ORR to develop a process to update this bed capacity framework on an annual basis. HHS concurred with our recommendation. ORR relies on grantees to provide care for unaccompanied children, such as housing and educational, medical, and therapeutic services, and to document in children’s case files the services they provide. However, in our February 2016 report we found that documents were often missing from the 27 randomly selected case files we reviewed. Specifically, 14 case files were missing a legal presentation acknowledgement form, 10 were missing a record of group counseling sessions, and 5 were missing clinical progress notes. Grantees are required to provide these services and document that they did so. In addition, we identified several cases in which forms that were present in the files were not signed or dated. 
Although ORR uses its web-based data system to track some information about the services children receive, and grantees report on the services they provide in their annual reports, the documents contained in case files are the primary source of information about the services provided to individual children. Without all of the documents included in the case files, it is difficult for ORR to verify that required services were actually provided in accordance with ORR policy and cooperative agreements. ORR's most comprehensive monitoring of grantees occurs during on-site monitoring visits. However, we found that on-site visits to facilities have been inconsistent. According to ORR documents, during on-site monitoring visits, ORR project officers spend a week at facilities touring, reviewing children's case files and personnel files, and interviewing children and staff. Prior to fiscal year 2014, project officers were supposed to conduct on-site monitoring of facilities at least once a year. However, our review of agency data found that many facilities went several years without receiving a monitoring visit. For example, ORR did not visit 15 facilities for as many as 7 years. In 2014, ORR revised its on-site monitoring program to ensure better coverage of grantees and implemented a biennial on-site monitoring schedule. Nevertheless, ORR did not meet its goal to visit all of its facilities by the end of fiscal year 2015, citing lack of resources. Monitoring visits are intended to provide an opportunity to identify program deficiencies or areas where programs are failing to comply with ORR policies. According to standards for internal control, management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. Monitoring generally should be designed to assure that it is ongoing and occurs in the course of normal operations, is performed continually, and is ingrained in the agency's operations.
We concluded that without consistently monitoring its grantees, ORR cannot know whether they are complying with their agreements and whether children are receiving needed services. We recommended in our February 2016 report that the Secretary of HHS direct ORR to review its monitoring program to ensure that on-site visits are conducted in a timely manner, case files are systematically reviewed as part of or separate from on-site visits, and grantees properly document the services they provide to children. HHS concurred, and in its response to the report described several of its other monitoring efforts, and stated that it has created a new monitoring initiative workgroup to examine opportunities for further improvement. ORR grantees that provide day-to-day care of unaccompanied children are responsible for identifying and screening sponsors prior to releasing children to them. During children's initial intake process, case managers ask them about potential sponsors with whom they hope to reunite. Within 24 hours of identifying potential sponsors, case managers are required to send them a Family Reunification Application to complete. The application includes questions about the sponsor and other people living in the sponsor's home, including whether anyone in the household has a contagious disease or criminal history. Additionally, the application asks for information about who will care for the child if the sponsor is required to leave the United States or becomes unable to provide care. Sponsors are also asked to provide documents to establish their identity and relationship to the child. Grantees conduct background checks on potential sponsors. The types of background checks conducted depend on the sponsor's relationship to the child (see table 1). In certain circumstances prescribed by the Trafficking Victims Protection Reauthorization Act or ORR policy, a home study must also be conducted before the child is released to the sponsor.
Additionally, in certain situations, such as where there is a documented risk to the safety of the unaccompanied child, the child is especially vulnerable, and/or the case is being referred for a mandatory home study, other household members are also subjected to background checks. In our February 2016 report, we found that between January 7, 2014, and April 17, 2015, nearly 52,000 children from El Salvador, Guatemala, or Honduras were released to sponsors by ORR. Of these children, nearly 60 percent were released to a parent. Fewer than 9 percent of these children were released to a non-familial sponsor, such as a family friend, and less than 1 percent of these children were released to a sponsor to whom their family had no previous connection (see table 2). In the fall of 2014, ORR officials told us that they had not seen evidence that adults are fraudulently sponsoring unaccompanied children. Nonetheless, ORR officials told us that ORR has been monitoring the number of children it releases to sponsors, through its web-based portal, to help ensure that individuals are not sponsoring too many children unrelated to them. In August 2015, two individuals pleaded guilty to charges related to luring Guatemalan children into the United States on false pretenses in 2014. According to the indictment, one of the individuals submitted fraudulent information to ORR officials to obtain custody of six children, among other things. There is limited information available about the services provided to unaccompanied children after they leave ORR custody. According to ORR officials, a relatively small percentage of unaccompanied children received post-release services, and they said ORR’s responsibility for the other children typically ended once it transferred custody of the children to their sponsors. 
The Trafficking Victims Protection Reauthorization Act requires ORR to provide post-release services to children if a home study was conducted, and authorizes ORR to provide these services to some additional children. According to ORR data, in fiscal year 2014, slightly less than 10 percent of unaccompanied children received post-release services, including those for whom a home study was conducted. Post-release services are limited in nature and typically last a relatively short time. These services include direct assistance to the child and sponsor by ORR grantees in the form of guidance to the sponsor to ensure the safest environment possible for the child, as well as assistance accessing legal, medical, mental health, and educational services, and initiating steps to establish guardianship, if necessary. These services can also include providing information about resources available in the community and referrals to such resources. Recently, ORR has taken steps to expand eligibility criteria for post-release services to additional children. According to ORR officials, all children released to a non-relative or distant relative are now eligible for such services. In addition, in May 2015, ORR began operating a National Call Center help-line. Children who contact ORR's National Call Center within 180 days of release who have experienced or are at risk of experiencing a placement disruption are also now eligible for post-release services, according to ORR officials. And in August 2015, ORR instituted a new policy requiring grantee facility staff to place follow-up calls, referred to as Safety and Well-Being follow-up calls, to all children and their sponsors 30 days after the children are placed to determine whether they are still living with their sponsors, enrolled in or attending school, aware of upcoming removal proceedings, and safe. ORR policy requires grantees to attempt to contact the sponsor and child at least three times.
Although there is limited post-release information for unaccompanied children, ORR is in a position to compile and share the data it collects internally and with other federal and state agencies to help them better understand the circumstances these children face when they are released to their sponsors. This is because ORR already has some information from its post-release grantees on services provided to children after they leave ORR custody, and its newly instituted well-being calls and National Call Center allow it to collect additional information about these children. However, ORR does not have processes to ensure that all of these data are reliable, systematically collected, and compiled in summary form to provide useful information about this population for its use and for other government agencies, such as state child welfare services. Federal internal control standards require that an agency have relevant, reliable, and timely information to enable it to carry out its responsibilities. As a result, in our February 2016 report, we recommended that the Secretary of HHS direct ORR to develop a process to ensure all information collected through its existing post-release efforts is reliable and systematically collected so that it can be compiled in summary form and provide useful information to other entities internally and externally. HHS concurred and stated that ORR will implement an approved data collection process that will provide more systematic and standardized information on post-release services and that it would make this information available to other entities internally and externally. We found that services available to unaccompanied children through local service providers are typically the same as those available to other children without lawful immigration status.
For example, children without lawful immigration status are generally not eligible for federal benefits, such as the Supplemental Nutrition Assistance Program, Medicaid, and Temporary Assistance for Needy Families; however, they are eligible for other federal benefits such as emergency medical assistance, some public health assistance, and school meals. Local service providers we spoke with in six counties told us that the children's status would have no effect on eligibility for many of the services they provide. For example, school districts are required to educate students regardless of their immigration status. Similarly, unaccompanied children were not precluded from receiving services at health clinics we spoke with. Some local service providers expressed concerns that unaccompanied children might have unmet needs or face barriers to receiving some necessary services. For example, representatives we spoke with in four of the six school districts, as well as representatives from a county office of education, discussed the mental and behavioral health needs of these children. Similarly, local service providers told us these children had previous exposure to violence and trauma and in some cases experienced challenges related to reunification with parents they had not seen for many years. Six service providers said that these factors could contribute to behavioral and mental health needs or make the children more susceptible to gang recruitment and trafficking. Some school districts and other service providers reported challenges in attracting bilingual professionals, such as mental health providers, which can make it difficult for these children to obtain needed services. In addition, unaccompanied children also face barriers similar to those faced by other children without lawful immigration status, such as lack of health insurance, lack of knowledge about where to seek services, fear of disclosing their immigration status, and language barriers.
We also found that the level of awareness about, and services available to, unaccompanied children varied across the jurisdictions in which we spoke with stakeholders, with some jurisdictions appearing to have more resources than others. For example, in one jurisdiction we visited, the mayor's office had established a working group related to unaccompanied children that included representatives from several city departments and nonprofits. In this city, representatives from the health and education departments regularly attended immigration court to screen and enroll children in the state's Children's Health Insurance Program and to help with school enrollment. Conversely, representatives from two other mayors' offices told us that they were unaware that unaccompanied children were living in their city or had limited knowledge about the issue. With respect to unaccompanied children's immigration proceedings, we found that there are several possible outcomes and that the outcomes for many children have not yet been determined. An unaccompanied child who is in removal proceedings can apply for various types of lawful immigration status with DHS's U.S. Citizenship and Immigration Services (USCIS), including asylum and Special Immigrant Juvenile status. USCIS's asylum officers have initial jurisdiction over any asylum application filed by an unaccompanied child, even if a child is in removal proceedings. In July 2015, the Associate Director of the Refugee, Asylum and International Operations Directorate at USCIS testified that USCIS has received increasing numbers of asylum applications from unaccompanied children in recent years. USCIS received 534 such applications in fiscal year 2011 and 6,990 in fiscal year 2014. The Associate Director testified that since fiscal year 2009, USCIS has granted asylum to unaccompanied children at a rate of 42.6 percent, similar to the overall rate at which all new asylum applications were approved.
If unaccompanied children have not yet sought, or are not granted, certain immigration benefits within the jurisdiction of USCIS, there are several other possible outcomes and various forms of relief that may be available to them during immigration proceedings. For example, an immigration judge may order them removed from the United States, administratively close their case, terminate their case, allow them to voluntarily depart the United States, or grant them relief or protection from removal. From July 18, 2014, when DOJ's Executive Office for Immigration Review (EOIR) began to consistently use a code to identify cases involving unaccompanied children, to July 14, 2015, DHS initiated more than 35,000 removal proceedings for unaccompanied children. Of these 35,000 removal proceedings, EOIR data indicate that as of July 14, 2015, an immigration judge issued an initial decision in nearly 13,000 proceedings (or 36 percent). Of those 13,000 decisions, about 7,000 (or 55 percent) resulted in a removal order for the unaccompanied child. According to EOIR data, about 6,100 (or 88 percent) of those initial decisions that resulted in removal orders were issued in absentia, which is when a child fails to appear in court for their removal proceedings and the immigration judge conducts the proceeding in the child's absence. However, a judge's initial decision does not necessarily indicate the end of the removal proceedings. For example, cases that are administratively closed can be reopened, new charges may be filed in cases that are terminated, and children may appeal a removal order. In addition, a child who receives a removal order in absentia, and with respect to whom a motion to reopen the case has been properly filed, is granted a stay of removal pending a decision on the motion by the immigration judge.
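The outcome percentages above are simple ratios of the EOIR counts. A small sketch using the rounded counts from the text lands within about a percentage point of the reported 36, 55, and 88 percent figures; the small differences arise because the reported percentages were computed from the exact underlying counts rather than the rounded ones quoted here:

```python
# Outcome shares for removal proceedings involving unaccompanied children
# (July 18, 2014 - July 14, 2015). Counts are the rounded figures from the
# text, so each ratio falls within about a point of the reported percentage.
proceedings_initiated = 35_000  # "more than 35,000"
initial_decisions = 13_000      # "nearly 13,000" (reported as 36 percent)
removal_orders = 7_000          # "about 7,000" (reported as 55 percent)
in_absentia = 6_100             # "about 6,100" (reported as 88 percent)

decided_share = initial_decisions / proceedings_initiated  # ~0.37
removal_share = removal_orders / initial_decisions         # ~0.54
absentia_share = in_absentia / removal_orders              # ~0.87
```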
Overall, according to DHS’s Immigration and Customs Enforcement (ICE) data, from fiscal year 2010 through August 15, 2015, based on final orders of removal, ICE removed 10,766 unaccompanied children, 6,751 of whom were from El Salvador, Guatemala, or Honduras. Chairman Grassley, Ranking Member Leahy, and Members of the Committee, this concludes my prepared remarks. I would be happy to answer any questions that you may have. For further information regarding this testimony, please contact Kay E. Brown at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Gale Harris (Assistant Director), David Barish (Analyst-in-Charge), James Bennett, Ramona Burton, Jamila Jones Kennedy, Jean McSween, James Rebbe, Almeta Spencer, and Kate van Gelder. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
ORR is responsible for coordinating and implementing the care and placement of unaccompanied children. The number of children placed in ORR's care rose from nearly 6,600 in fiscal year 2011 to nearly 57,500 in fiscal year 2014. GAO was asked to review how ORR managed the care of these children. This testimony is based on GAO's February 2016 report and addresses (1) ORR's response to the increase in unaccompanied children, (2) how ORR cares for children in its custody and monitors their care, (3) how ORR identifies and screens sponsors for children, and (4) what is known about services, challenges, and the status of removal proceedings for children after they leave ORR custody. For its February 2016 report, GAO reviewed relevant federal laws and regulations, ORR policies, and ORR and Executive Office for Immigration Review data. GAO also visited nine ORR grantee facilities in three states selected to vary in the type of care provided, shelter size, and location, and conducted a random, non-generalizable review of 27 case files of children released from these facilities. GAO interviewed agency officials and community stakeholders in six counties that received unaccompanied children, representing diversity in geographic location, size, and demographics. In fiscal year 2014, nearly 57,500 children traveling without their parents or guardians (referred to as unaccompanied children) were apprehended and transferred to the care of the Department of Health and Human Services' Office of Refugee Resettlement (ORR). Most of these children were from Central America. GAO found that ORR was initially unprepared to care for that many children; however, the agency increased its bed capacity. Given the unprecedented demand for capacity in 2014, ORR developed a plan to help prepare it to meet fiscal year 2015 needs. The number of children needing ORR's care declined significantly through most of fiscal year 2015, but began increasing again toward the end of the summer.
Given the inherent uncertainties associated with planning for capacity needs, ORR's lack of a process for annually updating and documenting its plan inhibits its ability to balance preparations for anticipated needs while minimizing excess capacity. ORR relies on grantees to provide care for unaccompanied children, including housing and educational, medical, and therapeutic services. GAO's review of a sample of children's case files found that they often did not contain required documents, making it difficult to verify that all required services were provided. ORR revised its on-site monitoring program in 2014 to ensure better coverage of grantees. However, ORR was not able to complete all the visits it planned for fiscal years 2014 and 2015, citing lack of resources. By not monitoring its grantees consistently, ORR may not be able to identify areas where children's care is not provided in accordance with ORR policies and the agreements with grantees. ORR grantees conduct various background checks on potential sponsors prior to releasing children to them. These potential sponsors are identified and screened by the grantees as part of their responsibilities for the unaccompanied children in their care. The extent of the checks conducted depends on the relationship of the sponsor to the child. Between January 2014 and April 2015, ORR released nearly 52,000 children from El Salvador, Guatemala, or Honduras to sponsors. In nearly 90 percent of these cases, the sponsors were a parent or other close relative already residing in the United States. Sponsors do not need to have legal U.S. residency status. There is limited information available on post-release services provided to children after they leave ORR care. In part, this is because ORR is only required to provide services to a small percentage of children, such as those who were victims of trafficking. 
In May 2015, ORR established a National Call Center to assist children who may be facing placement disruptions, making post-release services available to some of them. Also, in August 2015, ORR began requiring well-being follow-up calls to all children 30 days after their release. ORR is collecting information through these new initiatives, but does not currently have a process to ensure that the data are reliable, systematically collected, or compiled in summary form. Service providers GAO spoke with also noted that some of these children may have difficulty accessing services due to the lack of bilingual services in the community, lack of health insurance, or other barriers. In its February 2016 report, GAO recommended that HHS (1) develop a process to regularly update its capacity plan, (2) improve its monitoring of grantees, and (3) develop processes to ensure its post-release activities provide reliable and useful summary data. HHS agreed with GAO's recommendations.
FMS receives payment records from and makes payments on behalf of most federal agencies. However, a number of federal agencies have their own disbursing authority. For example, USPS paid about $42 billion in salary and benefits to almost 800,000 career employees in calendar year 1999, and entered into more than 47,000 contracts with vendors in calendar year 1998, totaling almost $8 billion. DOD disbursed over $295 billion in fiscal year 2000, including about $150 billion in contractor and vendor payments and about $100 billion in salary and retirement payments. In addition, Medicare contractors processed over 900 million fee-for-service claims during fiscal year 2000, totaling nearly $175 billion. In addition to disbursing payments for various federal agencies, FMS provides centralized debt collection services for most federal agencies. To aid in federal debt collection, FMS has in place the Treasury Offset Program, which uses a centralized database of delinquent debts that have been referred for offset against federal payments. This database includes federal nontax debts and federal tax debts, as well as state tax debts and child support debts. FMS currently matches federal tax refunds, federal retirement and vendor payments, and certain federal salary and social security benefit payments against its database of delinquent debts, and when a match of both TIN and name control occurs, FMS offsets the payment, thereby reducing or eliminating the debt. FMS plans to include some non-FMS disbursed federal salary payments in the Treasury Offset Program in the latter half of 2001. A provision included in the Taxpayer Relief Act of 1997 enhanced IRS’ ability to collect delinquent federal tax debt by authorizing IRS to continuously levy up to 15 percent of certain federal payments made to delinquent taxpayers. FMS modified the Treasury Offset Program to enable IRS to electronically serve a tax levy to FMS once IRS has notified the delinquent taxpayer of the pending levy. 
In July 2000, IRS began adding tax debts to FMS’ database of delinquent federal debts, thus initiating the continuous levy program. For this program, FMS compares federal payee information from agency payment records with IRS’ accounts receivable records. When a match of both the TIN and name control occurs, FMS informs IRS of the match and IRS then notifies the taxpayer of the pending tax levy. If the taxpayer fails to take action to satisfy the tax debt within 30 days, such as by paying in full or entering into an installment agreement, IRS will then instruct FMS to begin levying 15 percent of subsequent payments made to the taxpayer or the exact amount of tax owed if it is less than 15 percent of the next payment. For payments disbursed on behalf of other agencies, FMS deducts the amount to be levied before making the payment, and the levied amount is then credited to IRS. In an April 2000 report, we estimated that IRS could potentially collect as much as $478 million annually through this program. Based on matching federal payments made by the agencies to IRS’ accounts receivable data, we estimate that including payments disbursed by USPS, DOD, and CMS in the continuous levy program could result in recovering at least $270 million annually from about 70,000 delinquent taxpayers. An additional $16 million in delinquent taxes could be recovered annually from about 656 vendors if IRS were to provide FMS with the different names these vendors have used for tax purposes when FMS matches vendor payment data against IRS’ accounts receivable data. Our analysis of IRS’ accounts receivable data as of June 30, 2000, showed that about 70,400 taxpayers received about $1.9 billion in payments--about $8.2 billion on an annualized basis--from either USPS, DOD, or CMS, and the TIN and name on their payment records exactly matched the TIN and name on IRS’ accounts receivable records. 
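The 15-percent rule described above reduces to a simple calculation: levy 15 percent of each payment, or the remaining tax balance when that is smaller. The following is a minimal illustrative sketch of that arithmetic; the function name, rounding choice, and structure are our own assumptions, not FMS’ actual system.

```python
def continuous_levy_amount(payment, tax_balance, rate=0.15):
    """Amount levied from one federal payment under the continuous
    levy program: up to 15 percent of the payment, capped at the
    taxpayer's remaining delinquent tax balance."""
    return min(round(rate * payment, 2), tax_balance)

# A $10,000 vendor payment against a $900 remaining balance: the full
# $900 is levied, since it is less than 15 percent of the payment.
assert continuous_levy_amount(10_000, 900) == 900

# Against a $50,000 balance, 15 percent of the payment ($1,500) is levied.
assert continuous_levy_amount(10_000, 50_000) == 1_500.00
```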
These taxpayers owed over $1 billion in delinquent taxes at the time they received these payments and met IRS’ criteria to be included in the continuous levy program. As shown in table 1, we estimate that IRS could recover as much as $277.5 million annually if these payments were included in the continuous levy program. Almost half of the $277.5 million in delinquent taxes that could be recovered would come from vendor payments. The rest would come from wage and salary payments to employees and retirement payments. The amount of delinquent taxes recovered annually could be somewhat lower because some taxpayers might make other arrangements with IRS to resolve their tax debts once they receive a notice of levy. For example, in an effort to avoid a pending tax levy, some taxpayers might contact IRS to arrange to pay their delinquent tax in full, enter into an installment agreement, or submit an offer-in-compromise. However, such actions on the part of the taxpayer in response to the levy notice would be an added benefit of the program. Although the amount of delinquent taxes recovered could be somewhat lower, as noted earlier, our estimates of the amount of delinquent taxes that might be recovered are understated because we did not receive data for over 50 percent of the Medicare vendor payments made for the time period we reviewed. In addition, we were unable to match about $3.4 billion in DOD vendor payments against IRS’ accounts receivable data because DOD’s payment records did not contain a TIN. According to DOD officials, DOD has recently increased its emphasis on requiring vendors to provide a TIN when registering to do business with DOD. Under procedures for vendor payments that are paid by FMS and currently subject to continuous levy, IRS’ file of accounts receivable data provided to FMS includes only the most recent name a vendor has used for tax purposes. 
As a result, FMS’ ability to exactly match the vendor name on payment records against IRS’ tax debts is limited. IRS already makes additional names for individual taxpayers included in its databases available to FMS for use in the existing continuous levy program. For example, if taxpayers change their name when they marry, the name used as a single person would be sent to FMS along with their married name. This is not the case for businesses. For vendor payments currently paid by FMS and thus included in the continuous levy program, if a business were to change its name on its federal tax return, IRS would provide FMS with the most current name in its records, but not the prior name. When making our overall estimates of delinquent taxes that could be recovered if USPS, DOD, and CMS Medicare vendor payments were included in the continuous levy program, we determined the amount of additional revenue that could be raised if IRS changed its policy and provided FMS with all of the names it has for vendors. In addition to the 70,400 taxpayers whose TIN and name on the payment records exactly matched the TIN and name on IRS’ accounts receivable records, we found 1,228 instances in which the TIN on the vendor payment records exactly matched the TIN on IRS’ accounts receivable records, but the name on the payment records did not exactly match the name on IRS’ records. For 656 of the 1,228 vendors, we found different names used by these vendors in an IRS database that showed they were in fact the delinquent taxpayers. There were no additional names in the IRS database for the remaining 572 vendors. The 656 taxpayers for which there were additional names owed about $26 million in delinquent taxes. We estimate that IRS could recover about $16 million annually if the different names it has for vendors were provided to FMS for the continuous levy program. 
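The benefit of sharing every name a business has used can be illustrated with a small matching sketch. This is a simplification under stated assumptions: FMS actually matches on an abbreviated name control rather than the full name, and all data structures, TINs, and business names below are hypothetical.

```python
def find_levy_matches(payments, receivables):
    """Yield payment records whose TIN and name both match a delinquent
    account. `receivables` maps each TIN to the set of names IRS has on
    file for that taxpayer; supplying prior business names enlarges that
    set and therefore increases the number of levy matches."""
    for tin, name, amount in payments:
        if name in receivables.get(tin, set()):
            yield (tin, name, amount)

# IRS supplies both the current and a prior name for one TIN.
receivables = {"12-3456789": {"ACME WIDGETS INC", "ACME HOLDINGS LLC"}}
payments = [
    ("12-3456789", "ACME HOLDINGS LLC", 25_000.00),  # prior name: matches
    ("12-3456789", "ACME GLOBAL CORP", 5_000.00),    # unknown name: no match
]
assert list(find_levy_matches(payments, receivables)) == [
    ("12-3456789", "ACME HOLDINGS LLC", 25_000.00)
]
```

Without the prior name "ACME HOLDINGS LLC" in the receivables set, the first payment would go unmatched even though the TINs agree, which is the gap in the current program for business taxpayers.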
If IRS were to provide FMS with the different names it has for business taxpayers, this would benefit the current continuous levy program by increasing the instances in which FMS could match the name in both records, as required before a levy can be made. IRS officials agreed and indicated that providing such a file of additional business names to FMS could be done and would be well worth the effort. FMS officials indicated they were in favor of receiving additional business names for use in the continuous levy program. Whether federal payments made by USPS, DOD, and CMS could be included in the continuous levy program, and if so when, varied by agency and type of payment. FMS plans to receive and include USPS and DOD salary and wage payments, as well as military retirement payments, in the Treasury Offset Program within the next 3 years, thus making them available for continuous levy and enabling IRS to begin collecting about half of the $277.5 million in potential annual tax recoveries mentioned earlier. Vendor payments could also be included in the continuous levy program, with the full range of USPS payments possibly included in less than a year, DOD payments possibly included within 3 years, and CMS payments possibly included within about 5 years. However, with the exception of some DOD vendor payments, officials from FMS, IRS, and the three agencies have not discussed when and how all of these agencies’ vendor payments could be included in the continuous levy program and whether practical options exist to include some portion of the vendor payments in the program before all such payments are available. FMS officials stated that their discussions with USPS have focused on including salary payments in the Treasury Offset Program rather than vendor payments. USPS plans to provide employee salary payments to FMS for inclusion in the Treasury Offset Program, and FMS is working with USPS to develop a specific implementation date. 
According to FMS officials, once USPS salary payments are available for the Treasury Offset Program, they could be included in the continuous levy program about a month later. USPS officials stated that, although they have not had any recent discussions with FMS about including vendor payments in the Treasury Offset Program, they do not believe any obstacles would prevent making vendor payments available to FMS, since all USPS vendor payments are disbursed from one payment center. Officials indicated that within about 4 months of FMS’ requesting them to do so, they could likely be ready to provide vendor payments to FMS and to levy payments for which FMS indicates a match with IRS’ accounts receivable data. USPS officials did say that levying vendor payments could present some challenges. For example, USPS vendor payments generally are not made on a particular schedule, but rather, are controlled by terms specified in individual contracts. As a result, unlike biweekly salary payments, USPS disburses vendor payments daily throughout the business week. Therefore, vendor data exchanges between USPS and FMS would likely have to occur with greater frequency than salary data exchanges. However, USPS officials stated that the Prompt Payment Act requires that vendor payments be deferred until the pay cycle immediately preceding the payment due date. This should provide an adequate interval to offset such payments, particularly if the vendor data exchanges with FMS were to occur either weekly or biweekly. USPS officials also stated that USPS does not currently offset vendor payments to recover debts owed to USPS, and therefore, specific offset procedures would have to be developed. However, these officials were confident that they could modify the USPS system to enable them to flag any vendor payments requiring offset identified through the Treasury Offset Program. 
They further stated that such an offset would require manual intervention to make the offset and reconcile the vendor’s account. Although USPS officials said that they could make vendor payments available to FMS within about 4 months of FMS’ requesting such data, USPS and FMS officials have not discussed specific arrangements for doing so, such as when FMS could be ready to receive USPS vendor payment data or how long it might take USPS to develop procedures for performing such offsets. FMS is working with DOD to include civilian, military retirement, and military active duty payments in the Treasury Offset Program, thus eventually making these types of payments available for the continuous levy program. According to DOD officials, the approximate timeframes that have been established for providing DOD payments to FMS are as follows: DOD civilian salary payments in the latter part of 2001, DOD military retirement payments in 2002, and DOD military active duty payments in 2003. DOD has also initiated preliminary discussions with FMS about providing some vendor payments to FMS. These payments are all made from one payment system maintained at one DOD Defense Finance and Accounting Service (DFAS) location and accounted for about 48 percent of all DOD vendor payments made in fiscal year 2000. However, DOD officials have not specifically discussed providing other vendor payments to FMS in the near future, and they have concerns regarding the current capability to make other vendor payments available for the continuous levy program because of DOD’s decentralized vendor payment systems. For example, vendors providing goods and services to three of the military branches (Army, Air Force, and Navy) are paid from separate vendor payment systems maintained at various DFAS locations. In addition, there are separate vendor payment systems for processing certain specialty items, such as fuels and commissary resale products. 
DOD officials stated that DFAS staff do not currently have the capability to track multiple payments made from the various vendor payment systems to a particular vendor. As a result, if they were to provide vendor payments to FMS from these decentralized payment systems, DOD officials were concerned that there would be a risk of offsetting more in payments than a vendor might owe in delinquent taxes. Although DOD officials expressed concerns about offsetting more in payments than a vendor might owe in delinquent taxes, IRS officials indicated there are controls in the continuous levy program to prevent such overpayments. For example, IRS provides FMS with a weekly file updating the balance due for each account subject to continuous levy. In addition, FMS has the capability to update the balance due for each account after each payment is levied, thus enabling FMS to identify when a tax debt has been reduced to zero. In addition, selected staff in each IRS office are authorized to directly access FMS’ levy database to rescind a levy if necessary, such as for taxpayers subject to a continuous levy who decide either to fully pay the tax debt or to enter into an installment agreement. FMS and IRS officials have not discussed these controls with DOD to determine whether they would mitigate DOD’s overpayment concerns and pave the way for other types of vendor payments to be provided to FMS for the continuous levy program, in addition to those vendor payments currently under consideration. DOD is currently developing a centralized vendor payment system that could increase its capability to eventually provide all vendor payments to FMS. According to DOD officials, the multiple vendor payment systems currently in use are to be replaced by a single system known as the Defense Procurement Payment System. The latest DOD estimate indicates that the initial phase for implementing the new system will begin in the latter part of fiscal year 2001. 
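The overpayment controls IRS described amount to levying against a running balance: the balance due is updated after each levy, so cumulative levies across multiple payment systems can never exceed the debt. The following is a hedged sketch of that logic under stated assumptions; the function and variable names are illustrative and do not reflect IRS’ or FMS’ actual systems.

```python
def apply_levies(payments, tax_balance, rate=0.15):
    """Levy a stream of payments against one delinquent tax balance.
    Each levy is capped by the remaining balance, which is updated
    after every payment; levying stops once the balance reaches zero."""
    levied = []
    for payment in payments:
        take = min(rate * payment, tax_balance)
        tax_balance -= take
        levied.append(take)
        if tax_balance == 0:
            break
    return levied, tax_balance

# Three $10,000 payments against a $3,200 debt: $1,500, $1,500, then
# only the $200 remainder -- the total levied never exceeds the debt.
levies, remaining = apply_levies([10_000, 10_000, 10_000], 3_200)
assert levies == [1_500.00, 1_500.00, 200.00] and remaining == 0
```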
DOD officials estimate that the new system may be fully operational by the latter part of fiscal year 2003 or the early part of fiscal year 2004. However, they indicated that this is a “best-case” scenario. FMS and CMS have not held any discussions related to including Medicare vendor payments in the continuous levy program. CMS and Medicare contractors we spoke with agreed that including all Medicare payments in the continuous levy program would not be possible for several years owing to the decentralized payment system in which the Medicare program operates. CMS administers the Medicare program through about 50 health care contractors, which process and pay over 900 million fee-for-service claims totaling nearly $175 billion annually. These contractors are responsible for verifying the accuracy of the name and TIN used by health care providers that bill the Medicare program for reimbursement. Thus, the ability to identify and subsequently levy the payments made to Medicare providers who owe federal taxes would depend on establishing effective coordination between IRS and FMS and each of the contractors that pay the claims. The possibility of including Medicare vendor payments in the continuous levy program is further complicated because CMS contractors currently use one of six different computerized systems to process and pay claims. Although CMS eventually plans to have all of its contractors use one of three standardized claims processing systems, this consolidation is not expected to be completed before 2004. The contractors responsible for maintaining the three standardized systems believe that integrating a continuous levy process into Medicare claims processing systems is possible, but the systems would likely have to be modified and tested before implementation. Planned enhancements to the CMS accounting and provider enrollment systems may improve the likelihood that Medicare vendor payments could be included in the continuous levy program in the future. 
For example, in order to comply with federal financial management systems requirements, the agency is developing the CMS Integrated General Ledger and Accounting System. As currently envisioned by CMS, this system would contain detailed information on each Medicare claim paid, and as such, might offer FMS and IRS a central point of coordination for continuously levying Medicare vendor payments. Also, CMS is developing a centralized database of updated information on all health care providers that bill the Medicare program. This system is intended to help ensure that only qualified providers with a valid TIN enroll in and receive payments from the Medicare program. Once fully operational, this system is expected to interface with other CMS systems, thereby helping to ensure that the name and TIN used by providers have been validated by IRS. Neither system is scheduled to be fully operational before late 2006. Although these new systems may improve the likelihood that CMS vendor payments could be continuously levied in the future, FMS and CMS officials have not held discussions to ensure this result. Medicare contractors already offset payments to vendors for various reasons, such as recovery of previously overpaid claim amounts, which could result from either inadvertent billing errors or intentional misrepresentations. However, FMS and CMS officials have not explored whether these processes for offsetting vendor payments could support including some CMS vendor payments in the continuous levy program before late 2006. In addition to the specific levy authority IRS has through the continuous levy program under section 6331(h), IRS has general levy authority under Internal Revenue Code section 6331 to collect federal tax debts by issuing a levy notice directly to a federal agency. The continuous levy program provides IRS with an automated process for serving tax levies and collecting delinquent taxes through FMS. 
On the other hand, in order to levy payments under its general levy authority, IRS must identify that an agency is making payments to a delinquent taxpayer. Unlike the 15-percent levy amount limitation for the continuous levy program, under its general levy authority, IRS can levy up to 100 percent of a taxpayer’s property and rights to property in some cases. IRS currently uses its general levy authority to levy federal salary and retirement payments. However, according to officials, IRS uses its general levy authority less frequently to levy federal vendor payments, partly because IRS has limited ability to identify and serve levies against vendor payments. According to IRS officials, almost all information IRS has on vendor payments comes from annual information returns that federal agencies and contractors are required to file for such payments. It takes IRS several months to process information returns and make them available to collection staff so they can identify potential levy sources. According to IRS officials, information return data are of little use because there is no certainty that an individual or business that received payments in a past year would receive payments in the current year. IRS officials acknowledged that obtaining current information on taxpayers that may be receiving DOD and CMS vendor payments might give IRS collection staff an opportunity to levy such payments under its general levy authority until such time as these payments could be included in the continuous levy program. DOD and CMS have databases that could be used to provide IRS with current information concerning individuals and businesses receiving vendor payments. However, IRS has not requested such information from these agencies. 
According to DOD officials, a DOD Central Contractor Register currently includes information on over 160,000 vendors registered to do business with DOD, including a vendor’s TIN and name, and an extract of this information could be provided periodically to IRS. Medicare contractors we spoke with stated that it may be possible to provide periodic extracts of payment data on recently paid provider claims, while CMS officials indicated that extracts from centralized agency databases, such as the National Claims History File, could also be made available to IRS. Information from each of these databases could be useful to IRS for identifying a current source against which to serve a levy under IRS’ general levy authority. For example, IRS could arrange to obtain information from these agencies concerning vendors that currently receive periodic payments and when such payments are made, and if such vendors have federal tax delinquencies, work out a schedule for levying subsequent payments. As with IRS’ other collection efforts, resource constraints and other collection priorities may limit the amount of delinquent taxes that IRS could recover from DOD and CMS vendors using its general levy authority. However, until all such vendor payments could be included in the continuous levy program, obtaining periodic vendor information from these agencies could enable IRS to begin collecting some portion of the delinquent taxes owed by these vendors. IRS’ mission includes providing taxpayers with top quality service by applying the tax law with integrity and fairness to all. Until more types of federal payments are available, the current continuous levy program results in unequal treatment of delinquent taxpayers depending on whether their federal payments are made by FMS on behalf of other agencies or directly by the agencies themselves. 
Delinquent taxpayers receiving payments from FMS generally are subject to the continuous levy program; those receiving payments directly from federal agencies are not, and IRS is limited to using its general levy authority in order to levy some of these non-FMS payments. Although practical issues may impede achieving similar treatment of all delinquent taxpayers receiving federal payments, progress could be made and substantial additional revenues could be collected—in fairness to those who properly pay their taxes. FMS has plans for including USPS salary and DOD salary and retirement payments in the continuous levy program. Similar plans do not exist, however, for including all vendor payments from USPS, DOD, and CMS in the continuous levy program. Discussions among FMS, IRS, and the agencies have the potential to ensure that all of these payments are included in the continuous levy program as soon as practical, and for possibly accelerating the inclusion of certain types or categories of vendor payments. Further, the effectiveness of the current continuous levy program and its expansion to other payments could be enhanced if IRS were to begin sharing the different names that businesses use for tax purposes with FMS. This would treat businesses more similarly to how IRS already handles individual taxpayers in the continuous levy program. In the interim, until the continuous levy program can be extended to more of the payments made directly by agencies, IRS’ use of its existing general levy authority could be improved to better ensure that all delinquent taxpayers receiving federal payments are subject to potential collection action. DOD and CMS have available data that could be shared with IRS to increase IRS’ ability to identify those taxpayers whose federal payments could be practically and effectively levied under the general levy program. 
To enhance the value of agency payment data that are available for the continuous levy program, we recommend that the Commissioner of Internal Revenue provide FMS with a file of all business names that IRS has for each business taxpayer that owes federal taxes and meets the program criteria. To increase the potential for collecting delinquent federal taxes owed by federal vendors, we recommend that the Commissioner of Internal Revenue and the Commissioner of the Financial Management Service jointly initiate specific discussions with USPS, DOD, and CMS to develop plans for obtaining vendor payments from the respective agencies for the continuous levy program. The discussions should cover plans for including all of the agencies’ vendor payments in the continuous levy program, as well as options for including some of their vendor payments in the program on an accelerated basis. To ensure that IRS has updated information on vendor payments to aid in identifying possible levy sources for use under its general levy authority, we recommend that the Commissioner of Internal Revenue work with DOD and CMS officials to develop the means for these agencies to periodically provide IRS with vendor information that is more current than that which IRS receives now through annual information returns. We received written comments on our draft report from the Commissioner of Internal Revenue (see app. II) and the Commissioner of the Financial Management Service (see app. III). Both the IRS and FMS Commissioners offered factual updates, clarifications, or technical comments that we have incorporated throughout this report where appropriate. The Commissioner of Internal Revenue generally agreed with our recommendations. 
Regarding our recommendation that the Commissioner of the Financial Management Service initiate discussions with USPS, DOD, CMS, and IRS officials to develop plans for obtaining vendor payments from the respective agencies for the continuous levy program, the Commissioner of FMS disagreed that initiating discussions with these agencies was FMS’ responsibility. Rather, the Commissioner stated that it was IRS’ responsibility to initiate and jointly schedule with FMS the implementation of the continuous levy program for DOD, USPS, and CMS vendor payments. The Commissioner further stated that once IRS is ready to develop this process, FMS will work with the agencies and IRS to make the necessary system changes to allow IRS to continuously levy these payments. We agree with the FMS Commissioner’s view that IRS has the responsibility to participate in leading discussions for implementing the continuous levy program for vendor payments. However, because FMS is a principal component in developing the necessary processes to effectively implement continuous levies, we also believe that FMS must be equally involved in the discussions on extending the continuous levy program to vendor payments paid by agencies other than FMS. Accordingly, we modified our recommendation to state that the IRS and FMS Commissioners should jointly initiate specific discussions with USPS, DOD, and CMS for this purpose. Having been made aware of this modification to our recommendation before providing comments, the IRS Commissioner agreed in his written comments to participate with FMS in discussions with the agencies and to assist FMS in developing plans for obtaining vendor payments for inclusion in the continuous levy program. To enhance the value of agency payment data available to the continuous levy program, the Commissioner of Internal Revenue agreed to provide FMS with a file of all business names that IRS has for each business taxpayer that owes federal taxes and meets the program criteria. 
The Commissioner stated that a draft Request for Information Services has been submitted to begin the formal process necessary to make this change, and the change is expected to be completed by January 2003. To ensure that IRS has updated information on vendor payments to aid in identifying possible levy sources for use under its general levy authority, the Commissioner agreed to pursue the costs and benefits of securing possible levy sources from such agencies as DOD as well as pursuing more frequent levy source updates from internal IRS sources. We also received written comments from the Deputy Chief Financial Officer, Office of the Under Secretary of Defense (see app. IV), and oral comments from a representative of the United States Postal Service, in which they generally agreed with our recommendations. In addition, we received technical comments from the Acting Deputy Administrator of the Centers for Medicare & Medicaid Services, in which he stressed that CMS vendor payments could not be included in the continuous levy program until a new CMS integrated accounting system is completed. Given the substantial delinquent taxes that could potentially be recovered from CMS vendors and that CMS contractors already offset vendor payments for various other reasons, we believe that discussions between IRS, FMS, and CMS should explore whether some portion of the vendor payments could be included on an accelerated basis. As agreed with your offices, unless you announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies to the Ranking Minority Member, House Committee on Ways and Means; Ranking Minority Member, Subcommittee on Oversight; and the Chairman and Ranking Minority Member, Senate Committee on Finance. 
We will also send copies to the Commissioner of Internal Revenue, Commissioner of the Financial Management Service, Secretary of Defense, Administrator of the Centers for Medicare & Medicaid Services, Postmaster General, and other interested parties. Copies of this report will also be made available to others upon request. If you have any questions concerning this report, please contact Ralph Block at (415) 904-2000 or me at (202) 512-9110. Key contributors to this work are listed in appendix V. Our objectives in this report were to (1) determine the number of delinquent taxpayers receiving federal payments from the United States Postal Service (USPS), Department of Defense (DOD), and Centers for Medicare & Medicaid Services (CMS) that would be affected and the tax debt that might be recovered if they were to be included in the continuous levy program; (2) determine whether these types of payments could be included in the continuous levy program and the timeframes for doing so; and (3) identify other actions that could be taken to enhance IRS’ ability to manually levy federal payments to delinquent individuals and businesses that are not currently included in the continuous levy program. To determine the number of delinquent taxpayers receiving federal payments from USPS, DOD, and CMS that would be affected and the tax debt that might be recovered if they were included in the continuous levy program, we obtained and matched IRS’ accounts receivable records as of June 30, 2000, that met IRS’ continuous levy program criteria with agency and contractor payment records as follows: For wage and salary payments, USPS provided payments for a biweekly pay period made on June 23, 2000; for vendor payments, USPS provided payments made during the April through June 2000 quarter. 
For DOD military salary, retirement, and reserve payments, the DOD Defense Manpower Data Center provided payments made for the month of June 2000; for DOD civilian salary, the DOD Defense Management Data Center provided payments made for the biweekly pay period ending July 1, 2000; for DOD vendor and contractor payments, the Defense Finance and Accounting Service provided payments made during the April through June 2000 quarter. For CMS Medicare vendor payments, Medicare contractors provided payments made during the April through June 2000 quarter. For payments that matched on both taxpayer identification number (TIN) and name, we calculated either 15 percent of the payment or the actual amount of tax owed if it was less than 15 percent of the payment to determine the amount that could be levied. All estimates of the delinquent taxes that might be recovered throughout this report have been annualized. Although some taxpayers might take actions to avoid a continuous levy, we believe our estimates of the tax debt that might be recovered are understated because we did not receive data for over 50 percent of Medicare payments made during the April through June 2000 quarter. In addition, we were unable to match about $3.4 billion in DOD vendor payments against IRS’ accounts receivable data because the payment records did not contain a TIN. Based on our prior work involving the continuous levy program, we were aware that problems with information contained in vendor payment records could make such records unsuitable for matching against IRS’ accounts receivable file, thus reducing the amount of tax debt that might be recovered. To identify additional debt that could be collected if problems with vendor payment records were corrected, we analyzed agency payment records to identify instances of a missing or inconsistent payee TIN or name. 
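The matching and levy computation described above (match payment records to accounts receivable on both TIN and name, then take 15 percent of the payment or the full balance due, whichever is less) can be sketched as follows. This is an illustrative sketch only: the record layouts and field names are assumptions for illustration, not IRS’ or the agencies’ actual data formats.

```python
# Illustrative sketch of the matching and levy-amount computation
# described above. The dictionary record layouts and field names are
# assumptions, not IRS' or the agencies' actual data formats.

def levy_amount(payment, balance_due, rate=0.15):
    """Amount subject to continuous levy: 15 percent of the payment,
    or the full balance due if that is less than 15 percent."""
    return min(rate * payment, balance_due)

def match_and_levy(payment_records, receivable_records):
    """Match payment records to accounts receivable on both TIN and
    name; a record with a missing TIN or an inconsistent name never
    matches and so yields no levy."""
    balances = {(r["tin"], r["name"]): r["balance_due"]
                for r in receivable_records}
    levies = []
    for p in payment_records:
        key = (p["tin"], p["name"])
        if key in balances:
            levies.append((key, levy_amount(p["amount"], balances[key])))
    return levies
```

For example, a $1,000 payment to a taxpayer owing $100 yields a $100 levy, since the balance due is below the 15 percent cap of $150; a record whose TIN matches but whose name does not is excluded entirely, which is why missing or inconsistent payee data in vendor records reduces the tax debt that might be recovered.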
We selected all instances in which the TIN in the payment records matched the TIN in IRS’ accounts receivable records, but the name in the payment records did not match the name in IRS’ records. For these instances, we then reviewed IRS’ records to determine whether it had additional information to indicate that the payee was in fact the delinquent taxpayer in question. To determine whether USPS, DOD, and CMS payments could be included in the continuous levy program and the timeframes for doing so, we interviewed IRS officials responsible for the continuous levy program. We also interviewed Financial Management Service (FMS) officials involved in recent discussions with various agencies in an attempt to include non-Treasury-disbursed payments in the Treasury Offset Program. In addition, we interviewed officials from USPS, DOD, and CMS as well as selected Medicare contractors responsible for processing the various types of payments. To identify actions that could be taken to enhance IRS’ ability to manually levy federal payments from delinquent individuals and businesses that are not included in the continuous levy program, we discussed this issue with IRS officials and officials from USPS, DOD, and CMS. We identified various agency databases that could be used to provide IRS with updated vendor payment sources. We also discussed IRS’ current levy procedures with IRS officials, and reviewed the related tax law governing these procedures. We did our work at IRS, FMS, and USPS headquarters in Washington, D.C.; DOD headquarters in Arlington, VA; CMS headquarters in Baltimore, MD; Defense Finance and Accounting Service Centers in Columbus and Cleveland, OH, and Denver, CO; Defense Manpower Data Center in Seaside, CA; and the CMS Regional Office in San Francisco, CA. We also interviewed Medicare contractors located in Alabama, California, Florida, Maryland, New York, North Dakota, Pennsylvania, Texas, and Wisconsin. 
The following are GAO’s comments on the Internal Revenue Service’s letter dated July 16, 2001.

1. In response to IRS’ concern that our text may have given the impression that IRS does not levy any federal payments that are not subject to the continuous levy program, we modified footnote 3 to recognize that IRS does levy such payments under its general levy authority.

2. IRS’ suggested change has been incorporated into the text.

3. IRS’ suggested change is included in footnote 10.

4. We deleted “indirect” from our text. While it is debatable whether the benefit would be direct or indirect, levy notices do sometimes result in taxpayers making other arrangements to resolve their tax liability.

5. IRS’ suggested change has been incorporated into the text.

6. In response to IRS’ concern with our use of the term “disparate treatment” of taxpayers in our conclusions, we have revised our text to state that whether or not taxpayers are included in the continuous levy program is predicated in part on whether their federal payments are made by FMS or directly by other agencies. We believe that this results in unequal treatment of delinquent taxpayers who receive federal payments and that this will only be corrected when more types of federal payments are available to the program.

In addition to those named above, Wendy Ahmed, Tom N. Bloom, Robert C. McKay, Ellen Rominger, James J. Ungvarsky, and Elwood D. White made key contributions to this report.
The Internal Revenue Service (IRS) seeks to apply the law fairly to all taxpayers. Under the continuous levy program, however, taxpayers who receive federal payments are treated differently depending on whether the payments are made by the Financial Management Service (FMS) on behalf of other agencies or directly by the agencies themselves. Delinquent taxpayers receiving payments from FMS generally are subject to continuous levy, while those receiving payments directly from federal agencies are not. Although it may prove impractical to treat similarly all delinquent taxpayers who receive federal payments, progress--and substantial additional revenues--could be achieved in this area. FMS plans to include salaries at the U.S. Postal Service and salaries and retirement payments at the Defense Department (DOD) in the continuous levy program. There are similar plans to include all vendor payments from the Postal Service, DOD, and the Centers for Medicare and Medicaid Services. Discussions among FMS, IRS, and the agencies could help ensure that all of these payments are included in the continuous levy program as soon as possible. These discussions could also speed the inclusion of some categories of vendor payments. The continuous levy program could also benefit if IRS were to begin sharing with FMS the different names that businesses use for tax purposes--an approach that IRS already uses for individual taxpayers in the program. In the meantime, until the program is expanded to include more direct payments from agencies, IRS could take steps to ensure that all delinquent taxpayers receiving such payments are subject to potential collection activity. DOD and CMS have data on hand that they could share with IRS to strengthen IRS' ability to identify taxpayers whose federal payments could be levied under the program.
Top leadership in agencies across the federal government must provide the committed and inspired attention needed to address human capital and related organizational transformation issues. Leaders must not only embrace reform, they must also integrate the human capital function into their agencies’ core planning and business activities. Senior executive leadership is especially key today as the federal government undertakes significant transformation efforts to address key challenges. OPM’s 2008 Federal Human Capital Survey results showed that the government needs to establish a more effective leadership corps. Specifically, of the employees responding to the survey, a little over half reported a high level of respect for their senior leaders, and a little less than half were satisfied with the information they receive from management on what is going on in the organization. The percentage of positive results for these questions has increased slightly since the last survey was conducted in 2006. OPM plays a key role in fostering and guiding improvements in all areas of strategic human capital management in the executive branch. As part of its key leadership role, OPM can assist in—and, as appropriate, require—the building of infrastructures within agencies needed to successfully implement and sustain human capital reforms and related initiatives. OPM can do this in part by encouraging continuous improvement and providing appropriate assistance to support agencies’ efforts in areas such as acquiring, developing, and retaining talent. We have reported that OPM has made commendable efforts in transforming itself from less of a rule maker, enforcer, and independent agent to more of a consultant, toolmaker, and strategic partner in leading and supporting executive agencies’ human capital management systems. 
However, OPM has faced challenges in its internal capacity to assist and guide agencies’ readiness to implement change, such as the certification process for the senior executive performance-based pay system, and will need to address these challenges. Specifically, in October 2007, we reported that OPM has strategies in place, such as workforce and succession management plans, that are aligned with selected leading practices relevant to the agency’s capacity to fulfill its strategic goals. However, at the time, OPM lacked a well-documented agencywide evaluation process of some of its workforce planning efforts. In response to our recommendation, OPM recently developed an automated tracking system to monitor training so that agency officials could target it on priority areas. OPM also faces challenges in modernizing the paper-intensive processes and antiquated information systems it uses to support the retirement of civilian federal employees through the retirement modernization program. This modernization program is important because OPM estimates a growing volume of retirement processing over the next several years given projected retirement trends. In January 2008, we reported that the agency’s management of this initiative in areas that are important to successful deployment of new systems had not ensured that components would perform as intended. For example, at that point in time, OPM had not addressed weaknesses in its approaches to testing system components and managing system defects to ensure that the new system components will perform as intended. In addition, OPM had yet to develop a reliable program cost estimate and the measurement baseline against which program progress can be determined. To date, the agency continues to have retirement modernization planning and management shortcomings that need to be addressed. The results of our most recent review of the modernization program are expected to be released by the end of April 2009. 
To help support federal agencies with expanded responsibilities under the Recovery Act, OPM has provided information, tools, and training to help address these new human capital challenges and ensure that agencies acquire the talent they need. For example, in March 2009, OPM held an interagency forum on approaches to meet the Recovery Act’s human capital management support requirements. At that event, OPM provided information on the various human capital flexibilities available to agencies for hiring the necessary employees, such as 30-day emergency appointments, and on how OPM can provide assistance. In addition, OPM has begun facilitating coordination with the Federal Executive Boards across the nation to share agency plans and activities for the Recovery Act implementation. Areas of coordination include shared approaches to filling human capital needs and ensuring coordination of agency programs to avoid duplication. Congress also recognized that increased attention to strategic human capital management was needed in federal agencies. In 2002, Congress created the chief human capital officer (CHCO) position in 24 agencies to advise and assist the head of the agency and other agency officials in their strategic human capital management efforts. The CHCO Council—chaired by the OPM Director—advises and coordinates the activities of members’ agencies, OPM, and the Office of Management and Budget (OMB) on such matters as the modernization of human resources systems, improved quality of human resources information, and legislation affecting human resources operations and organizations. The council, which has been in operation for nearly 6 years, has organized itself to address key current and emerging human capital issues. 
For example, in its fiscal year 2008 annual report to Congress, the council identified three emerging issues: (1) managing the public expectations of the federal response to highly complex issues, (2) building and sustaining federal employee leadership, and (3) transforming the human resources profession to meet challenges. Its subcommittee structure is intended to align with the overarching strategic human capital initiatives affecting the federal government and includes subcommittees on hiring and succession planning, the human capital workforce, and human resources line of business. OPM works with the CHCO Council to develop and disseminate human capital guidance and relies upon the council members to communicate OPM policy and other human capital information throughout their agencies. For example, we recently reported that inquiries from the council about how to request a waiver to rehire annuitants without reducing their salaries led OPM officials to develop a template for agencies to use in submitting these requests. OPM officials see their relationship with the council and the agencies it represents as a partnership and shared responsibility to ensure that the latest guidance and practices are disseminated throughout the agencies. In addition to the council meetings, the CHCO Council Training Academy is a forum for CHCOs and other agency officials to discuss human capital issues and share best practices. OPM has invited all levels of agency officials—not just CHCOs—to attend the academy sessions when relevant topics were featured. For example, over the last 2 years, the council has held several academy sessions related to Senior Executive Service (SES) performance management and pay systems and lessons learned from the governmentwide SES survey results. 
Strategic human capital planning that is integrated with broader organizational strategic planning is critical to ensuring that agencies have the talent and skill mix they need to address their current and emerging human capital challenges, especially as the federal government faces a retirement wave. Agencies must determine the critical skills and competencies necessary to achieve programmatic goals and develop strategies that are tailored to address any identified gaps. Further, agencies are to develop strategic human capital plans with goals, objectives, and measures and report their progress toward these goals and objectives in annual reports to OPM as required by OPM’s Human Capital Assessment and Accountability Framework. We have found that leading organizations go beyond a succession planning approach that focuses on simply replacing individuals and instead engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future organizational capacity to obtain or develop the knowledge, skills, and abilities they need to carry out their missions. For example, we recently reported on the Social Security Administration’s (SSA) use of information technology in projecting future retirements and identifying the necessary steps to fill these gaps. Specifically, SSA developed a complex statistical model that uses historical data to project who is likely to retire, and SSA uses these projections to estimate gaps in mission-critical positions and to identify what components of the agency could be most affected by the upcoming retirements. With these estimates, the agency develops action plans focused on hiring, retention, and staff development. As a result of using these models, SSA has developed targeted recruitment efforts that extend to a broad pool of candidates. 
To create this pool, SSA is also beginning to reach out to older workers in order to achieve one of its diversity goals—attracting a multigenerational workforce—by developing recruiting material featuring images of older and younger workers and offering a phased retirement program, among other things. An example of the federal government’s strategic human capital planning challenges involves its acquisition workforce. In 2007, we testified that the acquisition workforce’s workload and the complexity of its responsibilities have been increasing without adequate attention to the workforce’s size, skills and knowledge, and succession planning. Over the years, a strategic approach had not been taken across government or within agencies to focus on workforce challenges, such as creating a positive image essential to successfully recruiting and retaining a new generation of talented acquisition professionals. In addition, we recently reported that the Department of Defense (DOD) lacks critical departmentwide information to ensure its acquisition workforce is sufficient to meet its national security mission. As a result, we made several recommendations to DOD aimed at improving DOD’s management and oversight of its acquisition workforce, including the collection of data on contractor personnel. The challenges agencies are facing with managing acquisitions, including sustaining a capable and accountable acquisition workforce, contributed to GAO’s designation of the management and use of interagency contracting as a governmentwide high-risk area in 2005. Further, in our most recent high-risk update, acquisition and contract management remains a high-risk area at three agencies—DOD, the Department of Energy, and the National Aeronautics and Space Administration (NASA)—as does DOD’s weapon system acquisition. 
Addressing these challenges will require sustained management attention and leadership at both the agency level and from organizations such as OMB and its Office of Federal Procurement Policy. In May 2008, we reported that the Centers for Disease Control and Prevention (CDC) had made improvements in its strategic human capital planning, but the agency should take a more strategic view of its contractor workforce—more than one-third of its workforce. For example, CDC conducted a preliminary workforce analysis to determine the skills and competencies needed to achieve the agency’s mission and goals, including identifying skill and competency gaps. While the agency had not completed its analyses of skill and competency gaps for the occupations it deemed most critical when the strategic human capital management plan was developed, at the time of our report, the agency was completing these analyses. CDC’s strategic human capital management plan did not address the challenge of managing a blended workforce with a large percentage of contractors working with federal staff. We reported that without addressing this challenge CDC’s plan would not give the agency a strategic view of its governmental and contractor workforce and thus might not be as useful as it could be in assisting the agency with strategic human capital planning for its entire workforce. In response to our recommendation to address this challenge in its plan, CDC’s most recent update to its strategic human capital management plan includes an effort to develop, implement, and evaluate strategies to address management of contractors as part of a blended workforce. Faced with a workforce that is becoming more retirement eligible and the need for a different mix of knowledge, skills, and competencies, it is important that agencies strengthen their efforts and use of available flexibilities from Congress and OPM to acquire, develop, motivate, and retain talent. 
For years it has been widely recognized that the federal hiring process all too often does not meet the needs of (1) agencies in achieving their missions; (2) managers in filling positions with the right talent; and (3) applicants for a timely, efficient, transparent, and merit-based process. In short, the federal hiring process is often an impediment to the very customers it is designed to serve in that it makes it difficult for agencies and managers to obtain the right people with the right skills, and applicants can be dissuaded from public service because of the complex and lengthy procedures. In recent years, Congress and OPM have taken a series of important actions to improve recruiting and hiring in the federal sector. For example, Congress has provided agencies with enhanced authority to pay recruitment bonuses and with the authority to credit relevant private sector experience when computing annual leave amounts. In addition, Congress has provided agencies with hiring flexibilities that (1) permit agencies to appoint individuals to positions through a streamlined hiring process where there is a severe shortage of qualified candidates or a critical hiring need, and (2) allow agency managers more latitude in selecting among qualified candidates through category rating. As the federal government’s central personnel management agency, OPM has a key role in helping agencies acquire, develop, retain, and manage their human capital. In the areas of recruiting and hiring, OPM has, for example, done the following.

Authorized governmentwide direct-hire authority for veterinarian medical officer positions given the severe shortage of candidates for these positions. Recently, we reported that despite a growing shortage of veterinarians, the federal government does not have a comprehensive understanding of the sufficiency of its veterinarian workforce for routine program activities. In response to our findings, OPM granted direct-hire authority for these positions governmentwide.

Launched an 80-day hiring model to help speed up the hiring process, issued guidance on the use of hiring authorities and flexibilities, and developed a Hiring Tool Kit to assist agency officials in determining the appropriate hiring flexibilities to use given their specific situations.

Established standardized vacancy announcement templates for common occupations, such as secretarial, accounting, and accounting technician positions, in which agencies can insert summary information concerning their specific jobs prior to posting for public announcement.

Developed a guide called Career Patterns that is intended to help agencies recruit a diverse, multigenerational workforce. This guide presents career pattern scenarios that characterize segments of the general labor market according to career-related factors, such as commitment to a mission and experience, and lists characteristics of the work environment that some cohorts may find particularly attractive and related human capital policies that agencies could use to recruit and retain potential employees.

Updated and expanded its report Human Resources Flexibilities and Authorities in the Federal Government, which serves as a handbook for agencies in identifying current flexibilities and authorities and how they can be used to address human capital challenges.

Individual federal agencies have also taken actions to meet their specific needs for acquiring the necessary talent, while other agencies have faced difficulties. For example, NASA has used a combination of techniques to recruit workers with critical skills, including targeted recruitment activities, educational outreach programs, improved compensation and benefits packages, professional development programs, and streamlined hiring authorities. 
Many of NASA’s external hires have been for entry-level positions through the Cooperative Education Program, which provides NASA centers with the opportunity to develop and train future employees and assess the abilities of potential employees before making them permanent job offers. Further, the Nuclear Regulatory Commission (NRC) has endeavored to align its human capital planning framework with its strategic goals and identified the activities needed to achieve a diverse, skilled workforce and an infrastructure that supports the agency’s mission and goals. NRC has used various flexibilities in recruiting and hiring new employees, and it has tracked the frequency and cost associated with the use of some flexibilities. While there was room for further improvement, NRC has been effective in recruiting, developing, and retaining a critically skilled workforce. We have reported in recent years on a number of human capital issues that have hampered the Department of State’s (State) ability to carry out U.S. foreign policy priorities and objectives, particularly at posts central to the war on terror. In August 2007, we testified that State has made progress in addressing staffing shortages over the last few years, but shortages remain a problem. To help address the shortages, State has implemented various incentives, particularly at critical hardship posts, including offering extra pay to officers who serve an additional year at these posts and allowing employees to negotiate shorter tours of duty. Further, State has made progress in increasing its foreign language capabilities, but significant language gaps remain. In response to our recommendations to enhance the language proficiency of State’s staff, officials told us that the department has placed an increased focus on language training in critical areas. State has also implemented a new initiative that would provide additional pay incentives for staff if they chose to be reassigned to use existing Arabic language skills. 
The Partnership for Public Service (Partnership) recently reported that governmentwide, agencies were not using the student intern hiring flexibility to the full extent possible. Governmentwide, agencies have the authority to hire student interns through the Student Career Experience Program with the option of a noncompetitive conversion to the competitive service upon a student’s satisfactory completion of diploma, degree, or certificate program requirements and work experience. In its recent interagency forum on human capital management under the Recovery Act, OPM highlighted this hiring flexibility as a useful tool for bringing potential employees on board. The Partnership found that about 7 percent of student interns employed by federal agencies in 2007 were hired into permanent jobs. The Partnership suggested that the federal government should, among other things, prioritize student internships as key talent sources for entry-level jobs and then recruit accordingly and provide adequate resources to these programs; and collect data enabling a clear evaluation of all intern programs and ensure that agencies are making the best use of their authority to build their critical workforce pipelines. Further, agencies have a variety of options to tap older, experienced workers to fill workforce needs, including retaining workers past initial retirement eligibility, hiring new older workers, and bringing back retired federal annuitants. Recently, we reported on selected federal agencies’ approaches to using older workers to address future critical gaps in leadership, skills, and institutional knowledge. For example, the United States Agency for International Development tends to bring back its retirees, many of whom have specialized knowledge and skills, as contractors to fill short-term job assignments and to help train and develop the agency’s growing number of newly hired staff. 
As for retention, in many ways, the federal government is well positioned to retain the people it needs to carry out its diverse roles and responsibilities. Importantly, federal employment offers rewards, such as interesting work and opportunities to make a difference in the lives of others, as well as a variety of tangible benefits and work-life flexibilities that make an organization an employer of choice. We have stated that agencies need to reexamine the flexibilities provided to them under current authorities—such as monetary recruitment and retention incentives; special hiring authorities, including student employment programs; and work-life programs, including alternative work schedules, child care assistance, telework opportunities, and transit subsidies—and identify those that could be used more extensively or more effectively to meet their workforce needs. In using telework and other flexibilities, it is important for agencies to have clear goals so that they can assess their programs and develop and implement changes necessary to improve their success. We have found instances where agency officials cited their telework programs as yielding positive work-life and other benefits. For example, according to U.S. Patent and Trademark Office (USPTO) management officials, one of the three most effective retention incentives and flexibilities is the opportunity to work from remote locations. In fiscal year 2006, approximately 20 percent of patent examiners participated in the agency’s telework program, which allows patent examiners to conduct some or all of their work away from their official duty station 1 or more days per week. In addition, USPTO reported in June 2007 that approximately 910 patent examiners relinquished their office space to work from home 4 days per week. 
The agency believes its decision to incorporate telework as a corporate business strategy and a human capital flexibility will help it recruit and retain its workforce, reduce traffic congestion in the national capital region, and, in a very competitive job market, enable USPTO to hire approximately 6,000 new patent examiners over the next 5 years. Leading organizations have found that to successfully transform themselves they must often fundamentally change their cultures so that they are more results-oriented, customer-focused, and collaborative in nature. An effective performance management system is critical to achieving this cultural transformation. Having a performance management system that creates a “line of sight” showing how unit and individual performance can contribute to overall organizational goals helps individuals understand the connection between their daily activities and the organization’s success. Similarly, in its September 2008 report on employee engagement, the Merit Systems Protection Board recommended that managers establish a clear line of employee-to-agency sight as a means to increase employee engagement, recognizing that employees are more engaged if they find more meaning in their work. The federal government’s senior executives need to lead the way in transforming their agencies’ cultures. Credible performance management systems that align individual, team, and unit performance with organizational results can help manage and direct this process. The performance-based pay system for members of the SES, which seeks to provide a clear and direct linkage between individual performance and organizational results as well as pay, is an important step in governmentwide transformation. 
In November 2008, we reported that selected agencies had designed their SES performance appraisal systems to address OPM’s and OMB’s certification requirements of aligning individual performance expectations with organizational goals and factoring organizational performance into senior executive performance appraisal decisions. For example, in setting expectations for individual performance plans, the Department of Energy requires senior executives and supervisors to identify key performance requirements with metrics that the executive must accomplish in order for the agency to achieve its strategic goals. Weighted at 60 percent of the summary rating, the performance requirements are to be specific to the executive’s position and described in terms of specific results with clear, credible measures (e.g., quality, quantity, timeliness, cost-effectiveness) of performance, rather than activities. For each performance requirement, the executive is to identify the applicable strategic goal in the performance plan. While many agencies across the government are doing a good job overall of aligning executive performance plans with agency mission and goals, according to OPM, some of the plans do not fully identify the measures used to determine whether the executive is achieving the necessary results, which can affect the executive’s overall performance appraisal. This challenge of explicitly linking senior executive expectations to results-oriented organizational goals is consistent with findings from our past work on performance management. In addition to promoting high performance and accountability to foster results-oriented cultures, leading organizations develop and maintain inclusive and diverse workforces that reflect all segments of society. 
Such organizations typically foster a work environment in which people are enabled and motivated to contribute to continuous learning and improvement as well as mission accomplishment and provide both accountability and fairness for all employees. As with any organizational change effort, having a diverse top leadership corps is an organizational strength that can bring a wider variety of perspectives and approaches to bear on policy development and implementation, strategic planning, problem solving, and decision making. We recently reported on the diversity of the SES and the SES developmental pool, from which most SES candidates are selected, noting that the representation of women and minorities in the SES increased governmentwide from October 2000 through September 2007, but increases did not occur in all major executive branch agencies. In helping to ensure diversity in the pipeline for appointments to the SES as well as recruitment at all levels, it is important that agencies have strategies to identify and develop a diverse pool of talent for selecting the agencies’ potential future leaders and to reach out to a diverse pool of talent when recruiting. For example, to recruit diverse applicants, agencies will need to consider active recruitment strategies such as widening the selection of schools from which to recruit, building formal relationships with targeted schools to ensure the cultivation of talent for future applicant pools, and partnering with multicultural organizations to communicate their commitment to diversity and to build, strengthen, and maintain relationships. We reported, for example, that NASA developed a strategy for recruiting Hispanics that focuses on increasing educational attainment, beginning in kindergarten and continuing into college and graduate school, with the goal of attracting students into the NASA workforce and aerospace community. 
NASA said it must compete with the private sector for the pool of Hispanics qualified for aerospace engineering positions, a pool that is often attracted to more lucrative employment opportunities in more desirable locations. NASA centers sponsored, and its employees participated in, mentoring, tutoring, and other programs to encourage Hispanic and other students to pursue careers in science, engineering, technology, and mathematics. Mr. Chairman and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you or others may have at this time. For further information regarding this statement, please contact Yvonne D. Jones, Director, Strategic Issues, at (202) 512-6806 or jonesy@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Belva Martin, Assistant Director; Karin Fangman; Janice Latimer; and Jessica Thomsen. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2001, GAO identified human capital management as a governmentwide high-risk area because federal agencies lacked a strategic approach to human capital management that integrated human capital efforts with their missions and program goals. Progress has been made. However, the area remains high-risk because of a continuing need for a governmentwide framework to advance human capital reform. The importance of a top-notch federal workforce cannot be overstated. The federal government is facing new and growing challenges coupled with a retirement wave and the loss of leadership and institutional knowledge at all levels. The issues facing agencies are complex and require a broad range of technical skills that are also highly sought after by the private sector. This testimony, based on a large body of completed work issued from January 2001 through March 2009, focuses on executive branch agencies' and the Office of Personnel Management's (OPM) progress in addressing strategic human capital management challenges in four key areas: (1) leadership; (2) strategic human capital planning; (3) acquiring, developing, and retaining talent; and (4) results-oriented organizational culture. In prior reports, GAO has made a range of recommendations to OPM and agencies in the four areas. GAO is reporting on progress in addressing these recommendations and is making no new recommendations. Congress, executive branch agencies, and OPM have taken action to reform federal human capital management, but federal agencies are facing new challenges. The recent need to quickly hire staff to carry out and oversee the Troubled Asset Relief Program and expanded agency responsibilities under the American Recovery and Reinvestment Act of 2009 point to the need for sustained attention to help ensure that agencies have the right people with the right skills to meet new challenges. 
Top leadership in agencies across the federal government must provide the committed and inspired attention needed to address human capital and related organizational transformation issues. OPM has made strides in transforming itself into a strategic partner to help lead human capital reform efforts. For example, at the agency level, OPM works with the Chief Human Capital Officers council to develop and disseminate human capital guidance and relies upon the council members to communicate OPM policy and other human capital information throughout their agencies. Integrating succession planning and management efforts that focus on strengthening both current and future organizational capacity to obtain or develop the knowledge, skills, and abilities agencies need to meet their missions continues to be important. For example, GAO has reported on a challenge in the acquisition workforce, where the workload and complexity of responsibilities have been increasing without adequate attention to the workforce's size, skills and knowledge, and succession planning. Faced with a workforce that is becoming more retirement eligible and the need for a different mix of knowledge, skills, and competencies, it is important that agencies strengthen their efforts and use available flexibilities. Agencies have developed strategies to recruit needed talent, including turning to older experienced workers to fill knowledge and skills gaps. For example, the National Aeronautics and Space Administration has used a combination of techniques to recruit workers with critical skills, including targeted recruitment activities, educational outreach programs, improved compensation and benefits packages, and streamlined hiring authorities. In addition to promoting high performance and accountability to foster results-oriented cultures, it is important for agencies to develop and maintain inclusive and diverse workforces that reflect all segments of society.
Agencies can benefit from strategies that offer a diverse pool of talent for selecting future leaders and recruiting new employees, bringing a wider variety of perspectives and approaches to bear on their work.
The State Department has security standards for U.S. diplomatic facilities to protect employees and property. A key legal requirement is that new embassy and consulate buildings have a setback of 100 feet from the exterior wall of the building to the perimeter wall or fence to provide protection against blast and for other reasons. To provide adequate security, State has determined that it needs to replace most existing facilities that do not meet these standards. Although the Department of State has made substantial security enhancements at U.S. embassies and consulates, including acquiring adjacent properties to increase setback, building concrete barriers, and installing electronic cameras and sensors, these enhancements cannot bring most existing facilities in line with the desired setback and related blast protection requirements because the facilities are on small pieces of land. Larger building sites, which are expensive and frequently hard to acquire in urban areas, are typically needed to construct new facilities with sufficient setback to protect against attacks. After the Africa bombings, the administration established two Accountability Review Boards, pursuant to the Omnibus Diplomatic Security and Anti-Terrorism Act of 1986 (P. L. 99-399, 22 U.S.C. 4831, et seq., as amended), to review the circumstances surrounding the bombings and State’s vulnerability to terrorist threats. In January 1999, the boards made recommendations on responding to terrorist threats, strengthening security standards and procedures, determining the size and composition of U.S. missions, and providing funding for safe buildings and security programs in the future. The boards recommended that State receive $1 billion annually over a 10-year period to construct new, secure facilities. 
The State Department’s Office of Foreign Buildings Operations is responsible for managing construction of embassies and consulates as well as performing construction-related security upgrades and maintaining overseas properties valued at over $12 billion. Throughout the 1990s, this Office typically received about $400 million yearly to carry out its routine, real property functions. In this time period, the Office also managed the completion of the construction of more than 20 new embassy and consulate facilities, most of which were authorized and funded under the Omnibus Diplomatic Security and Anti-Terrorism Act of 1986 (commonly called the “Inman Program”). In fiscal years 1999, 2000, and 2001, State received a total of about $1.1 billion to build new, secure embassies and consulates. State had also requested an advance appropriation of $3.35 billion for fiscal years 2002-2005 to ensure a steady stream of funding for an expanded construction program, but this was not approved by the Congress. The Conference Committee Reports on State’s appropriations for embassy construction have directed State to submit a plan describing how it intends to use the funds. The reports also direct State to obtain the approval of the appropriations committees before obligating or expending funds for capital and rehabilitation projects. If issues arise with a particular proposal or project, State pursues a number of options with the Committees, including making adjustments to the planned project and/or reprogramming of appropriated funds for other projects. Appendix I provides greater detail on State’s budget requests and related Committee actions. Following the 1998 embassy bombings, State placed a priority on 10 projects. These projects were for five posts in Africa—Kampala, Uganda; Dar es Salaam, Tanzania; Nairobi, Kenya; Tunis, Tunisia; and Luanda, Angola—and five posts in other regions—Doha, Qatar; Istanbul, Turkey; São Paulo and Rio de Janeiro, Brazil; and Zagreb, Croatia.
Seven of the 10 projects are moving forward, and 3 are on hold. State also began to identify sites and perform preliminary planning for projects at more than 30 other posts. (See app. I.) Table 1 shows scheduled occupancy dates, planned number of personnel at the post, and estimated cost for State’s first 10 priority projects, as of November 2000. Appendix II provides a detailed description of the status of the 10 priority projects. The estimated costs for these projects ranged between $22 million and about $100 million. According to State, these costs are much higher than for a typical commercial office building of similar size in the United States due to several factors, including basic structural requirements, the unique access control and security requirements, and the difficulties associated with construction in developing countries. Appendix III provides more information about some of the factors contributing to the costs for constructing diplomatic facilities. Seven of the 10 priority projects are progressing and are in the construction phase. The first project scheduled for completion is in Kampala. State signed a design/build contract for this project with a major U.S. contractor in September 1999 and move-in is scheduled for January 2001. Because of security concerns at the post, the Kampala project was “fast-tracked” on the most compressed schedule ever attempted by State. Construction is scheduled to be completed in about 15 months, which is about half the time normally required for the construction phase of a project of similar size. Figure 1 illustrates the status of construction. The second project scheduled for completion is in Doha, Qatar. State has leased an unfinished building in Doha to replace its current embassy. Construction by a local firm began in August 1999 to retrofit the building to meet embassy requirements, and the move-in is scheduled for April 2001. 
In Nairobi and Dar es Salaam, the contractor is preparing the site for construction (site mobilization). In September 1999, State hired a major U.S. construction firm to design and build these embassies. Construction contracts have been signed for the projects in Zagreb, Tunis, and Istanbul. As of November 2000, State’s site acquisition/design proposal for a new consulate in Rio de Janeiro, its design proposal for a new consulate in São Paulo, and its construction proposal for a new embassy in Luanda were on hold. According to State officials, State’s site acquisition proposal for Rio de Janeiro encountered congressional concerns that State had not adequately considered options for reducing post size by (1) combining some post functions in regional operations and (2) right-sizing consular operations in Brazil, which include major operations in both Rio de Janeiro and São Paulo. In addition to the existing embassy in Brasilia, Brazil, State proposed to spend about $200 million to build two new consulates—one in Rio de Janeiro to accommodate about 145 personnel and one in São Paulo to accommodate 97 staff. The congressional concerns are consistent with overseas staffing issues identified in our prior work. For several years, we have encouraged the executive branch to rethink its overseas presence with a view to right-sizing posts and conducting regional operations where feasible. Because the Congress has not approved State’s proposal for the Rio de Janeiro project, State has notified the Congress of its intent to reprogram $22.8 million from the fiscal year 1999 supplemental appropriation for this project to partially fund the project in Abu Dhabi, United Arab Emirates. In September 2000, State signed a design/build construction contract for a new embassy in Abu Dhabi. For the project in São Paulo, site acquisition has been approved, but according to State officials, the Senate Appropriations Committee has not approved State’s plan to demolish buildings on the site.
The Committee suggested that State attempt to incorporate the existing buildings into its overall construction design. State does not believe that it would be feasible to bring the existing buildings up to security standards and plans to discuss other options with the Committee. The proposed construction project in Luanda is not moving forward because of congressional concerns that it does not meet the 100-foot setback requirement. State proposed building a facility with a 65-foot setback, which is 35 feet less than required by State’s current security standards. The Secretary of State granted a waiver from the security standards based on State’s plan to design the building to meet blast standards at the lesser distance. According to State, potential blast effects on the planned facility would be mitigated by strengthened construction methods and techniques, providing security performance equal to that of a standard blast-resistant building with a 100-foot setback. State indicated that the alternative sites it had identified that would meet the setback requirements did not have a secure title due to uncertainties regarding land ownership in Luanda. According to State officials, as of November 2000, the Congress had not agreed to provide security appropriations for the Department’s Luanda project. State envisions a long-term program, but it has not prepared a long-term capital construction plan for facility replacement that identifies the estimated cost and construction schedules for planned projects, as well as projected annual funding requirements for the overall program. According to State officials, it is difficult to accurately estimate long-term construction costs and schedules, and they cited changing staffing needs and space requirements as one of the primary reasons. State also indicated that construction schedules will depend on the level of funding provided by the Congress.
Industry and local government leaders use long-term capital plans as management and oversight tools even when the plans are based on preliminary assumptions and estimates. Those estimates and assumptions are typically revised and refined as information becomes available, further enhancing the decision-making process. State has ranked the more than 180 facilities that it proposes to replace and/or provide with major security enhancements into groups or “bands” of 20 in order from the most vulnerable to the least vulnerable to terrorist attack. The ranking, which is provided to the Congress annually, is intended to serve as a guide for which embassies and consulates State would replace first. In addition to its 10 priority projects, State’s planned uses of funds appropriated and/or requested for fiscal years 1999-2001 included initial project planning, site identification, and/or site acquisition stages for potential construction projects at more than 30 posts. State had also requested an advance appropriation of $3.35 billion as part of the fiscal year 2001 budget to continue the replacement program in fiscal years 2002 through 2005. In its budget request, State did not identify specific projects, or their potential costs and replacement schedules, for the requested advance appropriation. According to State officials, although the request did not identify specific projects, costs, and schedules, they had intended to use the funds to address projects primarily in the first three bands. However, State officials said that some of State’s initial site acquisition proposals at these posts have encountered congressional opposition. For example, the Senate Appropriations Committee approval was denied in April and later in September 2000 for the acquisition of proposed sites in Antananarivo, Madagascar; Bamako, Mali; Bujumbura, Burundi; Karachi, Pakistan; and Sarajevo, Bosnia-Herzegovina. 
According to State officials, issues that have led to difficulties in obtaining Committee approval included questions about the location of proposed sites and the priority of projects. State indicated that it is working to resolve these issues and hopes to receive congressional support so that it can move forward on its proposals and remain able to acquire its preferred sites in the future. State’s planning for the program focuses on its ranking of banded projects along with a more detailed budget submission for the upcoming fiscal year. State does not clearly indicate the order in which projects will be done; identify estimated costs for critical project elements, such as site acquisition and construction; or indicate project completion schedules beyond the upcoming fiscal year. State officials questioned the value of preparing and presenting a longer-term, more detailed plan at this time largely because of uncertainties involving future funding and the limited availability of acceptable sites. They also cited uncertainties about estimating project costs early in the program cycle; the difficulties sometimes encountered with other “tenant” agencies in planning their personnel and space requirements in new embassies; and the risks associated with working in overseas environments. While we agree such factors affect programs, their existence dictates the need for sound planning to ensure program objectives are met in the most effective and efficient manner. The advantages of long-term planning have been endorsed by industry and local government leaders as an effective management tool for controlling costs and making more effective decisions.
In our December 1998 Executive Guide on Capital Decision-Making, we reported that leading private sector and state and local government organizations not only rank their future capital projects based on applicable criteria, but they also prepare long-term capital plans based on preliminary assumptions and estimates to identify specific planned projects, plan for resource use over the long term, and establish priorities for implementation. These plans usually cover 5-, 6-, or 10-year periods and are updated either annually or biennially. Industry and state government leaders have also found that long-term plans help control capital costs. Developing long-term capital plans also enables these organizations to review and refine a proposed project’s scope and cost estimates over several years, which helps reduce cost overruns. For example, one medium-sized state government we have studied prepares a 5-year capital plan that assists the government in refining the scope and cost estimate of individual project requests. An annual review of capital project proposals in the plan allows the state budget office to determine if a project continues to meet the goals and objectives outlined by the agencies. State government officials believe that this up-front planning and these continuous reviews are key reasons the state has limited cost overruns and few surprises once project funding is approved. While the cost estimates contained in long-term capital plans are preliminary, they provide decisionmakers with an overall sense of a project’s funding needs. Moreover, the Office of Management and Budget encourages federal agencies to develop long-term agency capital plans as part of their capital planning process.
Our prior work at the General Services Administration has shown that long-term strategic planning for federal courthouse construction is critical to helping congressional decisionmakers compare and evaluate the merits of project proposals and priorities and to providing a rationale for allocating resources to the highest-priority projects. Moreover, in a recent prepared statement for the Congress, Admiral Crowe, Chairman of the Accountability Review Boards set up to investigate the embassy bombings in Africa, supported the formulation of a long-term capital plan for embassy construction in view of the threats staff face at overseas embassies and consulates. While State has expressed a reluctance to prepare a long-term plan for embassy and consulate replacements, it is conducting a series of studies that could provide valuable inputs into the preparation of a long-term plan that would strengthen the overall management process. These studies represent a significant part of State’s efforts to determine future resource and funding needs of the program. In May 2000, the Office of Foreign Buildings Operations initiated several studies. One study is underway to identify alternative construction schedules for the life of the program based on preliminary cost and funding assumptions. Preliminary results of the study have been submitted to Office management, but completion dates for the study have not been set. A second study is assessing potential industry bottlenecks that could affect construction. Potential problems to be addressed include availability of appropriately cleared U.S. labor; construction materials; and unique security materials, such as glazing for windows and forced entry- and ballistic-resistant doors. The Office of Foreign Buildings Operations expects that the study will be completed in fiscal year 2001. A third study is determining what additional staffing and contractor resources may be necessary to implement and manage the program.
The Office of Foreign Buildings Operations has had a staffing increase since the Africa bombings, but its officials indicated that additional staffing or contracting resources to manage the construction program may be required. Although no date has been set for completion of this study, the Office expects that preliminary results will be available in the third quarter of fiscal year 2001. The State Department is also studying the size and deployment of the U.S. overseas presence, which are key factors affecting construction requirements and costs at overseas posts. The January 1999 report of the Accountability Review Boards concluded that as the United States works to upgrade the physical security of U.S. missions, it should also consider reducing the costs and number of embassies through the use of modern technology and regional operations. To begin implementing this recommendation, a State-appointed panel reviewed the overseas operations of the U.S. government and concluded that the U.S. presence has not adequately adjusted to the new economic, political, and technological landscape. In November 1999, the panel recommended that the President establish an interagency committee to determine the right size and composition of overseas posts. In March 2000, State announced that a committee had been formed to look at how to implement right-sizing and to conduct pilot programs at selected posts. According to State officials, the Department has prepared a draft report on the results of the pilot programs that may help the Department determine the size of and other requirements for new embassies and consulates. Results of the studies by the Office of Foreign Buildings Operations, as well as efforts to right-size embassies and consulates, could provide valuable inputs to preparation of a long-term capital plan. 
Although not all the studies and efforts have been completed, the studies’ preliminary results could be used by State to develop an initial capital plan, with modifications after additional study results become available. State’s large-scale embassy and consulate construction effort is underway, and State is making progress on most of its initial priority projects. Sustained funding will be needed for State to make substantial progress in replacing its vulnerable embassies and consulates, and State must work effectively with the Congress in charting the future course, priorities, and funding levels for the program. State has asked for advance appropriations through fiscal year 2005 for the program but has not developed a detailed capital construction plan showing how these funds would be used, which would provide a sound foundation for moving this important and costly program forward. Long-term capital plans have been used by leading organizations to effectively establish project priorities, plan for resource use, control costs, and provide decisionmakers a rationale for allocating funding. A long-term capital construction plan will strengthen State’s ability to support and sustain its funding needs, encourage dialogue with congressional committees, and promote consensus by decisionmakers in the executive and legislative branches on funding levels and expectations for program progress. A long-term plan would also improve accountability and transparency (openness) over State and congressional decision-making for a program that is likely to be in the forefront of the U.S. government’s foreign affairs agenda for many years.
To enhance management and decision-making regarding the replacement of embassies and consulates that are vulnerable to terrorist attack, we recommend that the Secretary of State prepare and present to the Congress a long-term capital construction plan that identifies proposed construction projects and their estimated costs and when the Department plans to start and complete site acquisition, design, and construction. This plan should cover at least 5 years and be updated annually. It should be modified periodically as funding decisions are made and cost estimates and building schedules are revised, as well as to adjust to key management factors that could potentially influence program implementation, such as program staffing and private industry supply capacity and other significant factors that may affect construction requirements and priorities, including future decisions concerning right-sizing of overseas posts. Recognizing that precise estimates cannot be easily made in the later years, we nevertheless believe that State’s plan should include notional estimates of the overall program cost and duration, including estimated annual funding requirements over the life of the program. The State Department indicated that it does not plan to implement our recommendation to prepare, and does not see the merits of, a long-term capital plan for its multiyear, multibillion-dollar program to replace embassies and consulates. In view of the State Department’s position, the Congress may wish to consider requiring that State prepare such a plan, consistent with our recommendation, to assist the Congress in considering State’s requests for program authorizations and appropriations and for conducting program oversight. In commenting on a draft of this report, the State Department disagreed with the report’s conclusions and recommendation regarding long-term planning. 
State indicated that it had already established a long-term capital plan based on its ranking of facilities into bands of priority. It said that these priority bands, combined with the information it provides the Congress on projects to be executed in the current fiscal year and the semiannual reports on these projects, constituted a sound approach to program decision-making and accountability. State said that development of a capital plan along the lines that we recommend would be of no value because it would be prone to guesswork, would be impractical given uncertain future funding levels and project costs, and would be resource intensive. State also emphasized that sustained long-term funding was needed for its program, and criticized our report for not adequately addressing this need and the interrelationships between program planning and funding. State’s comments are reprinted in appendix V. State also provided technical comments that we incorporated in the report where appropriate. Despite the best practices of leading organizations and the need to work with the Congress to determine funding needs, State’s comments reflect a view that no change is needed in its approach to planning and implementing a multibillion-dollar overseas construction program. State believes it should wait until it knows how much funding it is likely to receive before preparing a long-term plan consistent with our recommendation. In contrast, we believe that State should prepare a long-term plan with project cost estimates and schedules for at least a 5-year period, to assist decisionmakers in deciding program scope and funding. We also believe that the information that State has provided the Congress is not sufficient to guide judgments regarding long-term program funding and direction because it does not clearly indicate the order that projects will be done, their estimated costs, or when the projects will be completed. 
Our report noted that leading organizations use long-term capital plans to define capital asset decisions, promote informed choices about resource needs, and provide decisionmakers with an overall sense of projects’ merits and funding needs. Information on the scope and composition of the overall program envisioned by State would encourage dialogue with congressional committees regarding funding levels and program expectations over the life of this long-term program. Without a long-term capital plan that includes such information, it will be difficult for State and congressional decisionmakers to accurately judge how much the program will cost; when it can be reasonably expected to be completed; and how key factors, such as funding and changes in the size of overseas posts, may affect program implementation. A long-term capital plan would also improve the accountability and transparency of decisions made by State, other agencies, and the Congress that affect this important program. State’s comments indicate that it has interpreted our recommendation as requiring the development of detailed plans that accurately predict the exact space requirements, precise cost, and construction schedule of each of more than 180 projects over the life of this program. This was not our intention. Our recommendation was intended to provide decisionmakers with better information on the potential long-term costs and schedules for this effort to enable them to weigh the merits of individual projects and make related funding decisions. We believe it is reasonable for State to prepare a detailed plan over a 5-year or longer period. As noted in our report, leading organizations not only rank projects by priority, but they also provide more detailed cost estimates and other information in plans covering 5-, 6-, or 10-year periods. State already has the foundation to adopt this best practice. 
We share State’s view that precision in planning estimates becomes less practical and important for the later years of the overall program. However, we believe that it is reasonable to expect notional estimates for the program’s later years so that decisionmakers have better information on the overall cost and duration of the program. We have modified our recommendation accordingly. To determine the status of the 10 priority embassy and consulate projects, we met with project managers in State’s Office of Foreign Buildings Operations and officials that oversee the work of the Office. In addition to obtaining overall information on the status of the construction effort, such as State’s quarterly internal reports on its progress in implementing the emergency security supplemental program, we obtained detailed information on the history and status of the 10 priority projects for replacement identified by the Department shortly after the bombings of the two embassies in Africa. Those priority posts were Kampala, Uganda; Doha, Qatar; Tunis, Tunisia; Dar es Salaam, Tanzania; Nairobi, Kenya; Istanbul, Turkey; Zagreb, Croatia; São Paulo and Rio de Janeiro, Brazil; and Luanda, Angola. We also met with a representative of J.A. Jones Construction Co., which is the contractor responsible for building the new embassy complexes in Nairobi, Dar es Salaam, and Zagreb. Issues discussed included cost and implementation challenges facing those projects as well as potential options for reducing the time to construct new embassies. To assess State’s plans for the overall program, we met with senior State officials to discuss their vision for the program, their method for establishing project priorities, and their approach to requesting funding. We also examined State’s requests for appropriations in fiscal years 1999-2001 and the supporting material. We received briefings on State’s ongoing studies to identify program requirements, alternatives, and obstacles. 
We also identified leading best practices in capital planning decision-making that could be applied to State’s construction program. To identify steps State is taking to improve the management of the Office of Foreign Buildings Operations and the efficiency of its construction processes, we received briefings from the Office on its initiatives and plans. We conducted our review from March through November 2000 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of State and interested congressional committees. We will make copies available to others upon request. Please contact me at (202) 512-4128 if you or your staff have any questions about this report. Another GAO contact and staff acknowledgments are listed in appendix VI. The Department of State has received about $1.1 billion for security construction since the bombings in Africa. As of September 30, 2000, State estimates that it had obligated over half of the $604 million it had received for fiscal years 1999 and 2000. In December 2000, State received its fiscal year 2001 appropriation of about $515 million for the program. State also requested $3.35 billion in advance appropriations to continue the program through fiscal year 2005, but this request was rejected. This appendix describes in more detail the funds requested and received, as well as the planned uses of the funds. Of the $604 million State received in fiscal years 1999 and 2000 for its embassy security construction program, State estimates it obligated about $341 million through the end of fiscal year 2000, mostly for large-scale construction contracts, as well as to acquire sites and procure design and other immediate goods and services. Table 2 provides appropriation, obligation, and expenditure data for State’s embassy and consulate replacement program.
Included in State’s 1999 emergency security supplemental funding was about $119 million to build new embassy compounds in Nairobi, Kenya, and Dar es Salaam, Tanzania. State also allocated $185 million of its emergency supplemental appropriations to begin replacing facilities at other posts. Table 3 provides State’s planned uses for most of these funds. In May 1999, State notified the Congress that it intended to also use part of the funds to pursue the acquisition of construction sites for embassies and consulates in 26 other locations. State requested and received $300 million in fiscal year 2000 funds for security construction projects. Specific planned uses of the funds cited by State in March 2000 included the projects listed in table 4. State also identified 13 posts for potential site acquisition using fiscal year 2000 funds. These posts are Asmara, Eritrea; Conakry, Guinea; Dakar, Senegal; Harare, Zimbabwe; Peshawar, Pakistan; Phnom Penh, Cambodia; Tbilisi, Georgia; Yerevan, Armenia; Antananarivo, Madagascar; Bamako, Mali; Bujumbura, Burundi; Karachi, Pakistan; and Sarajevo, Bosnia-Herzegovina. State requested $500 million for fiscal year 2001 to continue replacing its highest risk facilities. Projects identified by State as having priority for construction were Cape Town, South Africa; Damascus, Syria; Rio de Janeiro and São Paulo, Brazil; Sofia, Bulgaria; and Yerevan, Armenia. In addition to these projects for which site acquisition and/or design was to be funded in previous years, State planned to use the funds to acquire five to eight additional sites for which construction funding would be sought in subsequent years. State’s request for fiscal year 2001 included $50 million to construct separate, on-compound U.S. Agency for International Development (USAID) facilities in Kampala, Uganda, and Nairobi, Kenya. State also requested an additional advance appropriation of $3.35 billion to continue the facility replacement program through fiscal year 2005.
However, it did not indicate the projects it planned to construct with this advance appropriation or provide estimates of project costs and construction schedules. The House Committee on Appropriations recommended full funding of State’s $500 million request for security construction but did not recommend approval of the request for an advance appropriation in its June 19, 2000, report to the full House. The Committee also did not approve the use of $50 million to construct USAID facilities, explaining that funding requirements for USAID would first have to be considered by another Subcommittee with jurisdiction. In its September 2000 report, the Senate Appropriations Committee also rejected the request for an advance appropriation and recommended funding for fiscal year 2001 totaling $271.6 million for replacing embassies and consulates (planning, site acquisition, design, and construction), which is about $228 million less than State had requested. The Committee expressed concerns that State was accumulating large, unfunded construction requirements. The Committee report recommended that the Congress limit the number of new construction starts and, where possible, only fully fund ongoing projects to prevent the unfunded requirements from growing. Projects recommended for construction funding in the report were Sofia, Bulgaria ($78.3 million); Yerevan, Armenia ($64 million); Damascus, Syria ($69.7 million); and Abidjan, Côte d’Ivoire ($6.2 million). The Conference Report on State’s Fiscal Year 2001 appropriations, issued in October 2000, provided $515 million for the program (H.Rept. 106-1005). Legislation enacting this provision was passed in December 2000. This appendix provides an overview of the 10 embassy and consulate projects State gave priority for replacement shortly after the 1998 bombings in Africa. 
These projects are in Kampala, Uganda; Doha, Qatar; Tunis, Tunisia; Dar es Salaam, Tanzania; Nairobi, Kenya; Istanbul, Turkey; Zagreb, Croatia; São Paulo, Brazil; Rio de Janeiro, Brazil; and Luanda, Angola. The overview presents a description of the facilities’ status and history, as well as tables showing construction cost estimates and pictures of the current and planned facilities, where available. The existing embassy in Kampala is located on a major street whose activities cannot be controlled, and it lacks adequate setback on three sides. Unclassified functions are in an annex, which also is not secure. State had initiated the design of a new embassy before the August 1998 embassy bombings in Africa. In fact, State had planned to build a new facility as part of the Diplomatic Security Construction (Inman) program of the late 1980s/early 1990s. The project did not proceed at that time largely due to problems in obtaining title to a site. In the aftermath of the 1998 bombings, relocation of the embassy became a high priority. In December 1998, State issued a solicitation seeking qualified firms to bid on the building project. State prequalified three contractors to bid and in September 1999 awarded a design/build contract to Washington Group International, formerly Morrison Knudsen Corporation. (See table 5 for the project cost estimates, fig. 2 for the existing embassy, and fig. 3 for the planned new embassy.) The master plan for the new embassy compound includes a USAID annex, but its construction has not started. Building costs for the annex are estimated at $19.5 million. Fiscal year 1999 and 2000 funds are being used for the design and preconstruction site improvements of the USAID facility. Fiscal year 2001 State funds will be used for the construction of the new USAID facility, assuming congressional approval. Plans to build a new embassy in Doha began with State’s diplomatic security construction program that started in the late 1980s. 
Funding allocated for the new embassy totaled nearly $19 million under that program, but the project did not progress beyond the design stage because of high estimated costs and other factors. The current project is an operating lease and construction agreement for an unfinished villa that is locally owned. The Qatari owners will finish construction of the facility based on State’s floor plan and other specifications. The Qatari landlord hired the design and building contractors, both uncleared local firms. All work is being coordinated on site by State personnel, and certain secure areas of the facility will be completed by cleared American contractors after the building is turned over to State, scheduled for late 2000. A contract for construction of a limited, controlled access area by cleared Americans has been awarded, and most of the orders for furniture and furnishings have been issued. Construction began in August 1999, and occupancy is scheduled for April 2001. The estimated project cost of $22.5 million (see table 6) includes the cost of a temporary embassy (about $3 million) and the first 2 years of lease payments (about $1.2 million per year). The initial lease is for 6 years but can be renewed for five additional 6-year periods. Over 36 years, the total cost will be more than $60 million: $22.5 million for the project plus more than $40 million in lease costs. See figures 4 and 5 for the embassy prior to the attacks in Africa and the planned embassy in Doha. Efforts to construct a new, fully secure embassy compound in Tunisia began with the Inman program initiated in the late 1980s. A 21-acre site was purchased in 1992 but, due to funding priorities and other reasons, the project did not enter the construction phase. After the 1998 bombings in Africa, this project again became a high priority. Facility design by Tai Soo Kim Partners began in September 1999. State also prequalified five construction firms as potential builders of the new facility. 
Bids for construction were solicited in August 2000. (See table 7 for estimated project costs.) A construction contract was awarded to Bill Harbert International Construction Company in September 2000. The embassy compound will be a campus-style complex and include a classified chancery as well as separate general services, marine security guard, and warehouse buildings. Figure 6 shows the existing embassy, and figure 7 depicts the planned new embassy. Following the August 1998 terrorist bombing that destroyed the embassy in Dar es Salaam, State started the process to relocate the embassy. To temporarily restore embassy operations, State converted a residential compound to function as the interim embassy, which opened in February 1999 at a cost of approximately $12.3 million. Concurrent with that activity, State issued a solicitation in November 1998 seeking qualified firms to bid on the design and construction of a new permanent embassy. Three design/build contractors were prequalified and allowed to compete. However, in May 1999, the project was placed on hold because the purchase of the proposed site fell through, and State had to look for another site. Once State identified a second site and was reasonably certain that the acquisition would go through, the competition resumed. State awarded a design/build contract to J.A. Jones Construction Co. in September 1999. The contractor began the design of the new embassy in October 1999. Initial groundwork at the 21-acre site began in August 2000, and occupancy is expected in November 2002. (See table 8 for estimated project costs.) Figure 8 shows the embassy before the 1998 bombing, and figure 9 shows the design for the new embassy. All other U.S. agencies are scheduled to be in the compound, including USAID, whose facility will be constructed as a separate, unclassified facility. According to USAID data, $15 million was available in its fiscal year 2000 budget for construction.
Following the August 1998 terrorist bombing that destroyed the embassy in Nairobi, the embassy temporarily moved into the offices of USAID. Subsequently, State searched for an office building to renovate and use as an interim embassy. Renovation of the building started in January 1999, and the interim embassy became fully operational in August 1999, at a cost of about $21.7 million. Concurrently, State issued, in November 1998, a solicitation seeking qualified design/build firms to bid on the design and construction of a new, permanent embassy compound. Three contractors were prequalified and allowed to compete. State awarded a design/build contract to J.A. Jones Construction Co. in September 1999. (See table 9 for estimated project costs.) The contractor started the design of the new embassy in October 1999. Initial groundwork at the 16-acre site began in August 2000, and occupancy is expected in March 2003. Figure 10 shows the previous embassy in Nairobi, and figure 11 depicts the new embassy. All other U.S. agencies are scheduled to be in the compound, including USAID, whose facility will be constructed as a separate unclassified facility. Additional building costs for USAID’s facility are estimated at $36.1 million. Fiscal year 1999 State funds are being used for the design and preconstruction preparation of the new USAID facility. Fiscal year 2001 funds were requested to fund the facility’s construction. The existing consulate office building is 125 years old and is of unreinforced masonry construction, which makes it unstable in an earthquake. It also has insufficient setback and is very vulnerable to attack because narrow and busy urban streets bound the property on three sides. Planning for a new facility began as part of the Inman program; funding allocated for the facility totaled $34.9 million as of November 1990. This Inman project did not proceed largely because of difficulties encountered at that time in acquiring a suitable site.
In the aftermath of the August 1998 embassy bombings in Africa, the relocation of the consulate again became a high priority. State awarded the office building concept design to Zimmer Gunsul Frasca Partnership. (See table 10 for estimated project costs.) The building site was under purchase contract as of August 2000. Occupancy is expected in April 2003. For construction of the consulate, State issued a solicitation in December 1999 seeking qualified construction firms to bid on the project. Five contractors were prequalified to bid, and State awarded a construction contract in September 2000 to Caddell Construction Company. See figure 12 for the existing consulate and figure 13 for the design of the new consulate. The planned new embassy in Zagreb will replace the existing facility located on the corners of two very busy streets in the center of the city. All U.S. mission elements in Zagreb will be consolidated into the new embassy. State started the process for replacing the existing facility in January 1999. Because the selected site had multiple parcels with different owners, the acquisition negotiations were prolonged, lasting until March 2000. Concurrent with this activity, State issued a solicitation in March 1999 seeking qualified firms to bid on the design and construction of a new office building. Three contractors were prequalified and allowed to compete. State awarded a design/build contract to J.A. Jones Construction Co. in September 1999. (See table 11 for estimated project costs.) Occupancy is expected in May 2003. Figure 14 shows the present embassy, and figure 15 depicts the planned new embassy. The existing consular facility in São Paulo is considered highly vulnerable to terrorist attack because of the lack of setback and other undesirable security characteristics. The consulate consists of floors 1 through 5 of a 14-floor commercial office building, and several other agencies are located in even more vulnerable space at separate locations. 
A new consular site costing $19 million has been located, and site negotiations were complete as of August 2000 (see table 12 for estimated project costs). State prepared to demolish existing buildings on the site. Congressional committees approved site acquisition, but the Senate Appropriations Committee is encouraging State to make use of existing buildings on the site. State does not believe that it would be feasible to bring the existing building design up to security requirements and is working with the Committee to discuss options. Figure 16 shows the current consulate in São Paulo; the new consulate has not yet been designed. Similar to the situation in São Paulo, the existing Rio de Janeiro consular facility is considered highly vulnerable to terrorist attack because of the lack of setback and other undesirable security characteristics. The 13-story consulate office building had served as the U.S. embassy until Brazil’s capital was moved to Brasilia in the 1960s. The consulate is in a crowded, high-crime area of the city. Progress on the project has been slow due to a number of factors, including difficulties in finding suitable sites. State officials eventually identified a potential site and negotiated a price (see table 13 for estimated project costs). As of August 2000, State documents indicated that the Senate Appropriations Committee had not approved the acquisition of this site. According to State officials, the Committee cited concerns that State had not sufficiently considered options for reducing the size of the post by regionalizing its operations in Brazil. (In addition to its existing embassy facility in Brasilia, State was proposing to spend $200 million to build two consulate facilities in Brazil—one in Rio de Janeiro and the other in São Paulo.) According to State, the purchase contract on the Rio de Janeiro site has expired, and State is no longer pursuing its purchase. 
Figure 17 shows the existing consulate in Rio de Janeiro; a design for the new consulate is not available. On July 25, 2000, State notified the Congress of its intent to reprogram $22.8 million of the funds appropriated for Rio de Janeiro to meet part of the requirements for a new embassy in Abu Dhabi, United Arab Emirates. The Congress approved the reprogramming, and in September 2000 State signed a design/build contract for a new embassy in Abu Dhabi. August 2000 State documents indicate that the Department is developing a revised staffing pattern for the Rio de Janeiro facility, that State officials have visited the post to review additional potential sites, and that State is evaluating alternative uses of the existing consulate site. In Luanda, embassy functions are housed in prefabricated buildings and trailers, some of which are virtually on the perimeter wall. The lack of setback from the streets on three sides and the temporary nature of the facility make its occupants unusually vulnerable to attack. Other functions are housed outside the embassy above an auto repair shop, which provides neither perimeter protection nor protection from violence or terrorist actions. State has proposed to construct the embassy on the present site, even though it would have a setback of 65 feet and therefore would not meet the 100-foot minimum security standards for setback. State indicated that alternative properties it has identified that would permit the required setback were not in desirable locations or did not have secure titles because of land ownership uncertainties. The very small size of the existing site dictated a design for a compact, multistory building located in the center of the compound, completed in phases to minimize disruptions to embassy operations. (See table 14 for estimated project costs.) All other U.S. agencies, including USAID, would be co-located in the new embassy.
The Secretary of State granted an exception to the setback policy based on State’s plans to achieve blast resistance through other means, such as thicker walls and windows. According to State, the blast effects on the proposed new embassy would be mitigated by strengthened construction methods and techniques, which would provide performance equivalent to that of a standard blast-resistant building with a 100-foot setback. According to State officials, the project design is complete, but the House and Senate Appropriations Committees have not approved the use of fiscal year 2000 funding for construction. The House Committee believed that all new construction should result in buildings that fully comply with State’s own standards, and therefore it rejected State’s plan to spend $39.2 million in appropriations on a facility with insufficient setback. The House Committee reconsidered its position, granting State approval to move ahead if it uses the proceeds from the sale of other properties to finance the project. As of September 2000, the Senate Committee had not approved the project. If that Committee does not approve the current plan, State plans to reprogram funds to other projects until it can find another site in Luanda and prepare a new facility design. In November 2000, State officials said that the Congress had not agreed to provide security appropriations for this project. Figure 18 shows the existing embassy, and figure 19 depicts the design for the new facility. The estimated costs for the 10 initial priority new office buildings range from $22 million to retrofit a leased facility in Doha to about $100 million for constructing a government-owned facility in São Paulo. State officials believe that in some other locations, costs to replace existing facilities could exceed $200 million.
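One way to see how such figures compound over time is the Doha lease arithmetic reported in appendix II: a $22.5 million project cost, lease payments of about $1.2 million per year, and a 6-year initial lease renewable for five additional 6-year periods. A quick check of that arithmetic can be sketched as follows; the variable names are illustrative, not drawn from any State document:

```python
# Quick check of the 36-year cost figure State cites for the Doha lease.
# All dollar amounts are in millions, taken from the report's appendix II.
project_cost = 22.5        # retrofit project, including temporary embassy and first 2 years of lease
annual_lease = 1.2         # approximate yearly lease payment
lease_years = 6 * (1 + 5)  # 6-year initial lease plus five 6-year renewal periods = 36 years

total_lease = annual_lease * lease_years   # roughly $43 million in lease payments
total_cost = project_cost + total_lease    # roughly $66 million over 36 years

print(f"Lease payments over {lease_years} years: about ${total_lease:.1f} million")
print(f"Total 36-year cost: about ${total_cost:.1f} million")
```

The result is consistent with the report's statement that the total cost over 36 years would exceed $60 million: $22.5 million for the project plus more than $40 million in lease payments.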
State estimates that constructing a new embassy costs roughly three times as much as it would cost to build a commercial office building of similar size in the United States. Several factors contribute to the additional expenses of constructing a new embassy compared to the costs of constructing a typical commercial building. For example, the basic structure for a typical commercial building uses steel columns and one-way steel beams, steel decks/light-weight concrete floors, and steel joisted roofs and a steel deck. In comparison, a new embassy is typically constructed almost entirely of reinforced concrete, thicker roof and floor slabs, and other elements to meet State’s blast standards. Other key cost factors include substantially higher design costs partially due to unique perimeter access control and communications requirements; specialty construction for communications and access control; use of American contractors overseas; perimeter walls; material shipping and transit security; shielding against electronic surveillance; substantial construction supervision and site security costs; and unique designs and tailoring of each building to the requirements of each co-located agency. Seismic concerns, and the costs of large sites in urban areas, further contribute to the high costs of new embassies and consulates. Another fundamental issue affecting costs is the special difficulties often associated with performing construction in developing countries. These difficulties involve obtaining host country permits in a timely fashion, ensuring work quality, planning for worker illness and disease, adjusting to cultural differences, and working out technical communication difficulties. This appendix briefly describes State’s key initiatives to improve management and project delivery processes at the Office of Foreign Buildings Operations. 
Management Processes

A key recommendation of State’s November 1999 overseas presence panel dealt with the management capacity of State’s Office of Foreign Buildings Operations. The panel recommended that a new, federally chartered government corporation be established to replace that Office. Such a corporation would exercise responsibility for building, renovating, maintaining, and managing the federal government’s overseas civilian facilities. The issues that led to the panel’s recommendation included the perception that projects managed by the Office of Foreign Buildings Operations took longer and cost more than comparable private sector projects, that time lines were not always met, and that staffing levels appeared too high for the number of projects managed. State has not agreed with the recommendation in the belief that the staff work leading to the recommendation was faulty and did not give due consideration to security requirements and special overseas needs. State noted, however, that it has established special study teams that are giving serious attention to related panel proposals. Special teams are studying the critical business practices and other issues affecting the performance of State’s Office of Foreign Buildings Operations.
These include business process reengineering, with the objective of optimizing current processes and identifying and resolving overlaps, gaps, inefficiencies, and non-value-added activities; capital funding issues, with the goal of developing alternative sources of financing to supplement congressional appropriations, such as charging capital rent, making asset sales, and seeking federal loans; organizational structure, with the goal of providing an assessment of the benefits and consequences of becoming a performance-based organization; communications strategy, with the goal of better communicating internally and externally and addressing what the Office perceives to be a lack of confidence in its work by overseas posts, headquarters agencies, and the Congress; and customer focus, with the goal of developing a strategy to better meet State and external needs. These studies and related efforts are scheduled to be completed after 2001. Actions that may be taken on the study results will depend on several factors, including “doability”, other agency participation, and costs.

Design/Build Contracting: State is using this contracting method, which involves providing design and construction services under a single, lump-sum contract. This is being used for projects in Nairobi, Kenya; Dar es Salaam, Tanzania; Zagreb, Croatia; and Kampala, Uganda. State’s objective is to receive faster product delivery through concurrent design and construction activities. Generally, it is anticipated that total project implementation time can be reduced as much as 6 months using this contracting method. Construction industry experience also indicates that project costs can be reduced slightly by this form of contracting. However, deterrents to greater use of the approach are the reductions in time available for project definition, design, development and review, and overall quality control.

Fast-tracking: State is implementing its first fast-track project in Kampala, Uganda.
Essentially, fast-tracking involves using innovative scheduling to speed up delivery of the completed facility. The post is the first one set to move into a secure replacement facility since the bombings in Africa. State officials acknowledge that the compressed schedule has added costs to the project but said that its subsequent development of an “evaluated total cost method” for determining a contract’s best value award validated its contract award decision for the Kampala project.

Site-adapted Office Building: State has initially identified projects in the Africa region for potential use of this concept, which involves a single building design that can be used at a number of posts with similar functions, staffing, and tenant agency complement. The base building would be modified to respond to unique site conditions and local culture, and the concept is expected to save design time and costs by reusing design documentation. Posts identified as potential candidates include Addis Ababa, Ethiopia; Bamako, Mali; Bujumbura, Burundi; Kigali, Rwanda; Yaounde, Cameroon; Nouakchott, Mauritania; Antananarivo, Madagascar; Asmara, Eritrea; Conakry, Guinea; Dakar, Senegal; Lomé, Togo; Maputo, Mozambique; and Harare, Zimbabwe.

Standard Delivery Approach: State reported that it has developed a standard delivery approach based on standardizing elements of project designs. Benefits expected include replicating project design elements on multiple projects. Posts identified by State for potential application of this initiative included Abidjan, Côte d’Ivoire; Damascus, Syria; Tashkent, Uzbekistan; and Yerevan, Armenia.

The following are GAO’s comments on the Department of State’s letter dated December 19, 2000. 1. The basic premise of our report is that State can do a better job of planning its capital construction program to help decisionmakers make more informed decisions about the program in the long term.
Our report acknowledges that State has ranked more than 180 projects for potential replacement. However, as noted in our report, leading organizations not only rank capital projects by priority, but they also provide more detailed cost estimates and other information in plans covering 5-, 6-, or 10-year periods. State has the foundation to adopt this best practice, which we believe would be useful to the Department, other agencies operating overseas, and the Congress for decision-making and other purposes. 2. We believe that the planning information that State has provided the Congress is not sufficient to guide judgments regarding long-term program funding, accountability, and direction. Our report noted that leading organizations use long-term capital plans to define capital asset decisions, promote informed choices about resource needs, and provide decisionmakers with an overall sense of projects’ merits and funding needs. Information on the scope and composition of the program envisioned by State, including the estimated cost and schedule of planned projects, would encourage dialogue with congressional committees regarding funding levels and program expectations over the life of this program. Without a long-term capital plan that includes such information, it will be difficult for State and congressional decisionmakers to accurately judge how much the program will cost; when it can be reasonably expected to be completed; and how key factors, such as funding and changes in the size of overseas posts, may affect program implementation. A long-term capital plan would also improve the accountability and transparency of decisions made by State, other agencies, and the Congress that affect this important program. 3. We disagree with State’s view that implementing our recommendation would involve too much guesswork because of uncertainties regarding funding and future projects’ exact scope and estimated costs.
A long-term plan is typically based on assumptions about the future and estimates that are imprecise and subject to change. Once prepared, a plan can be adjusted to reflect changes and refinement of estimates in projects’ scope, cost, and implementation schedule, and to adjust to funding decisions and other factors. State has many years of experience in estimating construction schedules and costs and has already begun several studies that would provide valuable inputs to such a plan. 4. We agree that sustained funding will be needed for State to make substantial progress in replacing its vulnerable embassies and consulates, and we have modified the report’s conclusion to recognize this. However, we do not agree with State’s view that it should wait until it knows how much funding it is likely to receive before preparing a long-term plan consistent with our recommendation. Our report title reflects the message of the report that better planning will enhance program decision-making. Regarding funding, our report describes State’s requests for funding and the appropriations it has received in detail, including State’s request for $3.35 billion in advance appropriations for fiscal years 2002-05 that was rejected. 5. In appendix I, we describe the concerns expressed by the Senate Appropriations Committee in September 2000 that State was accumulating large, unfunded construction requirements. We also describe the Committee’s recommendations that the Congress limit the number of new construction starts and provide substantially less funding for the program than what State requested. 6. In its September 2000 report, the Senate Appropriations Committee did not say that it did not intend to fund long-term planning efforts. 
Rather, the report expressed the view that land acquisition, site preparation, and building design are relatively inexpensive, allowing the Department to pursue a large number of projects at a very modest up front cost, and that this had led to the accumulation of large, unfunded construction requirements. 7. State’s comments indicate that it has interpreted our recommendation as requiring the development of detailed plans that accurately predict the exact space requirements, precise cost, and construction schedule of each of more than 180 projects over the life of this program. This was not our intention. We have modified our recommendation, calling for State to prepare a long-term plan that covers at least 5 years and to provide notional estimates of the overall program’s cost and duration. In addition to the contact named above, Lynn Moore, Jesus Martinez, and Rona Mendelsohn made key contributions to this report.
The State Department has determined that about 80 percent of overseas U.S. diplomatic facilities lack adequate security and may be vulnerable to terrorist attack. In September 1998, State expanded its capital construction program to accelerate replacing its most vulnerable embassies and consulates by acquiring sites and preparing plans at 10 priority locations. This report summarizes (1) the status of the 10 priority embassy and consulate construction projects and (2) State's plans for the overall construction program. As of November 2000, seven projects are in the construction phase. The remaining three projects are on hold pending agreement between State and Congress about the Department's construction proposals. Although State envisions a long-term, multi-billion dollar program and has ranked more than 180 facilities it may need to replace, it has not prepared a long-term capital construction plan that identifies (1) proposed construction projects' cost estimates and schedules and (2) estimated annual funding requirements for the overall program.
BRAC, overseas rebasing, and Army modularity are all expected to generate significant personnel movements among numerous bases within the United States and from certain overseas locations back to the United States. Four primary organizations—the Army’s Office of the Assistant Chief of Staff for Installation Management, the Army Corps of Engineers, OEA, and the President’s Economic Adjustment Committee—are responsible for planning, managing construction, and assisting local communities affected by these moves. First, DOD has undergone four BRAC rounds since 1988 and is implementing its fifth round, known as BRAC 2005, which was authorized by Congress in the National Defense Authorization Act for Fiscal Year 2002. The BRAC Commission recommendations were accepted by the President and Congress and became effective on November 9, 2005. In accordance with BRAC statutory authority, DOD must complete closures and realignments by September 15, 2011. BRAC 2005’s key goals were to (1) transform DOD by more closely aligning its infrastructure with defense strategy, (2) enhance joint operations, and (3) reduce excess infrastructure and produce savings. Traditionally, DOD relied on BRAC primarily to reduce excess property and save money since property that has been disposed of is no longer maintained by DOD. Conversely, due in part to the addition of the transformation and joint operations goals to BRAC 2005, this round led to more than twice the number of actions in all previous rounds combined, 837 distinct actions in all. These BRAC actions incorporate many of the more than 50,000 Army personnel expected to return from overseas locations to the United States as part of DOD’s overseas rebasing initiative discussed below. Second, in August 2004, the President announced plans for sweeping changes to the number and locations of DOD’s overseas-based facilities. 
Under an initiative known as the Global Defense Posture Realignment, DOD plans to realign its overseas basing structure over a 6- to 8-year period from the legacy Cold War posture to one that would more effectively support current allies and strategies and address emerging threats. Under the overseas rebasing effort, the 50,000 Army personnel plus another 20,000 other defense personnel and about 100,000 family members are to relocate from overseas locations—primarily in Europe and Korea—to bases in the United States. Although some of these personnel have already relocated, many were still overseas at the time of our review. Army plans call for overseas relocations to the United States to be completed prior to September 15, 2011. Third, similar to BRAC and overseas rebasing actions, implementation of Army force modularity will add to the personnel growth at some bases. The Army’s modular transformation has been referred to as the largest Army reorganization in 50 years and affects the active Army, Army National Guard, and U.S. Army Reserve. The foundation for the modular force is the creation of brigade combat teams that, while somewhat smaller than existing brigades, are expected to be more agile and deployable and better able to meet combatant commander requirements. Successful implementation of this initiative requires the movement of personnel across various units, new facilities and equipment, a different mix of skills and occupational specialties, and significant changes in doctrine and training. The Army began the modularity initiative in 2004 and expects to finish most associated reorganizations by 2011, although some reorganizations will occur after 2011. As a result of the three initiatives and certain other restationing moves, the Army expects a net gain of about 154,000 personnel at its domestic gaining bases from fiscal year 2006 through fiscal year 2011. 
These gains include active and reserve soldiers, military students and trainees, civilians, and mission contractors but do not include family members and non-mission-related contractors. Our analysis of March 2007 Army data on personnel restationing actions indicates that 18 domestic installations, as shown in table 1, are likely to experience a net gain of at least 2,000 military and civilian personnel for fiscal years 2006 through 2011 because of BRAC 2005, overseas rebasing, modularity, and other miscellaneous restationing actions. Personnel gains at individual locations are projected to range from 7 percent to 111 percent. As also shown in table 1, while the overall net gain in personnel at these installations averages 31 percent, Forts Belvoir, Bliss, Lee, and Riley are expected to experience a 53 percent or more growth rate. The expected personnel net gains at these 18 installations account for nearly 90 percent of the total expected personnel net gains across all Army domestic installations through 2011. Figure 1 shows the locations of the 18 installations. Accommodating the expected large increase of personnel at these 18 Army locations over the next several years requires the expenditure of significant military construction funds for needed facilities. Although the Army also will have procurement, operations and maintenance, and other associated cost increases at these bases because of the personnel increases, the scope of this report focuses on military construction funding. Our analysis of DOD data, as shown in table 2, indicates that DOD is planning to spend over $17 billion to construct facilities at these locations through the fiscal year 2011 time frame. As also shown in table 2, the overwhelming majority, over 80 percent, of the planned construction expenditures at these Army installations are attributable to BRAC, overseas rebasing, and modularity actions. 
Moreover, as shown in the table, other military services or defense agencies and activities are planning to expend about $4.6 billion for constructing BRAC facilities they expect to use at these Army installations. For example, several defense agencies are expecting to spend more than $2.6 billion in facility construction at Fort Belvoir, Virginia, while the Air Force plans to spend in excess of $600 million for facilities to house medical training personnel and students at Fort Sam Houston, Texas. The following four organizations are to manage personnel moves associated with BRAC, overseas rebasing, or Army modularity and to assist local communities affected by the movements: The Army’s Office of the Assistant Chief of Staff for Installation Management provides policy guidance and program management on all matters relating to the management and funding of Army installations worldwide and ensures the availability of installation services and facilities. To accomplish its mission, the office coordinates with other key Army headquarters organizations, including the Office of the Deputy Chief of Staff for Operations and Plans and the Army Budget Office, to respond to operational requirements and resource availability in providing for installation infrastructure. To assist in this role, the Installation Management Command provides needed installation services and facilities, including construction, family care, food management, environmental programs, well-being, logistics, public works, and installation funding to support readiness and mission execution. The Army Corps of Engineers is the Army’s construction agent and is charged with contracting for infrastructure construction for the Army. The Corps also manages the construction process, including supervision and inspection as facilities construction progresses. 
It also functions as the construction agent for selected Air Force construction projects and fulfills a role as an agent for a civil works construction program involving flood control, water supply, hydroelectric power generation, navigation, recreation, wetlands regulation, and resource protection. OEA is a field activity within the Office of the Secretary of Defense that assists states and communities by providing technical and financial assistance in planning and carrying out adjustment strategies in response to defense actions. Much of that assistance in the past had been directed toward communities that lost military and civilian personnel because of the closure or major realignment of a base. Conversely, because the 2005 BRAC round, overseas rebasing, and Army modularity have created significant growth at many bases, OEA has assisted affected communities with growth planning. The President’s Economic Adjustment Committee was established under Executive Order 12788 and comprises 22 federal agencies that are to facilitate the organization, planning, and execution of community-based defense adjustment strategies. The Deputy Under Secretary of Defense (Installations and Environment) chairs the committee, and the Secretaries of Labor and Commerce serve as Vice Chairmen. The Committee Chair has testified that the committee will likely conduct team visits to better understand local community adjustment challenges and to more capably address potential needs for federal assistance. The Army has developed plans to accommodate growth of about 154,000 personnel at its domestic bases as a result of BRAC 2005, overseas rebasing, and force modularity actions, but it faces several complex challenges to the implementation of those plans and risks late provision of needed infrastructure to adequately support arriving personnel. 
First, Army plans are still evolving, and officials at the gaining bases we visited did not agree with Army headquarters on personnel movements at their bases. Second, the synchronization of personnel movements across installations with the planned infrastructure construction is difficult because any unforeseen delays or disruptions in providing for necessary facilities can adversely affect synchronization plans. Third, competing resource demands could lead to redirection of resources that would have been used for infrastructure improvements to other priorities, as has happened in the past. Fourth, the Army Corps of Engineers may be at risk of not finishing all needed infrastructure projects within new cost and timeline goals because of the unprecedented volume of required construction. Expected personnel movement numbers differ between Army headquarters and the bases where these people will move, thus affecting whether adequate infrastructure will be in place when personnel arrive. As of March 2007, the nine gaining bases we visited were expecting different numbers of personnel arrivals and departures than those generated by the Office of the Deputy Chief of Staff for Operations and Plans. Table 3 provides examples of these variances at six of these bases, five of which are planning for more personnel movement than Army headquarters’ plans while one base expects slightly less. While the other three bases we visited had personnel movement numbers that also differed from the Deputy Chief of Staff for Operations and Plans’ numbers, the data were not as easily comparable as those presented in table 3. The examples in table 3 are not necessarily representative of all Army growth locations and all categories of arriving personnel, but they nonetheless could lead to unnecessary infrastructure improvements on some bases and inadequate improvements on others. 
Army headquarters officials explained that they program military construction funds based on their numbers while base-level officials and surrounding communities rely more on the base-level numbers for planning purposes. While we recognize that the numbers of personnel moving to Army growth installations will fluctuate, officials could not fully explain the reasons for discrepancies as large as those shown in table 3, and inconsistent numbers can lead to under- or overbuilding by the base and the surrounding communities. Expected personnel movements also can vary based on doctrinal changes that consequently lead to changes in operational unit sizes and organizational structures. For example, BRAC 2005 recommended the creation of certain Army training centers of excellence that consequently require consolidation of some training staff and facilities in certain locations. One such planned center of excellence—the Army Maneuver Center at Fort Benning, Georgia—is to be created through the consolidation of the Armor School and Center (currently located at Fort Knox, Kentucky) with the Infantry School and Center at Fort Benning. This consolidation is expected to lead to personnel movements from Fort Knox to Fort Benning. However, because the organizational framework for the centers of excellence had not been fully defined at the time of our review and was therefore still evolving, Army headquarters and Training and Doctrine Command officials still had not reached agreement on the number of people to be assigned to each center. Thus, gaining base officials, such as those at Fort Benning, could not fully plan for incoming personnel movements based on the center’s personnel numbers and associated personnel reductions until the final personnel numbers were approved. Table 4 shows the wide disparity between the proposed personnel reduction numbers and those ultimately approved by the Vice Chief of Staff for the Army in March 2007. 
Military planners and base operations and community officials require accurate personnel arrival information to ensure that they can effectively plan for and fund infrastructure improvements to provide adequate facilities for the new arrivals. To the extent that personnel numbers are inaccurate, the Army and the surrounding community could either plan for too much or too little space to meet infrastructure requirements. Synchronizing personnel movements with the completion of infrastructure needed to accommodate newly arriving personnel at gaining bases presents difficult challenges that must be overcome to ensure that facilities are ready when relocated personnel arrive. These challenges include developing plans to account for (1) the complexities inherent in coordinating the expected large number of individual movements prompted by BRAC, overseas rebasing, and modularity and (2) the need to manage interdependent BRAC actions affecting individual bases. Moreover, delays in constructing needed infrastructure, for reasons such as environmental assessments on gaining bases, can force delays in carrying out the personnel movements. Given the compressed time frames for completing construction of facilities and subsequently relocating personnel, any significant delays of BRAC actions could place the Army at risk of not completing personnel moves at some locations and not meeting the September 15, 2011, statutory deadline. The Army faces a key challenge stemming from the sheer number of synchronized actions that must take place to successfully complete certain personnel movements. In congressional testimony, the Assistant Secretary of the Army (Installations and Environment) stated that the Army has to complete more than 1,300 discrete actions to successfully implement BRAC recommendations. 
For example, 14 separate BRAC recommendations involving 59 separate DOD organizations affect Fort Belvoir, Virginia, which is expected to gain nearly 24,000 personnel by September 15, 2011. Among the personnel moving to Fort Belvoir will be about 15,000 expected to arrive as late as August 2011 as the result of the closure of the Walter Reed Army Medical Center in Washington, D.C., to staff a newly constructed hospital, the collocation of various defense agencies and activities from leased space off base, and several National Geospatial-Intelligence Agency moves. These moves all depend on the completion of new construction at Fort Belvoir, much of which is expected to be completed only shortly before or at the same time as the relocations. For example, current plans call for construction to be complete in September 2011 for the collocation of about 9,000 personnel from various defense agencies and activities at the base. However, at the time of our review, the Army had not made a final decision on whether to obtain land owned by the General Services Administration near rail and transit stations in Springfield, Virginia, where the Army would move these personnel. If this process delays these moves, it could jeopardize meeting the statutory deadline. The Army also has to overcome the challenge to planned synchronization from the interdependence of various BRAC recommendations. For example, the BRAC recommendation to close Fort Monmouth, New Jersey, includes the planned relocation of some personnel into renovated facilities at the Aberdeen Proving Ground, Maryland. However, the designated receiving facilities at Aberdeen cannot be renovated until the military organization currently occupying those facilities—the Ordnance Center and School—relocates. The school, however, cannot relocate—an action associated with another BRAC recommendation—until new space is provided at Fort Lee, Virginia. 
According to Army officials, the Ordnance Center and School is expected to move to Fort Lee in July 2009, and some personnel from Fort Monmouth are expected to move into the renovated space at Aberdeen in June 2011. Any delay could jeopardize these moves and meeting the September 15, 2011, deadline. Another key synchronization challenge is the need to complete required environmental assessments, conduct any needed environmental cleanup, and undertake endangered species protection before construction commences. For example, construction of the new Maneuver Center of Excellence at Fort Benning, Georgia, could be delayed because the installation is required to account for endangered species protection actions in its construction plans. While the Army initially expected to complete the relevant environmental impact statement by the end of fiscal year 2007, it has revised its expected completion date by about 3 months. Base officials said that this delay will not affect current construction schedules. However, if any further delays materialize, both needed construction and arrival of Armor School and Center personnel from Fort Knox could be delayed. Army officials also told us that other regulatory environmental requirements must be complied with, including certain studies, consultations, and permitting, before various construction projects can commence and any delays could undermine the synchronization schedule of construction and personnel movements that must be completed before the deadline. Synchronization difficulties have already arisen in the Army’s BRAC 2005 plans, and the Army consequently delayed scheduled personnel movements in at least the following three instances because facilities were not expected to be ready at the gaining bases when needed: Fort Benning, Georgia: Officials delayed the start-up of the Maneuver Center of Excellence by a year or more from their initial plans for it to begin operations in fiscal year 2009. 
Fort Bliss, Texas: The 1st Armored Division’s planned move from Germany was shifted from fiscal years 2008, 2009, and 2010 to fiscal years 2010 and 2011. Similarly, a 1st Armored Division brigade relocation has been rescheduled from fiscal year 2007 to fiscal year 2008. Fort Sill, Oklahoma: The Net Fires Center of Excellence is to begin operations in fiscal year 2009 instead of fiscal year 2008 as originally planned. To the extent that delays occur as implementation proceeds, the Army faces an increased risk that it may not complete all closures and realignments by the statutory deadline. The Army is now emphasizing the need to have adequate permanent facilities in place when personnel arrive because utilizing temporary facilities, often referred to as relocatables, adds to the facilities’ cost in the long term as permanent facilities are to eventually replace the relocatables. Army officials have told us that because of congressional concerns regarding the possible use of temporary facilities to meet requirements for the 2005 BRAC round and overseas rebasing actions, they do not plan to use relocatable facilities for these moves, even though they would serve as an interim measure for providing needed infrastructure. Nonetheless, in the recent past the Army has relied on temporary facilities to accommodate troops for operational reasons when no permanent facilities were available, as evidenced by the Army’s modularity initiative and facilities construction in Iraq and Afghanistan. Army data indicate that more than 7 million square feet of relocatables have been used to accommodate modular force conversions at a cost of nearly $1 billion since 2004 at domestic bases. Figures 2 and 3 show relocatables in place at Fort Bliss to accommodate the arrival of the 1st Cavalry Division in 2006. 
Competing priorities could lead to the redirection of funds planned for infrastructure construction or improvement to other priorities and consequently lead to delays in preparing facilities for newly arriving personnel at gaining bases. In September 2006, the Chief of Staff of the Army negotiated directly with the Office of Management and Budget for an increase in the Army’s total fiscal year 2008 budget rather than following the usual practice of providing its budget request to the Secretary of Defense. The Army Chief of Staff took this step because he perceived a shortfall of nearly $25 billion in the Army’s fiscal year 2008 budget. As a result of the negotiations, the Army received $7 billion more than the amount originally supported by the Secretary of Defense, but still $18 billion less than the amount the Chief of Staff believed was required to fund all priorities. The Army projects the cost of BRAC implementation to be about $17.6 billion, of which military construction is projected to account for about $13.1 billion. The Army plans to fund the $17.6 billion from a variety of sources. First, to help finance portions of the Army’s BRAC 2005 implementation costs, DOD will provide BRAC funding of almost $7 billion. Second, DOD also will provide funding for overseas rebasing, which will supply the Army with about $2.6 billion to fund these redeployment actions to the United States. Together, these amounts will provide the Army about $9.5 billion. Thus, the Army will need about another $8.1 billion to finance its estimated $17.6 billion BRAC 2005 implementation. To address the shortfall, at the time of our review, the Army planned to rely heavily on funding programmed for certain projects outside the BRAC account (the Military Construction Army Account) through 2011 and to move these targeted projects further into the future. 
While the Army has identified sources for the funds to implement BRAC 2005, competing priorities could prompt future redirection of funds away from BRAC or other construction. Operations in Iraq and Afghanistan; support for new weapons systems, including the Future Combat System; costs to implement modularity; plans to increase the Army’s active force structure by 65,000 personnel; and other initiatives all will compete for funds with BRAC 2005 and other infrastructure construction priorities. Moreover, cost growth in any of these priorities could increase the pressure to redirect funds. For example, in March 2007, we reported that the Army’s projected cost for the Future Combat System had increased by almost 80 percent, from $91.4 billion to $163.7 billion. We also reported that the Office of the Secretary of Defense’s independent estimates of the acquisition cost of the system were higher and ranged from $203 billion to $234 billion. As we have previously reported, concerns have remained regarding the adequacy of funding allocated to maintain DOD infrastructure and support other installation operating needs. Furthermore, underfunding leads to the deterioration of facilities, which negatively affects the quality of life of those living and working at affected installations and their ability to accomplish mission activities, and thus further affects military operations. This has been particularly prevalent in the Army, and in 2004 it was exacerbated because varying amounts were redirected from facilities accounts to help pay for the Global War on Terrorism. At the end of fiscal year 2004, Army installations received additional funds to help offset these shortfalls, but the timing made it difficult for the installations to execute these funds. Our visits to various gaining bases revealed that the adequacy of operations and maintenance funds to operate bases continues to be an issue. 
The Army has had to take steps in each of the last 3 years, affecting facilities accounts, to help fund the war. We are continuing to conduct work in this area and have recently initiated a review looking at the sustainment and operation of DOD facilities. Because of expected budgetary pressures and competing priorities, and to limit short-term construction costs, the Army plans to delay construction of certain quality of life facilities at some gaining installations. Quality of life facilities include child development and youth centers, physical fitness centers, chapels, on-post shopping and convenience areas, and athletic fields. BRAC recommendations do not require specific construction projects, and thus the Army has chosen to defer some quality of life facilities beyond 2011. Specifically, at the nine Army growth installations we visited, the BRAC requirement for quality of life facilities has an estimated value of about $739 million. However, if only certain quality of life facilities are included, then the requirement drops to about $472 million. Nonetheless, the Army planned to fund only about $76 million using BRAC funds and about another $122 million using military construction funds through 2011. As a consequence, for example, at Fort Carson, Colorado, officials requested that two child care centers be constructed before most incoming personnel arrived in 2009. However, the Army has budgeted funding for the two centers in 2011. Moreover, the Army has not budgeted for any quality of life projects at Forts Belvoir and Lee, Virginia, through 2011 despite installation requirements for these facilities. Installation officials we spoke with were confident that their bases could accommodate the new personnel even without all required quality of life facilities and believed that the surrounding communities would be able to accommodate some base personnel’s child care and other quality of life needs. 
Meanwhile, military family advocates believe that not funding quality of life facilities could jeopardize military readiness by distracting deployed soldiers who may be concerned that their families are not being taken care of. To meet the expected large volume and costs of facilities construction associated with BRAC and the concurrent implementation of overseas rebasing and modularity, most of which must be completed by the end of fiscal year 2011, the Army Corps of Engineers has developed a strategy, known as military construction transformation, intended to reduce (1) construction costs by 15 percent and (2) construction time by 30 percent. Through its transformation strategy, the Corps intends to change how it executes construction projects by standardizing facility designs and processes, expanding the use of manufactured building solutions, executing military construction as a continuous building program and not just a collection of individual projects, and emphasizing commercial rather than government building standards and base master planning. The Army approved the strategy on February 1, 2006, and the Corps established eight centers to simplify the contracting and construction processes for certain types of facilities as a step toward its goal of reducing construction costs and time. By 2008, the Corps expects that each center will establish baseline requirements for common facilities to reduce construction costs and time frames on the theory that contractors can build to the same design faster and cheaper once they have experience with the design. The Fort Worth, Texas, district is to standardize enlisted barracks’ construction; the Savannah, Georgia, district is to standardize brigade operations complexes; and the Louisville, Kentucky, district is to standardize operational readiness training complexes. 
A further cost-saving element of the strategy is to reduce the cost of support facilities, such as utility connections, paved parking and walkways, storm drains, information technology connections, and antiterrorism and force protection measures. According to officials, these costs usually range from 25 to 30 percent of the construction cost when government construction standards are used. In addition to common designs, the strategy encourages contractors to use manufactured buildings, with flexibility to use any of five construction types rather than requiring only noncombustible, concrete-and-steel type I or II construction. Corps officials said that this approach provides not only greater flexibility in the design and construction of military projects but also flexibility to respond to fluctuating material prices. They also noted that using materials other than concrete and steel makes it easier to renovate, reuse, and reconfigure a facility when appropriate. These officials believe that the changes would not significantly reduce the useful life of facilities. Recognizing that its construction strategy constitutes a critical operational change, the Army Corps of Engineers is testing its new approach on projects at five locations. A Corps official told us that all 11 projects awarded at the pilot locations during fiscal year 2006 were bid under the price set by the Corps, achieving savings of up to 17 percent. Further, these projects were all awarded without scope reductions. Corps officials also told us that the contractors for these projects are expected to complete them in 440 to 540 days, as compared with the normal completion time of about 720 days. In addition, we were told that the Corps hopes to have completed the pilot testing and developed regional contracts for the standardized facility types so that they can be used in the fiscal year 2008 military construction program.
According to Corps officials, these contracts will help streamline the construction process because a task order can be issued against an already existing contract when needed. Despite the early positive results, however, Corps officials acknowledge that their strategy has not been tested in high-demand conditions, such as those that will occur because of the much larger construction budgets and extensive construction plans during fiscal years 2008 through 2010. With respect to implementing this transformational construction strategy, building material costs and labor wage rates that exceed the rates used in the construction budget process could lead to unexpectedly costly building projects. In recent years, the actual rate of construction inflation has exceeded the federal government's inflation rate, which the Corps is required to use in budgeting for its construction projects. While this variance, which was as high as 6.1 percentage points in fiscal year 2004, has diminished over time, the actual rate of construction inflation continues to exceed the Corps' budgeted rate. Because the Corps uses government inflation rates to develop its cost estimates and budget for construction in any given year, any variance of actual inflation from those rates has an impact on the cost of construction projects. Army Corps of Engineers officials told us that to the extent that the actual rate of inflation continues to exceed the budgeted rate as implementation proceeds and construction material costs are higher than anticipated, they would either have to redirect funding from other sources to pay for construction projects or reduce the scope of some construction projects. We note, however, that this trend may not necessarily continue into the future, depending on the economics surrounding the construction industry.
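The budget effect of such an inflation variance compounds over a project's budget-to-construction lag. The sketch below is purely hypothetical: only the 6.1-percentage-point variance comes from the report, while the base cost, budgeted rate, and lag are assumptions chosen for illustration:

```python
# Hypothetical illustration of how a gap between the budgeted (federal)
# inflation rate and actual construction inflation erodes a project budget.
# Only the 6.1-point variance is from the report; other values are assumed.
base_cost = 100.0        # project cost estimate in current dollars (assumed)
budgeted_rate = 0.02     # federal inflation rate used in budgeting (assumed)
actual_rate = budgeted_rate + 0.061  # fiscal year 2004 variance from the report
years = 3                # budget-to-construction lag (assumed)

budgeted_cost = base_cost * (1 + budgeted_rate) ** years
actual_cost = base_cost * (1 + actual_rate) ** years
shortfall = actual_cost - budgeted_cost
print(round(budgeted_cost, 1), round(actual_cost, 1), round(shortfall, 1))
```

Under these assumptions, a project budgeted at about $106 would actually cost about $126, a shortfall of roughly 20 percent that would have to be covered by redirected funding or scope reductions.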
Finally, the Corps is expected to manage an unprecedented volume of construction through 2011, with some regions expecting to far exceed their normal construction capacity. For example, the Fort Worth, Texas, district is to manage about $2.5 billion in construction work at Fort Bliss. Corps officials said that this amount would place the district at its maximum annual capacity of about $400 million in construction work, and the district must also manage $2 billion in construction in the San Antonio, Texas, area. Similarly, at Fort Benning, Georgia, the Corps' military construction budget will increase from about $50 million per year to more than $300 million annually over the next 3 years. These costs do not include the costs of the Corps' civil works program, which includes construction programs such as recovery from the impacts of Hurricane Katrina. Corps headquarters officials have said that they will bring in assistance from other Corps districts, hire outside help if specific districts are unable to meet the demand, or both, thus potentially adding cost to the projects. Communities surrounding growing Army installations have acted to address school, housing, transportation, and other infrastructure needs, but each community's actions are unique because demands vary by location. These communities are in the process of identifying and obtaining funding sources to finance these additional infrastructure requirements, and they have several ways to finance these needs, including seeking federal assistance. Nonetheless, given the evolving nature of the Army's growth projections, these communities have generally been hindered in their ability to identify all of the costs of implementing infrastructure expansion. DOD's OEA is assisting communities with their growth planning efforts, but OEA does not provide funding for facilities' construction projects.
Communities we visited have been planning for a significant anticipated increase in DOD personnel and family members from BRAC, overseas rebasing, and modularity implementation. These communities' schools, housing, transportation, and other infrastructure needs depend on the number of new personnel to be assigned to their local bases. However, the Army's plans for relocating its personnel and families were evolving at the time of our review. Consequently, communities cannot fully determine their requirements because of changing Army plans, although some communities had begun planning anyway. Some went further and undertook new construction even before the Army had decided which bases would grow and by how much. For example, some communities in the Forts Benning, Bliss, and Riley areas expanded medical facilities in anticipation of both DOD growth and community growth unrelated to DOD. Similarly, through a pre-BRAC 2005 partnership between Fort Bliss and the City of El Paso, an inland desalination plant that is to be operated by the City of El Paso is under construction on the installation. Community officials also told us that they are increasing the capacity and accreditation of child care facilities to help accommodate relocating families' needs. At the time of our review, base commanders' representatives and affected communities' officials at locations we visited were regularly collaborating to manage community growth. In addition to housing, schools, and transportation needs, community officials were planning new water and sewage systems projects to accommodate growth. For example, the community bordering Fort Sill has identified $14.7 million in water and sewer projects and is seeking state and federal financial assistance to finance the projects. State-level or regional task forces have also been formed in some states to assist communities surrounding bases in managing the growth.
Communities surrounding Army installations in Georgia, Texas, Kansas, Maryland, and Virginia have organized such task forces to help identify and address off-base infrastructure needs from a regional viewpoint. Members of these planning groups include elected or appointed representatives from the state, local, and county levels and representatives from local businesses, school districts, and the private sector. Local officials in some of the communities we visited also said that the arrival of defense personnel and family members is expected to occur later than initially projected, thus giving them more time to plan for and complete new construction. At the same time, some communities’ officials were concerned that Army plans could change and that their nearby bases might not grow as much as first thought, but they would not find out until after new construction had been started or completed. The communities we visited have been actively planning and have initiated a number of actions to accommodate the anticipated increased need for schools, housing, and transportation. Some communities have taken steps to address school needs by funding school construction requirements, while others are seeking federal assistance. To help accommodate increased housing demands, communities have constructed or developed plans for constructing additional housing units. Some communities already have transportation projects under way or planned, while other communities have identified needed projects but lack the funding to make these road expansions and are seeking state and federal assistance. Some school systems in communities surrounding gaining Army installations plan to expand their facilities to accommodate the anticipated increase in school enrollments, although such planning is hampered by evolving Army base growth plans. 
While the Office of the Secretary of Defense estimates that Army actions will lead to the transfer of about 50,000 school children into school districts surrounding gaining installations, fluctuating Army numbers hamper communities' planning. For example, Fort Benning officials projected that student enrollment would increase by about 10,000 through fiscal year 2011, whereas a November 2006 DOD report estimated an increase of 600. A number of reasons accounted for the variances, including differences in the scope (e.g., defense personnel versus defense and nondefense personnel) of the projected arrivals and in the assumptions underlying the projections for family dependents related to those arriving personnel. At the time of our review, these disparities remained unresolved. There are a number of installations, in addition to Fort Benning, for which base projections differ from those generated by the Army. Forts Benning and Riley officials told us that they have been in direct contact with the units that will be moving to the bases and consequently believe that their own estimates more accurately reflect impending growth than those of Army headquarters. As a result, Forts Benning and Riley officials are relying on their own estimates and communicating them to local officials for use in their school construction planning. Financing school construction is a key challenge confronting officials in communities surrounding growth bases, and these officials have adopted a variety of strategies. For example, Forts Bliss and Riley area school systems have passed bonds to expand their schools' capacity. The community surrounding Fort Bliss approved bonds totaling over $600 million for school construction intended to serve an increased student population of about 14,900. In addition, one community surrounding Fort Riley passed a $33 million school bond to finance a new 1,100-student middle school, a new 400-student elementary school, and the expansion of existing elementary schools.
Another school system near Fort Riley decided to keep open a school that had been slated to close. In addition to bonds, some school systems are seeking federal assistance. For example, local officials in the community adjacent to Fort Benning estimate that they need about $321 million to support incoming students and are seeking federal assistance. Moreover, the school systems near Forts Benning, Bliss, Carson, Lee, Riley, and Sill have formed the Seven Rivers National Coalition and were subsequently joined by the school systems near the Aberdeen Proving Ground; Forts Bragg, Knox, Leonard Wood, and Meade; and the Redstone Arsenal. The coalition has petitioned for construction funding from DOD, the Department of Education, and Congress and believes that it needs about $2 billion to support incoming students. However, DOD's position has been to provide planning assistance through OEA, as in prior BRAC rounds, but not construction financing. Similarly, the Department of Education indicated that no funding is available. In addition to construction funds, school districts will also need additional operating funds to run the new schools. Congress provided $7 million in fiscal year 2006 to help operate school systems affected by DOD transfers, and DOD distributed the money to 26 school systems in 14 states. School systems having a 20 percent enrollment of military or DOD civilian dependent children are eligible for this assistance if this population has increased or decreased by 5 percent, or by 250 students, and the increase or decrease is the result of DOD transfers. For fiscal year 2007, Congress provided $8 million. To accommodate the anticipated demand for housing in communities surrounding gaining bases, residential developers and community planners are planning and constructing new housing.
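The eligibility rule for the school operating assistance described above can be sketched as a simple check. This is only our reading of the criteria as stated in the report; the function name and parameters are our own and do not correspond to any official DOD formula:

```python
# Sketch of the report's stated eligibility criteria for school operating
# assistance; names and structure are illustrative, not an official rule.
def eligible_for_assistance(total_enrollment, dod_dependents_before,
                            dod_dependents_after, change_due_to_dod_transfers):
    """Return True if a school system appears to meet the stated criteria."""
    if total_enrollment == 0 or not change_due_to_dod_transfers:
        return False
    # At least 20 percent of enrollment must be military/DOD civilian dependents.
    if dod_dependents_after / total_enrollment < 0.20:
        return False
    # The dependent population must have shifted by 5 percent or by 250 students.
    change = abs(dod_dependents_after - dod_dependents_before)
    return change >= 0.05 * dod_dependents_before or change >= 250

# A hypothetical district of 5,000 students whose DOD dependents grew
# from 1,000 to 1,300 because of DOD transfers:
print(eligible_for_assistance(5000, 1000, 1300, True))  # True
```

In this hypothetical case the district qualifies: dependents are 26 percent of enrollment, and the change of 300 students exceeds both the 5 percent and the 250-student thresholds.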
For example, officials from the communities surrounding Fort Riley, a multicounty area with fewer than 150,000 people, project that they will need from 8,000 to 9,000 additional housing units to accommodate the increase in personnel and family members relocating to the area. These off-base housing units are in addition to the 400 new on-base homes being added to the existing base inventory of 3,114. Developers in the communities surrounding Fort Riley had also already started construction or had construction plans for about 6,000 new units. Also, the Department of Agriculture's housing loan program dedicated $25 million in fiscal year 2006 for loan assistance to personnel relocating to the Fort Riley area. Under this program, approved lenders provide qualifying low- and middle-income rural residents with home financing options, including no-down-payment loans to create homeownership opportunities. In December 2005, the department opened an office at Fort Riley to assist these potential homebuyers with off-post housing needs. The department also plans to establish a similar partnership at Fort Leavenworth, Kansas, and to provide similar assistance to potential homebuyers relocating there. In contrast to the rural environment at Fort Riley, Fort Bliss is near El Paso, Texas, with a metropolitan population of about 750,000. El Paso community officials told us that they are not as concerned about housing because they believe that their market is large enough to absorb the influx of new personnel. At the same time, according to an Army official, a recently completed draft housing market analysis for Fort Bliss identified an additional on-base housing requirement of about 3,370 units, to be paid for using housing privatization funding if the requirement is approved and funding is provided. Restationing of defense personnel at some gaining bases is likely to prompt new transportation infrastructure construction in communities surrounding the bases.
Some state and local governments had already begun planning for or started construction projects at the time of our review. For instance, according to community officials, projects already started or planned to be started are expected to cost (1) $60 million in the Fort Riley area, (2) $45 million in the Fort Carson area, and (3) $150 million in the Fort Bliss area. Also, Fort Sill community representatives said that they have identified road expansion projects totaling approximately $25 million, and because of limited local funding, they are seeking state and federal financing assistance. At the same time, community officials at Forts Lee and Sam Houston told us that their respective state departments of transportation are examining plans for road expansion projects near these installations. Transportation needs and funding will also be a concern in large metropolitan areas surrounding gaining installations because of both rapid growth occurring apart from DOD-prompted growth and the influx of personnel onto gaining bases. For example, at Fort Belvoir, Virginia, personnel increases are expected to exceed 20,000, and this anticipated growth will further burden an already congested northern Virginia transportation system. A working group that includes representatives from the Army, the Virginia Department of Transportation, Fairfax County, and the Federal Highway Administration has been established to review the transportation impacts of the Fort Belvoir realignment. A preliminary list of transportation projects around Fort Belvoir totaling about $663 million has been identified as necessary to help accommodate the expected increase in traffic. Although representatives from the local, state, and federal governments recognize that transportation system improvements are needed, no funding sources or commitments had been identified at the time of our review for projects totaling approximately $458 million of the total of $663 million.
To help facilitate some of these specific road construction projects surrounding Fort Belvoir, the John Warner National Defense Authorization Act for Fiscal Year 2007 included a provision allowing the Army to enter into a special agreement with the State of Virginia for certain land conveyance and road construction around Fort Belvoir. The Defense Access Road Program is a potential source for helping to pay for public highway improvements. Recent developments could affect proposed transportation projects and the timing of the move to Fort Belvoir because, at the time of our review, the Army was deciding whether to obtain land owned by the General Services Administration near rail and transit stations in the Springfield, Virginia, area where it would move approximately 9,000 personnel. In prior BRAC rounds, OEA, part of the Office of the Deputy Under Secretary of Defense (Installations and Environment), has provided technical and financial planning assistance but not construction funds to communities through its grants. According to an OEA official, in the prior four BRAC rounds, OEA assisted over 100 communities. In our January 2005 report on the status of the prior BRAC rounds, we reported that OEA, the Department of Labor, the Economic Development Administration within the Department of Commerce, and the Federal Aviation Administration provided nearly $2 billion in assistance through fiscal year 2004 to communities and individuals for base reuse planning, airport planning, job training, infrastructure improvements, and community economic development, and these agencies are slated to perform similar roles for the 2005 BRAC round. DOD sponsored a BRAC conference in May 2006 attended by state, local, and federal agencies and BRAC-affected communities to discuss BRAC impacts, including growth.
The conference provided an opportunity for communities to discuss issues with officials from OEA and other federal entities that are part of the President's Economic Adjustment Committee, which helps communities plan and prepare for growth. In assisting communities with their growth plans, during fiscal year 2006, OEA awarded growth-related grants totaling approximately $3.2 million to seven communities surrounding Army installations, and as of April 30, 2007, it had awarded 11 fiscal year 2007 Army growth-related grants totaling approximately $8.8 million to 10 communities surrounding Army installations and to the State of Kansas. Table 5 provides a listing of these OEA Army growth-related grants by fiscal year and amount. We are continuing to examine the combined effect of BRAC, overseas rebasing, and Army modularity on communities surrounding military installations as a result of language in the House Committee on Appropriations report accompanying the Department of Defense Appropriations Act, 2007. We expect to provide the results of that work in the spring of 2008. Continuing Army operations in Iraq and Afghanistan and the war on terror, together with evolving BRAC 2005, overseas rebasing, and force modularity plans, have resulted in fluctuating and uncertain personnel restationing plans. Knowing how many Army personnel and dependents will move to a given base and their arrival dates is fundamental to the base's and surrounding community's abilities to plan for and provide adequate on- and off-base schools, housing, transportation, and other infrastructure. However, as of March 2007, several of the Army's largest gaining bases and Army headquarters-level offices had yet to agree on the number of arriving and departing personnel because officials were unaware of the specific causes of the variances in their estimates.
For their part, communities surrounding gaining bases generally relied on their local base officials for personnel arrival and departure numbers, which, in effect, can be translated into the communities' off-base infrastructure requirements. However, without knowing whether local base or Army headquarters-level officials have accurate information about growth plans, these communities are not well positioned to plan for and provide adequate schools, housing, transportation, and other infrastructure. To better facilitate infrastructure planning, we recommend that the Secretary of Defense direct the Secretary of the Army to (1) determine why there are differences between headquarters and gaining bases with respect to the number of arriving and departing personnel and (2) ensure that Army headquarters and base officials are collaborating to agree on Army personnel movement plans so that base commanders and surrounding communities can effectively plan for expected growth. This collaboration to reach agreement should continue as expected personnel movement actions are revised over time. In commenting on a draft of this report, DOD partially concurred with both of our recommendations. With regard to the first recommendation, DOD concurred with our findings but said that the Army had determined the cause of differences between the headquarters and gaining bases' numbers of arriving and departing personnel. As a result, the Army said that in January 2007 it had taken corrective action by establishing the Army Stationing Installation Plan (ASIP) as the single, unified source of installation planning population data to be used Army-wide. However, the information in our report was based on March 2007 ASIP data, which continued to show that all of the nine installations we visited were using different numbers than headquarters was using.
With regard to the second recommendation, DOD also concurred with our findings but said that the Army had already taken corrective action without the need of direction from the Secretary of Defense. The Army stated that in May 2007 it issued guidance that allowed installations to plan for anticipated unit moves that may not be reflected in the ASIP and to discuss these plans with local communities as long as they are appropriately qualified as predecisional and subject to change. Army officials also stated that in June 2007, they would ensure that installations forward all population issues, stationing issues, or both to Department of the Army headquarters for resolution. Following receipt of DOD's comments on our draft report in late August 2007, we contacted several of the bases we visited during our review and found that there were still some significant, long-standing problems with the variances in the data being used by the installations and headquarters. In some cases the magnitude of the differences has been reduced, but there are still several cases in which the differences exceed 1,000 personnel. For example, we were told that Fort Bliss still expects more than 1,000 more military personnel than headquarters currently projects. To the Army's credit, most of the officials we spoke with at the installation level said the data were improving, with one location reporting that its data were very close to those of headquarters. However, officials at six of the seven installations we contacted still said that they had serious concerns with the headquarters data. Because disconnects still exist, we believe that our recommendations remain valid and that the Secretary of Defense should act upon both of our recommendations. We are sending copies of this report to other interested congressional committees; the Secretaries of Defense and the Army; and the Director, Office of Management and Budget. We will also make copies available to others upon request.
In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4523 or at leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the challenges and associated risks the Army faces in providing for timely infrastructure support at its major gaining bases because of the combined effects of implementing the 2005 round of base realignments and closures (BRAC), overseas rebasing, and Army force modularity actions, we analyzed infrastructure-related planning documentation and discussed planning and related funding efforts with officials from various Army headquarters-level offices, four regional Installation Management Command offices, and nine installations. We visited Fort Carson, Colorado; Fort Benning, Georgia; Fort Riley, Kansas; Fort Meade, Maryland; Fort Sill, Oklahoma; Fort Bliss and Fort Sam Houston, Texas; and Fort Belvoir and Fort Lee, Virginia, because preliminary data indicated large influxes of military and civilian personnel through fiscal year 2011 at these nine installations. Because Army implementation plans were evolving as we conducted our work, we periodically updated the information we collected as the Army refined its plans. In examining the plans and identifying challenges that could place the Army at risk of not providing the necessary infrastructure to accommodate incoming personnel in a timely manner, we focused our efforts on key elements of the planning process, including planned personnel restationing actions and synchronization of multiple actions affecting particular installations, infrastructure requirements to include quality of life facilities, and military construction plans and expected costs.
At the installation level, we collected and analyzed data on the estimated number of personnel arrivals and departures by fiscal year, along with installation-developed military construction requirements. We also analyzed installation-developed requirements for quality of life facilities at the nine Army growth bases we visited and compared these requirements to Army funding plans for fiscal years 2006 through 2011. We also met with Army Corps of Engineers’ officials and discussed the challenges they face in providing an unprecedented volume of military construction across the country at gaining installations within allotted costs and time frames. Although we did not validate military construction requirements, the Army Audit Agency was validating the requirements at the time of our review for selected Army BRAC 2005 military construction projects. We also sought views from officials from the installations we visited as to the challenges they faced in planning for and funding their personnel growth requirements and their ability to fully fund continuing base operations and support and maintenance activities as the installations expand. At the Army headquarters level, we collected personnel restationing movement data and discussed overall infrastructure implementation plans for the expected growth installations. We further discussed the Army’s efforts to fully fund necessary infrastructure in the face of recognized overall funding challenges across the Army’s programs. To determine how communities surrounding the Army’s gaining bases were planning for and funding the necessary infrastructure to support incoming personnel and their families, we contacted community leaders during our installation visits and discussed their relationships with installation officials and steps they were taking to address community infrastructure issues as a result of expected increased defense-driven personnel growth and non-Department of Defense (DOD) growth in their communities. 
While we focused most of our efforts on such areas as the availability of housing, schools, and transportation to accommodate the expected growth, we also learned of other areas of concern, including the adequacy of utilities. We collected and analyzed available relevant community planning documents relating to growth impacts and specific strategies and actions for addressing these impacts. Because the federal government has a role in providing financial, technical, and other assistance to communities affected by defense actions, we discussed with community officials the extent to which they were seeking federal assistance in addressing growth issues. We further discussed community growth issues with officials from the Office of Economic Adjustment (OEA), an organization within DOD that provides technical assistance and financial assistance in the form of grants to eligible communities affected by defense actions. We also attended the May 2006 DOD-sponsored BRAC conference to learn about the ramifications of DOD growth on communities and the federal support and assistance available to these communities. We further collected and analyzed OEA grant data already provided to affected growth communities and discussed in general with OEA officials the activities of other federal agencies that are included in the President's Economic Adjustment Committee, a committee of 22 federal agencies that have varying roles in providing assistance to communities adversely affected by defense activities. We did not conduct work at those other federal agencies.
In addition to representatives of the nine domestic Army gaining installations we visited and nearby community leaders, we contacted the following organizations during our review:

Office of the Secretary of Defense
Office of the Deputy Under Secretary of Defense (Installations and Environment), BRAC Office, Arlington, Virginia
Office of the Deputy Under Secretary of Defense for Military Community and Family Policy, Arlington, Virginia
OEA, Arlington, Virginia
Assistant Chief of Staff for Installation Management, Arlington, Virginia
Deputy Chief of Staff for Operations and Plans, Arlington, Virginia
Installation Management Command Headquarters, Arlington, Virginia
Installation Management Command, Northeast Region, Hampton, Virginia
Installation Management Command, Northwest Region, Rock Island, Illinois
Installation Management Command, Southeast Region, Atlanta, Georgia
Installation Management Command, Southwest Region, San Antonio, Texas
Army Corps of Engineers Headquarters, Washington, D.C.
Training and Doctrine Command, Hampton, Virginia
Military Surface Deployment and Distribution Command, Newport

Our analysis was complicated by the evolving nature of the Army's infrastructure implementation plans, which continued to change throughout our review. Business plans intended to direct the implementation of the BRAC recommendations affecting the gaining bases were in draft at the time of our review. Army officials said that the information they provided to us and that we present in our report represented their current plans at the time of our review and should be considered an approximation of their projected restationing and funding actions because these plans are subject to change. Consequently, civilian planning for providing infrastructure was subject to change based on changes in the Army's plans. Although we found some discrepancies in the Army's data, we concluded that, overall, they were sufficiently reliable for the purposes of this report.
We conducted our review from March 2006 through July 2007 in accordance with generally accepted government auditing standards.

In addition to the contact named above, Barry W. Holman, Director (retired); James R. Reifsnyder, Assistant Director; Nelsie S. Alcocer; Grace A. Coleman; Nancy T. Lively; Richard W. Meeks; David F. Nielson; and Roger L. Tomlinson made major contributions to this report.

Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.

Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1230R. Washington, D.C.: September 6, 2007.

Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.

Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.

Defense Acquisitions: Future Combat System Risks Underscore the Importance of Oversight. GAO-07-672T. Washington, D.C.: March 27, 2007.

Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.

Defense Management: Comprehensive Strategy and Annual Reporting Are Needed to Measure Progress and Costs of DOD’s Global Posture Restructuring. GAO-06-852. Washington, D.C.: September 13, 2006.

Defense Infrastructure: DOD’s Overseas Infrastructure Master Plans Continue to Evolve. GAO-06-913R. Washington, D.C.: August 22, 2006.

Force Structure: Capabilities and Cost of Army Modular Force Remain Uncertain. GAO-06-548T. Washington, D.C.: April 4, 2006. 
Force Structure: Actions Needed to Improve Estimates and Oversight of Costs for Transforming Army to a Modular Force. GAO-05-926. Washington, D.C.: September 29, 2005.

Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.

Defense Infrastructure: Opportunities Exist to Improve Future Comprehensive Master Plans for Changing U.S. Defense Infrastructure Overseas. GAO-05-680R. Washington, D.C.: June 27, 2005.

Defense Infrastructure: Issues Need to Be Addressed in Managing and Funding Base Operations and Facilities Support. GAO-05-556. Washington, D.C.: June 15, 2005.

Military Base Closures: Updated Status of Prior Base Realignments and Closures. GAO-05-138. Washington, D.C.: January 13, 2005.
The Army expects significant personnel growth, more than 50 percent in some cases, at 18 domestic bases through 2011 as it implements base realignment and closure (BRAC), overseas force rebasing, and force modularity actions. This growth creates the need for additional support infrastructure at these bases and in nearby communities. Military construction costs of over $17 billion are expected to accommodate the new personnel, and communities will incur infrastructure costs as well. GAO prepared this report under the Comptroller General's authority to conduct evaluations on his own initiative. It addresses (1) the challenges and associated risks the Army faces in providing for timely infrastructure support at its gaining installations and (2) how communities are planning and funding for infrastructure to support incoming personnel and their families. GAO analyzed personnel restationing numbers, discussed planning efforts with Army and community officials, and visited nine of the larger gaining bases and nearby communities. The Army has developed plans to accommodate the growth of about 154,000 personnel at its domestic bases, but it faces several complex implementation challenges that put at risk the timely provision of infrastructure needed to adequately support incoming personnel. First, Army plans continue to evolve, and Army headquarters and each of the nine gaining bases we visited were relying on different numbers of personnel movements and were not fully aware of the causes of the variances. For example, Fort Benning officials expected over 6,000 more soldiers and military students than Army headquarters planned for. Because consistency in the relocation numbers is important for properly determining not only base infrastructure support needs but those of nearby communities as well, inconsistent numbers could lead to improperly sized facility infrastructure. 
Second, the Army faces challenges in synchronizing personnel movements with the completion of planned on-base infrastructure construction. Any significant delays in implementing planned actions could place the Army at risk of not meeting BRAC statutory deadlines. Third, competing priorities could lead the Army to redirect resources planned for needed infrastructure improvements and operations to such priorities as current operations in Iraq and Afghanistan, as has happened in the past. Such redirection of resources could undermine the Army's ability to complete infrastructure improvements in time to support personnel movements and to meet planned timelines. Fourth, the Army Corps of Engineers, the primary construction agent for the Army, must manage an unprecedented volume of construction, implement a new construction strategy designed to save construction costs and time, and complete infrastructure improvements within available resources and planned timelines. The Army recognizes these challenges and is refining its implementation plans to overcome them. While communities surrounding the growth bases GAO visited have generally planned proactively for anticipated growth, the evolving nature of the Army's plans, and differing interpretations of those plans, have hindered them from fully identifying additional infrastructure requirements and associated costs. For example, while Army officials at Fort Benning, Georgia, project an influx of about 10,000 school-age children, the Department of Defense's (DOD) November 2006 figures project only about 600. At the time of our review, these disparities remained unresolved. 
Communities surrounding growth bases have their own unique infrastructure improvement needs, such as schools, housing, or transportation, based on (1) the number of personnel to actually move to the nearby base, (2) the community's current capacity in its area(s) of need, and (3) the community's own capacity to finance additional infrastructure requirements and the availability of federal or state assistance to finance these needs. Some communities had already sought federal and state assistance to help finance construction efforts at the time of GAO's review even though the evolving nature of the Army's planning prevented the communities from having reasonable assurance that they knew the full scope of their infrastructure requirements.
FDA is responsible for helping to ensure the safety and efficacy of drugs marketed in the United States. It does this by overseeing the drug development process, reviewing applications for the marketing of new drugs, and monitoring the safety and efficacy of drugs once they are marketed. A growing body of literature has demonstrated medically important sex differences in responses to some drugs, differences that make the participation of women in clinical trials for new drugs essential. In the 1970s, FDA recommended the exclusion of women of childbearing potential from early clinical drug trials because of concerns for the health of the women and of their potential offspring. FDA, an agency in the Department of Health and Human Services, is charged with helping to ensure that safe and effective food, drugs, medical devices, and cosmetics reach the United States market. FDA assists drug manufacturers in designing clinical drug trials, reviews proposals for conducting clinical drug trials, and approves drugs for sale in the United States based on its determination that the clinical benefits of a drug outweigh its potential health risks. FDA also approves drug labeling, which indicates the medical conditions and patient populations for which the drug has been tested and approved as safe and effective. Once a drug reaches the market, FDA continues to monitor its safety and efficacy. Before any new drug can be tested on humans, a drug's sponsor must submit an investigational new drug (IND) application to FDA that summarizes the investigations conducted prior to trials in humans, lays out a plan for how the drug will be tested in humans, and provides assurances that appropriate measures will be taken to protect study participants. Specifically, the IND application demonstrates that the drug is reasonably safe for subsequent testing in humans based on laboratory and animal testing and exhibits enough potential effectiveness to justify its commercial development. 
Unless FDA determines that a proposed study is unsafe, clinical testing may begin 31 days after the IND application is submitted to FDA. The sponsor then proceeds with the three main stages of clinical drug testing:

Phase 1 small-scale safety trials generally study small numbers of healthy volunteers to determine toxicity and safe dosing levels. These trials also study a drug's pharmacokinetics, or how it is absorbed, distributed, metabolized, and excreted, and its concentration in the bloodstream.

Phase 2 small-scale efficacy trials generally study patient volunteers with the disease or condition against a comparison group to assess drug efficacy and side effects.

Phase 3 full-scale safety and efficacy trials study thousands of patient volunteers against a comparison group to further evaluate efficacy and monitor adverse responses to the drug.

Drugs for life-threatening diseases for which there is no other effective course of treatment sometimes cannot be compared against another treatment and will sometimes use historical information about patient outcomes as a point of comparison. Drug sponsors are required to submit IND annual reports to FDA during the typically 2- to 10-year span of the clinical drug trials. When the sponsor wants to market a new drug, it submits a new drug application (NDA). FDA regulations on NDA content and format require that the NDA include integrated summaries of the evidence demonstrating the drug's safety, including adverse events suffered by those in the clinical drug trials, and effectiveness. Evidence is also required to support the dosing section of the labeling, including the recommended dose and modifications in dose for specific population subgroups. Each NDA must include at least one pivotal clinical trial, generally an “adequate and well-controlled” Phase 3 study that demonstrates the drug's efficacy, or effectiveness. 
There are many examples in the medical literature of sex differences in the way men and women absorb, distribute, and metabolize drugs. The effects of women’s hormones and the variations in body size between men and women are the likely causes of many sex differences in responses to drugs. Women metabolize some drugs differently if they are pregnant, lactating, pre- or postmenopausal, menstruating, or using oral contraceptives or hormone replacements. Women’s generally smaller body weight compared to men can result in higher levels of drug concentration in the bloodstream. These and other established physiological and anatomical differences may make women differentially more susceptible to some drug-related health risks and demonstrate the importance of including women in all stages of drug development. For example, phenylpropanolamine (PPA), a common ingredient in over-the-counter (OTC) and prescription cough and cold medications and OTC weight-loss products, was found to increase the risk of bleeding into the brain or tissue around the brain in women, but not in men. Certain classes of drugs can in some circumstances prolong the interval between the heart muscle’s contractions and induce a potentially fatal cardiac arrhythmia. Women have a higher incremental risk of suffering such an arrhythmia after taking these drugs than do men, probably because (1) the interval between heart muscle contractions is naturally longer for women than for men and (2) male sex hormones moderate the heart muscle’s sensitivity to these drugs. We recently reported that four of the ten prescription drugs withdrawn from the U.S. market in the last 3 years posed a greater health risk to women than to men because they induced arrhythmia. Similarly, there is evidence that not all drugs are effective in both sexes. For example, one class of painkillers, kappa opioids, has been found to be twice as effective in women as in men. 
Discoveries of birth defects and other problems resulting from fetal exposure to certain drugs between the 1940s and early 1970s prompted societal interest in protecting women and their fetuses from the potentially devastating effects of clinical drug research. For example, diethylstilbestrol (DES) was taken by women in the 1940s and 1950s to protect against miscarriages. About 20 years later, many daughters of women who had taken the drug developed reproductive abnormalities and had an increased risk of developing vaginal cancer. Similarly, in the 1960s many women outside of the United States took thalidomide to prevent early miscarriages, and the drug caused over 10,000 birth defects worldwide. In 1977, partially in response to the thalidomide scare, FDA recommended that women of childbearing potential be excluded from participating in small-scale safety and efficacy trials unless the drug was intended to treat a life-threatening disease. As a result, women were typically excluded from these clinical drug trials. Through the next decade there were growing concerns that the 1977 guideline may have restricted the early accumulation of information about women's responses to drugs that could be used in designing later clinical drug trials and that it stifled the production and analysis of data on the effects of drugs in women. In 1994, the Institute of Medicine (IOM) reported that the FDA guidance that discouraged the participation of women of childbearing potential in initial small-scale trials led to the widespread exclusion of women in later large-scale trials. In addition, analyses of published clinical drug trials for life-threatening conditions have concluded that many past clinical trials included few or no women, making it uncertain whether the studies' results applied to women. These conditions include cardiovascular disease and HIV. This report is our second to address FDA and women in clinical drug trials. 
In 1992, we investigated FDA’s policies and the pharmaceutical industry’s practices regarding research on women in clinical drug trials. We reported that women were generally underrepresented in clinical drug trials in comparison to the proportion of women among those persons with the disease for which the drug was intended and that sex-related analyses were not routinely conducted. Even so, there were enough women in most clinical drug trials to detect sex differences in men and women’s response to drugs. FDA has conducted its own studies on the inclusion of women in clinical drug trials. Surveys of NDAs in 1983 and 1988 found that, in general, both sexes were represented in clinical drug trials in proportions that usually reflected the prevalence of the disease in the total population but were not necessarily statistically sufficient to prove the safety or efficacy of the drug for either sex. Despite the participation of women, few analyses of the data were being conducted to detect possible sex differences in drug safety or efficacy. FDA has also looked at the tabulation of demographic data in IND annual reports. FDA recently reported that in IND annual reports filed with the agency women made up 44 percent of participants in clinical drug trials in which sex was identified. However, the FDA researchers found that sex could not be determined for more than one half of the participants in the IND annual reports. FDA has addressed women in clinical drug trials through the publication of guidance in 1993 and regulations in 1998 and 2000. The 1993 guidance for the pharmaceutical industry recommends that clinical studies include men and women “in numbers adequate to allow the detection of clinically significant gender differences in drug response” and that analyses of sex differences be included in NDAs. The 1998 regulation is less specific. It does not include references to how the number of women to be included in clinical drug trials should be determined. 
It requires only that safety and efficacy data already collected be presented separately for men and women in NDAs, but it does not require any discussion or analysis of these data. The 1998 regulation also requires the tabulation of study participants by sex in IND annual reports. The regulations issued in 2000 allow FDA to temporarily halt research programs for drugs for life-threatening conditions if men and women with reproductive potential are excluded from participation in ongoing studies. In response to our 1992 report, FDA issued policy guidance in 1993 regarding women in clinical drug trials, explicitly reversing its 1977 recommendation to restrict some women’s participation in drug development. Its 1993 Guideline for the Study and Evaluation of Gender Differences in the Clinical Evaluation of Drugs recommended that clinical drug trials should, in general, reflect the population that will receive the drug when it is marketed. This guidance also advised that enough men and women be included in clinical drug trials to allow for the detection of clinically significant sex differences in drug response, including those differences attributable to hormones and body weight variations. On August 10, 1998, FDA implemented regulations amending requirements for INDs and NDAs to include demographic data. The regulation requires sponsors to tabulate the sex, age, and race of study participants in IND annual reports and to present available safety and efficacy data by sex, age, and race in two NDA documents submitted to FDA: the Integrated Summary of Safety and the Integrated Summary of Efficacy. The regulation also requires that evidence be presented to support dose determinations. FDA has the authority under these regulations to refuse to accept, or “file,” any NDA for review that does not include this information. 
In addition, FDA promulgated regulations on June 1, 2000, allowing it to halt IND studies involving drugs that are intended to treat life-threatening diseases or conditions if men or women of reproductive potential are excluded from participation solely because of risks to their reproductive potential. This regulation does not, however, impose requirements to recruit or enroll a specific number of men or women with reproductive potential, and FDA has not halted any studies under this authority. We did not evaluate whether FDA should have invoked this rule. The language of the 1998 demographic regulation is less specific than the 1993 guidance. The 1998 regulation has the force and effect of law, while the 1993 guidance does not legally bind either FDA or drug sponsors. The 1993 guidance specifically discusses the need to analyze clinical data by sex, evaluate potential sex differences in pharmacokinetics, including those caused by body weight, and conduct specific additional studies in women, where clinically indicated. The 1998 regulation requires the presentation of safety and efficacy data already collected in the NDA by sex, but no analysis of such data is required. The regulation does not include a standard for the inclusion of women; it requires only “presentation of data” without clarifying the extent of data or the format to be used. The regulation does require the identification of any modifications in dose or dose interval because of sex, age, or race, but not weight. We found that the NDA summary documents and IND annual reports submitted to FDA by drug sponsors frequently did not present information already collected during drug development separately for men and women, as required by the 1998 regulation. We found that 33 percent of the NDAs in our sample did not include presentations of both safety and efficacy outcome data separately for men and women. 
Similarly, we found that 39 percent of the IND annual reports in our sample did not include the required information about the sex of study participants. One-third of the NDAs we examined did not include presentations for men and women of both safety data in the Integrated Summary of Safety and of efficacy data in the Integrated Summary of Efficacy. We considered the presentation of outcome data by sex in an NDA for just one of the studies included in that NDA to meet our criteria for regulatory compliance. Safety outcome data by sex, either data about toxicity or adverse events or both, were not included in 17 percent of the NDAs we reviewed. Similarly, 22 percent of the NDAs did not present efficacy outcome data separately for men and women. We found that 39 percent of the IND annual reports in our sample did not include the demographic information required by regulation: 15 percent of the annual reports were not submitted to FDA and 24 percent did not tabulate the number of men and women enrolled in clinical drug trial studies. Only 37 percent of the annual reports tabulated the enrolled study populations by sex, as required by the 1998 regulations; 24 percent of the annual reports stated that there were no ongoing studies. All of the NDAs we examined included enough women in the pivotal trials to demonstrate statistically that the drug was effective in women, even if the sponsors did not report such an analysis or did not include the required presentation of outcome data in the NDAs. Overall, more women than men participated in clinical trials for the drugs we examined, although women were a minority of the participants in the initial, small-scale safety studies used to set the dosing levels for subsequent trials. 
We found that most of the NDAs included analyses to detect differences between men and women, but fewer of the NDAs explicitly included descriptions of both safety and efficacy analyses that compared women taking the drug with a comparison group of women taking a placebo or an alternative treatment. Analyses often detected sex differences. The sex differences that were detected were sometimes attributed to differences in body weight between men and women; none of the sex differences that were detected were judged to be clinically relevant, even when statistically significant. The NDA sponsors did not recommend different dosage levels for men and women based on the sex differences they detected. All of the NDAs in our sample included enough women in the pivotal trials to demonstrate statistically that the drug was effective in women; that is, the numbers of women in the treatment and comparison groups of the pivotal studies were sufficient to detect a statistically significant difference between the treatment and comparison groups, given the magnitude of symptom improvement experienced by the treatment group. However, one drug was approved for use in men even though the NDA reported that no men participated in the pivotal studies. We did not attempt to demonstrate statistically that the drugs in our sample were safe for women, because there are no absolute standards for the number of required study participants for assessing drug safety. Generally, the more patients that are exposed to a drug during its development, the more likely that significant adverse events will be detected. Safety determinations are largely based on adverse events reported for all participants in all studies. Since more women than men were included in clinical trials for the NDAs we examined, the adverse event data gathered for women were at least as extensive as the adverse event data gathered for men. 
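The sample-size reasoning described above — whether the treatment and comparison groups are large enough to detect a given magnitude of improvement — can be sketched with a standard power calculation. The sketch below uses the common normal-approximation formula for a two-sample comparison; the effect size, 5 percent two-sided significance level, and 80 percent power are illustrative assumptions, not values taken from the NDAs GAO reviewed.

```python
from math import ceil

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Approximate participants needed in each arm (treatment vs. comparison)
    to detect a standardized effect of the given size, using the
    normal-approximation formula n = 2 * ((z_alpha + z_beta) / d)^2.
    Defaults: z_alpha for a two-sided 5% significance level,
    z_beta for 80% power."""
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Illustrative only: detecting a moderate standardized improvement of 0.5
# would require about 63 women in each arm of a pivotal trial, while a
# small effect of 0.2 would require roughly 393 per arm.
print(n_per_group(0.5))  # → 63
print(n_per_group(0.2))  # → 393
```

The calculation illustrates why the number of women enrolled matters: halving the effect size roughly quadruples the enrollment needed to demonstrate efficacy statistically in women.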
A larger percentage of participants in clinical drug trials are women than we found in our 1992 analysis of trials performed between 1988 and 1992. Adjusting for differences in the classes of drugs included in the studies, we found that the percentage of women participants in small-scale efficacy and full-scale safety and efficacy trials increased from 44 percent in our 1992 study to 56 percent in the NDAs we examined. In the current study, summing across all the clinical trials for all of the NDAs we examined, 52 percent of the study participants were women, 39 percent were men, and 9 percent were not identified by sex. When participants’ sex was identified, women were the majority of participants for 58 percent of the NDAs. Women made up more than one-half of all the participants in small-scale efficacy and full-scale safety and efficacy trials. However, women were 22 percent of the participants in the initial, small-scale safety studies. One of the NDAs included no women in the early safety trials. These early safety studies are important because they measure how participants absorb, metabolize, and excrete a drug, and their findings are used to help set the dosage amounts for subsequent trials. NDAs usually contained sex-related analyses of safety and efficacy, regardless of whether the outcome data were presented in the summary documents as required by regulation (see table 1). Evidence of these analyses ranged from one-line summaries stating that there were no sex differences, to more complete, multi-page tables and descriptions of statistical methods and results. Specifically, most NDAs included analyses of safety and efficacy outcome data to detect differences between men and women in their responses to drugs. NDAs were less likely to include discussions of analyses of the safety and efficacy of drugs in women specifically by comparing women who received the drug and a comparison group of women. 
Fewer NDAs included analyses of pharmacokinetic data by sex, even though analysis of pharmacokinetic data is explicitly recommended in the 1993 guidance. We found that 42 percent of NDAs presented outcome data from these early pharmacokinetic studies for both men and women. Seventy-five percent of the NDAs we reviewed had some evidence of an analysis of pharmacokinetic data for sex differences. Many of the NDAs we reviewed reported differences in men and women's responses to drugs, but fewer reported these differences to be statistically significant (see table 2). For example, while one-half of the NDAs reported drug safety differences between men and women, less than one-fifth of the NDAs reported statistically significant sex differences in drug safety. We found no evidence that any of the sex differences reported in any NDA on any dimension—safety, efficacy, or pharmacokinetics—even when statistically significant, were judged to be clinically relevant by either the NDA sponsors or the FDA reviewers, and no dose adjustments based on sex were recommended. Some NDA sponsors also reported differences in either safety or efficacy between women receiving the drug and women in a comparison group (see table 3). About one-fifth of the NDAs reported statistically significant differences in safety between women taking the drug and a comparison group, and about one-half found statistically significant differences in efficacy. Apparent sex differences in pharmacokinetics, and sometimes safety and efficacy, may be due to differences in weight between the sexes instead of other biological differences. At a constant dosage, individuals who weigh less have a higher exposure to the drug than heavier individuals, and, on average, women weigh less than men. The potential for higher drug concentration or exposure can lead to an increased risk of adverse events for women. 
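The weight-exposure relationship just described is simple proportionality, sketched below. The 100 mg dose and the 60 kg and 80 kg body weights are hypothetical, chosen only to illustrate why a fixed dose yields higher per-kilogram exposure in lighter patients.

```python
def relative_exposure(dose_mg, weight_kg):
    """Weight-normalized dose (mg per kg of body weight): at a fixed
    dose, a lighter patient receives proportionally higher exposure."""
    return dose_mg / weight_kg

# Hypothetical fixed 100 mg dose: a 60 kg patient receives one-third
# more drug per kilogram of body weight than an 80 kg patient.
lighter = relative_exposure(100, 60)   # ~1.67 mg/kg
heavier = relative_exposure(100, 80)   # 1.25 mg/kg
print(lighter > heavier)  # → True
```

This is why an apparent sex difference in drug response can vanish once weight is taken into account: the underlying driver is body mass, not sex itself.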
In our sample of NDAs, 36 percent reported pharmacokinetic differences based on weight, whether or not sex differences were also reported. Twenty-five percent of NDAs reported apparent sex differences in drug response between men and women that were attributed to weight, not sex. In these cases, the sponsors reported sex differences in drug response but then noted that the differences disappeared when weight was taken into account. In all of these cases of weight-related differences in men and women’s responses to drugs, the sponsors asserted that no dose adjustments were necessary based on sex. For two intravenously administered drugs and one injectable drug the NDA did indicate dose adjustments based on weight for all patients. FDA has not effectively overseen the presentation and analysis of data related to sex differences in drug development. There is no management system in place to record and track the inclusion of women in clinical drug trials or to monitor compliance with relevant regulations, so FDA is unaware that many NDA submissions fail to meet requirements. The agency also does not routinely review the required tabulation of demographic data by sex in the IND annual reports for drugs in development. Finally, FDA’s medical officers have not been required to discuss sex differences in their reviews, and we found that their reviews frequently did not address the results of sex-related analyses conducted by NDA sponsors. Until recently, FDA has also lacked procedures to determine whether the reviews of its medical officers adequately discuss sex differences. We did not find, nor did we look for, any evidence that FDA’s reviews of the NDAs we examined had negative public health consequences. Such an examination was beyond the scope of this study. Recently, FDA has taken steps to pilot test several initiatives to address these management needs. 
FDA does not know how many women are included in clinical trials for each NDA or if NDA summary documents comply with the data presentation requirements of the 1998 regulation. There has been no systematic attempt by FDA to routinely collect and organize data on the inclusion of women in clinical trials. Although FDA officials told us that they believe that regulatory requirements are being met, FDA has no system in place to provide information that would support that assertion. The agency has not routinely tracked the required presentation of safety and efficacy data from women participating in clinical trials for the drugs it reviews. FDA does not routinely review the required presentation of data about the sex of study participants in the IND annual reports. As we noted earlier, 39 percent of the required IND annual reports did not include the tabulation of demographic information about study participants mandated by the 1998 regulation. We found no evidence that FDA follows up with sponsors that have not submitted annual reports—about 15 percent in our sample. A senior FDA official told us that the agency does not rely upon the information in these reports to monitor pre-NDA drug testing. According to this official, the agency instead uses other reports submitted by the sponsors for which there are no regulatory requirements to tabulate clinical trial participants by sex. FDA’s Medical Officer Reviews are important documents that detail FDA’s evaluation of the safety and efficacy of new drugs. We found that FDA’s medical officers have not been required to address sex differences in their reviews, and many of the medical officers’ reviews we examined did not address the sex-related data and analyses included in the NDAs (see table 4). For example, FDA’s medical officers did not discuss in their written reviews why reported differences between men and women in their responses to drugs did not require dose adjustments. 
In some cases, apparent contradictions in the NDAs about the role of sex or weight within the text of a drug application were not addressed. Since December 2000, FDA has pursued several initiatives that directly address areas of concern related to the review of sex differences. First, to help track the number of women in clinical trials and to monitor the compliance of NDAs with data reporting regulations, FDA began pilot testing a worksheet for reviewers to capture demographic information about the participants in large-scale efficacy trials. Instructions for the worksheet that will allow it to be used by all of FDA's reviewers are being developed. Second, to help ensure that its medical officers address sex differences, FDA began pilot testing a standardized template for Medical Officer Reviews. The template instructs medical officers to discuss sex-related issues in a standard format in all of their reviews. Third, an electronic training package was recently implemented to provide information to FDA's medical reviewers on the guidance and regulations applicable to the review of sex-related data and analyses included in NDAs. However, FDA does not require reviewers to use the training package. We found that women were a majority of the clinical trial participants in the NDAs we examined and that every NDA included enough women in the pivotal studies to be able to demonstrate statistically that the drug is effective in women. While these findings are welcome, we also found three areas of concern. The first is the relatively small proportion of women in early small-scale safety studies. These early studies provide important information on a drug's toxicity and safe dosing levels for later stages of clinical development, and many of the NDAs we examined found significant sex differences in a drug's pharmacokinetics, or how it is absorbed, distributed, metabolized, excreted, and concentrated in the bloodstream. 
Second, we are not confident that either NDA sponsors or FDA’s reviewers took full advantage of the available data to learn more about the effects of the drug in women and to explore potential sex differences in dosing. This is because NDA summary documents are not required to include analyses of sex differences, and some of them do not. Similarly, FDA’s medical officers have not been required to discuss sex differences in their reviews, and many of the reviews we examined did not include complete discussions of potential sex differences. Third, FDA does not now have appropriate management systems to monitor how many women are in clinical trials, to be assured that NDAs and IND annual reports are in compliance with pertinent regulations for presenting outcome data by sex and tabulating the number of women included in ongoing trials, or to confirm that its medical officers have adequately addressed sex-related issues in their reviews. While FDA has taken some promising initial steps to address these deficiencies, it is important that the agency finalize the pilot programs it has underway and give sustained attention to these management issues. We recommend that FDA adopt management tools that will ensure drug sponsors’ compliance with current regulations regarding the presentation of data by sex and that its reviewers consistently and systematically discuss sex differences in their written reviews of NDAs. Specifically, we recommend that the Acting Principal Deputy Commissioner of FDA: Promptly implement management tools, such as the proposed demographic worksheet and the standardized template for Medical Officer Reviews, that will allow the agency to determine whether NDAs and IND annual reports are in compliance with regulations that mandate the presentation of available safety and efficacy outcome data for women in NDAs and the tabulation of study participants by sex in IND annual reports. 
Fully implement the proposed template for Medical Officer Reviews or take other actions to ensure that FDA’s medical officers consistently and systematically consider and discuss sex differences in their written reviews of NDAs. We received written comments from FDA on a draft of this report (see appendix III). FDA generally agreed with our findings. FDA did not comment on our recommendations, but outlined additional steps it may take to monitor the inclusion of women in clinical trials. FDA questioned our description of comparisons between men and women, and comparisons between women taking the drug and a comparison group of women, as two distinct types of analyses. FDA pointed out that an analysis of sex differences implies that an analysis of the drug’s efficacy in women has been completed because an analysis of sex differences is a comparison of the drug’s efficacy in men and women. We have clarified the text, but we continue to present information about both analyses in order to accurately reflect the contents of the NDA summary documents we reviewed. Finally, FDA pointed out that its efforts to improve its management in this area have been underway for some time. In response, we modified our description of FDA’s activities. FDA also made additional technical comments that we have incorporated where appropriate. As we arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issue date. At that time, we will send copies of this report to the Acting Principal Deputy Commissioner of FDA and to others who request them. If you or your staff have any questions, please contact me at (202) 512-7119. Another contact and major contributors to this report are listed in appendix IV. 
Our work addressed four questions: (1) what FDA regulations govern the inclusion of women in clinical drug trials; (2) are the regulations being followed; (3) are appropriate numbers of women included in clinical drug trials to ensure the safety and efficacy of drugs for women; and (4) how does FDA oversee the collection, presentation, and analysis of data related to sex differences? Our work did not include an examination of postmarketing adverse events or negative public health consequences. To assess FDA’s oversight of the collection, presentation, and analysis of data related to sex, we reviewed the FDA Medical Officer Reviews for all sampled NDAs. We also interviewed officials in FDA’s Center for Drug Evaluation and Research, the Office of Special Health Issues, and the Office of Women’s Health. In addition, we interviewed officials from drug companies and an industry trade association. To gain background knowledge on the issues related to our work, we spoke with women’s health advocates and consulted pharmacology experts. We conducted a literature review that included relevant FDA guidance and regulations, FDA and IOM reports, medical journal articles, prescription drug labels, and consumer advocacy publications. Because FDA maintains no central source of data on the inclusion of women in clinical drug trials, we sampled NDAs for new molecular entities (NME) submitted to FDA from August 10, 1998 through December 31, 2000. Of the 82 original NDAs for NMEs submitted to FDA during this period, we examined all 36 that were either approved for marketing or judged approvable by FDA by December 31, 2000, and that met our other selection criteria. We narrowed our focus to only approved and approvable NDAs because these drugs are the most likely to reach the public. We excluded diagnostic drugs used in medical imaging, drugs for sex-specific conditions, pediatric drugs, and drugs that were not approved for use in both men and women. 
We also did not examine biologic products, such as vaccines. As a result of our sampling criteria, the clinical drug trials for some drug classes that have been cited by experts as including insufficient numbers of women were not well represented. For example, our sample included only one cardiovascular drug. We requested from FDA and reviewed critical summary documents for each NDA, including the Integrated Summary of Safety, the Integrated Summary of Efficacy, the Pharmacokinetics and Bioavailability Summary, and the FDA Medical Officer Review. We obtained and reviewed other NDA documents only when the summary documents referred to relevant information. We reviewed the NDA summary documents because the 1998 regulations specifically require NDA sponsors to present data about drug safety and efficacy in the Integrated Summary of Safety and the Integrated Summary of Efficacy and because we were unable to review all of the documents in each NDA (an entire NDA can contain as many as 250 volumes). Our findings speak only to what was included in the summary documents or in the supplemental documents we examined; we did not systematically review other relevant data, such as data in clinical pharmacology reviews, that may have been presented in NDA volumes other than the critical summary documents. In our reviews of the critical summary documents we collected data on (1) the presentation of outcome data by sex, (2) the number of women participating in clinical drug trials by drug development stage, (3) the frequency and extent of sex-related analyses, (4) the detection of sex-related differences in drug response and their statistical significance, and (5) the relationship between body weight and sex-related differences. The decision rules we used to code the NDAs are presented in table 5. In general, we coded the information we sought as present if there was any mention of it in the summary documents. 
To determine if IND annual reports filed with FDA met the regulatory requirement for tabulating the sex of enrolled study participants, we randomly selected a sample of 100 IND applications that met our inclusion criteria from FDA’s November 2000 listing of active commercial IND applications. That listing included a total of 3,636 IND applications. According to FDA’s management information system, 15 of the IND applications in our sample had been withdrawn and were not active, and sponsors for 9 of the IND applications were not required to submit annual reports because they had not been active for a long enough period. We also found that FDA could not find one of the annual reports (see table 6). Because we randomly selected the IND annual reports we examined, our findings are generalizable to the entire set of IND annual reports. However, because of the small size of our sample, our estimate of the proportion of annual reports not fulfilling regulatory requirements is not precise. In our review of the remaining 75 IND annual reports, the reports were considered to have met regulatory requirements if the numbers of enrolled participants were reported by sex for at least one of the reported studies. The regulation requires “tabulation” of the data; for purposes of our review we considered any presentation of the demographic data to meet the IND regulatory requirements. We weighted the percentage of women by drug class to compare the percentage of women in clinical drug trials from our sample to that of our 1992 study. In weighting the percentage of women in our study by the percentage of participants in trials for each drug class used in the 1992 study, we were able to control for differences in the types of drugs sampled and compare the two studies as if our sample included the same drugs. 
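As a rough sketch, the reweighting described above amounts to a weighted average of each drug class’s percentage of women, using each class’s share of trial participants in the 1992 study as the weights. The drug classes and figures below are illustrative only, not the actual study data.

```python
def weighted_pct_women(pct_women_by_class, class_shares_1992):
    """Weight each drug class's percentage of women by that class's
    share of trial participants in the 1992 study, so the two samples
    can be compared as if they covered the same mix of drugs."""
    total = sum(class_shares_1992.values())
    return sum(pct_women_by_class[c] * share
               for c, share in class_shares_1992.items()) / total

# Illustrative figures only -- not the actual GAO data.
pct_women = {"cancer": 48.0, "anti-infective": 55.0, "cns": 60.0}
shares_1992 = {"cancer": 0.07, "anti-infective": 0.53, "cns": 0.40}
print(round(weighted_pct_women(pct_women, shares_1992), 1))
```

With these hypothetical inputs, the weighted percentage reflects the 1992 mix of drug classes rather than the current sample’s mix, which is what makes the two studies comparable.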
For example, participants in cancer drug trials made up 7 percent of all participants in the 1992 study of clinical drug trials but only 5 percent of the participants in the small-scale efficacy and full-scale safety and efficacy clinical trials we examined in this study. By weighting our sample so that 7 percent of the study participants we found were in trials for cancer drugs, for example, we can fairly compare the percentages of women participating in clinical drug trials from our 1992 study to those from this study. In reviewing the 36 NDAs, we also collected information to determine whether enough women were tested in the clinical drug trials to detect sex differences. Standards for participation of women in clinical drug trials have included nominal thresholds for women’s participation (e.g., in our 1992 report, we regarded NDAs that tested 250 or more women as having enough women) and the representation of the sexes in numbers that are proportional to those in the population for whom a drug is intended. For this study, we adopted the perspective that the clinical trials should include a large enough number of women to demonstrate the safety and efficacy of the drug for women. To determine if enough women were tested in clinical drug trials to demonstrate the drugs’ efficacy in women, we generally conducted a power analysis using the number of participants in, and outcome data from, pivotal trials. NDAs that reported a statistically significant improvement in women taking the drug compared to women in a control group clearly had enough women in the pivotal trials to meet this criterion. 
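The power check introduced above can be sketched as a two-proportion z-test: a critical ratio computed from the improvement rates and the numbers of women in the treatment and comparison groups of the pivotal trials. The response rates and group sizes below are hypothetical, not drawn from any NDA in the sample.

```python
import math

def critical_ratio(p_treat, n_treat, p_comp, n_comp):
    """Critical ratio (two-proportion z-test) for the difference in
    improvement rates between women taking the drug and women in the
    comparison group, using the pooled proportion for the standard error."""
    p_pool = (p_treat * n_treat + p_comp * n_comp) / (n_treat + n_comp)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_comp))
    return (p_treat - p_comp) / se

# Hypothetical pivotal-trial figures: 60 percent of 200 women improved
# on the drug versus 45 percent of 200 women in the comparison group.
z = critical_ratio(0.60, 200, 0.45, 200)
print(round(z, 2))  # a ratio above about 1.96 corresponds to p <= .05
```

Here the critical ratio is roughly 3.0, well above the 1.96 threshold for a two-tailed test at the .05 level, so a trial of this hypothetical size would include enough women to demonstrate a statistically significant effect.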
For NDAs that did not report this analysis, we took the largest effect size presented in the Integrated Summary of Efficacy (that is, the largest percentage improvement for those taking the drug), the total number of women participating in the treatment group for all of the pivotal trials, and the total number of women participating in the comparison group for all of the pivotal trials. We then calculated the critical ratio and significance level for that effect size and that number of cases. We found that all of the NDAs we examined in this way had enough women in the pivotal trials to demonstrate that the drug had a statistically significant effect. We followed the convention that statistical tests with a probability level less than or equal to .05 are regarded as statistically significant. We conducted our work from July 2000 through May 2001 in accordance with generally accepted government auditing standards. We were able to estimate the number of men and women who participated in the clinical drug trials for the 36 NDAs in our sample by reviewing the NDA summary documents and FDA Medical Officer Reviews. Table 7 presents the estimated percentage of men and women who participated in the clinical drug trials by drug development stage. Table 8 presents the estimated number of men and women who participated in the pivotal clinical drug trials overall, and, where available, in the treatment and comparison groups of the pivotal trials. The data in both tables are grouped according to drug class. For some NDAs, the sex of some or all of the participants was not specified by clinical drug development stage or treatment group. Lisanne Bradley, Emily J. Rowe, Robert M. Copeland, Lawrence S. Solomon, Anh Bui, and Jenny C. Chen also made major contributions to this report. Drug Safety: Most Drugs Withdrawn in Recent Years Had Greater Health Risks for Women (GAO-01-286R, January 23, 2001). 
Women’s Health: NIH Has Increased Its Efforts to Include Women in Research (GAO/HEHS-00-96, May 2, 2000). Women’s Health: FDA Needs to Ensure More Study of Gender Differences in Prescription Drug Testing (GAO/HRD-93-17, October 29, 1993). National Institutes of Health: Problems in Implementing Policy on Women in Study Populations (GAO/T-HRD-90-50, July 24, 1990).
This report reviews the Food and Drug Administration's (FDA) inclusion of women in clinical drug trials. GAO found that women were a majority of the clinical trial participants in the new drug applications (NDA) it examined and that every NDA included enough women in the pivotal studies to be able to statistically demonstrate that the drug is effective in women. Although these findings are welcome, GAO also found three areas of concern. The first is the relatively small proportion of women in early small-scale safety studies. These early studies provide important information on drugs' toxicity and safe dosing levels for later stages of clinical development, and many of the NDAs GAO examined found significant sex differences in a drug's pharmacokinetics, or how it is absorbed, distributed, metabolized, excreted, and concentrated in the bloodstream. Second, GAO is not confident that either NDA sponsors or FDA's reviewers took full advantage of the available data to learn more about the effects of the drug in women and to explore potential sex differences in dosing. This is because NDA summary documents are not required to include analyses of sex differences, and many of them do not. Third, FDA lacks appropriate management systems to monitor how many women are in clinical trials, to be certain that NDAs and investigational new drug applications (IND) annual reports comply with regulations for presenting outcome data by sex and tabulating the number of women included in ongoing trials, and to confirm that its medical officers have adequately addressed sex-related issues in their reviews. Although FDA has taken some promising initial steps to address these deficiencies, it is important that the agency finalize the pilot programs it has underway and give sustained attention to these management issues.
To ensure that the Department of Defense (DOD) has an adequate number of military personnel in place to meet U.S. national security objectives, the services continuously conduct recruiting efforts. The four services have nearly 12,000 recruiters at 5,500 recruiting stations throughout the United States and overseas. Each of the services has its own process for selecting, training, and rewarding its recruiters. The Air Force is the only service with a recruiter force composed entirely of volunteers. Recruiters are generally assigned monthly goals of the number of people to enlist to help meet their services’ annual recruiting goals. Recruiters are responsible for selling the benefits of military service to various audiences, including possible recruits, their parents, and teachers, and then prescreening applicants, according to established criteria, to determine whether the applicants should continue through the enlistment process. Those who pass the prescreening process are sent to 1 of 65 military entrance processing stations (MEPS) located throughout the United States. At a MEPS, applicants take a battery of tests and receive a medical examination to determine their eligibility for military service. Applicants who qualify for service sign their first contract, take their first enlistment oath as members of the Individual Ready Reserve, and enter the delayed entry program (DEP), in an unpaid status, for up to 1 year while awaiting assignment to basic training. While in the DEP, recruits have time to prepare mentally and physically for basic training. Recruiters are responsible for maintaining contact with recruits in the DEP and providing them with information and instruction that will help them successfully move from civilian to military life. Each service has its own basic training program, and the duration of the four programs ranges from 6 to 12 weeks. Before leaving for basic training, recruits return to the MEPS for final processing. 
At that time, the recruits undergo another medical examination, sign their second contract, and take their second enlistment oath as active duty servicemembers. After basic training, most recruits attend technical training for a few weeks to more than 1 year before reporting to their first assignment. Most initial enlistments last 4 years, including the time spent in training. The services recruit more than 167,000 men and women each year. Between fiscal years 1987 and 1996, DOD sent almost 2.2 million first-time recruits to basic training, which enabled all four services to meet or exceed their annual recruiting goals during that time. The Army enlisted about 38 percent of these recruits, the Navy 31 percent, the Air Force 16 percent, and the Marine Corps 15 percent. The Marine Corps replaces the greatest portion of its enlisted forces each year—typically close to 20 percent. The Air Force has the smallest yearly personnel changes; new recruits generally constitute less than 10 percent of its total enlisted force. While the number of new enlistees generally declined between 1987 and 1996 due to the drawdown of forces, the percentage of traditional high school diploma graduates remained fairly steady at about 94 percent. About one-third of the personnel recruited since fiscal year 1987, or more than 700,000 personnel, left military service after reporting to basic training but before completing their initial service obligations. Over this same period, approximately 9 percent, or about 200,000 personnel, left within the first 90 days of service. In addition, recent service data show that between 13 and 21 percent of recruits in the DEP dropped out of the military even before they left for basic training. These high attrition rates mean that recruiters must now enlist two people to fill one service obligation. 
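The two-for-one figure above follows from simple arithmetic: combining a DEP dropout rate with the share of accessions who leave before completing their obligations gives the number of enlistment contracts needed per completed service obligation. The sketch below uses the upper ends of the rates cited above; it is an illustration, not service data.

```python
def contracts_per_completion(dep_dropout, inservice_attrition):
    """Enlistment contracts needed for one completed service obligation,
    given the DEP dropout rate and the share of recruits who leave after
    reporting to basic training but before completing their obligation."""
    completion_rate = (1 - dep_dropout) * (1 - inservice_attrition)
    return 1 / completion_rate

# Upper ends of the figures cited in the text: 21 percent DEP dropout
# and roughly one-third attrition after recruits report to basic training.
print(round(contracts_per_completion(0.21, 1 / 3), 1))
```

With these rates, about 1.9 signed contracts are needed per completed obligation, consistent with the roughly two-for-one ratio cited in the text.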
Recruiting and retaining well-qualified military personnel is among the goals included in DOD’s strategic plan required under the Government Performance and Results Act of 1993 (P.L. 103-62, Aug. 3, 1993). The act was designed to create a new results-oriented federal management and decision-making approach that requires agencies to clearly define their missions, set goals, and link activities and resources to those goals. The act required that federal agencies’ strategic plans be developed no later than September 30, 1997, for at least a 5-year period. In response to concerns of the Chairman and former Ranking Minority Member, Subcommittee on Personnel, Senate Committee on Armed Services, about the cost of recruiting and training personnel who do not complete their initial military obligations, we reviewed the services’ recruiting efforts to (1) screen, select, and train recruiters; (2) screen, select, and prepare recruits for basic training; and (3) measure and reward recruiter performance. Specifically, we identified practices in each service that enhance recruiter performance and recruit retention and could be expanded to other services. We are also providing DOD and service information related to the costs of recruiting and training new servicemembers and our analysis of the difficulties associated with estimating the costs of attrition (see app. I). We limited the scope of our review to the role that recruiters might play in reducing attrition. We recognize that many other factors can contribute to attrition, such as medical, security, or other screenings performed by individuals or agencies outside the recruiting commands. However, we did not examine the adequacy of these factors. Also, we did not evaluate the role of basic training policies and personnel in reducing attrition. To address these objectives, we met with representatives from service recruiting commands, recruiter teams, and service recruiter schools. 
We also reviewed applicable instructions, regulations, policy statements, and recruiter school curriculums and observed 50 recruiter screening interviews. In addition, we discussed selection and training procedures with 35 experienced recruiters at various U.S. locations. We also spoke with the recruiters about the role they play in screening applicants for enlistment and preparing them for basic training. Finally, the recruiters provided us with their perspectives on the services’ recruiter award and incentive systems. The 35 recruiters did not constitute a representative sample of all recruiters, but they did provide broad perspectives based on more than 280 years of collective recruiting experience in 21 different states. To corroborate their statements, we compared the information they provided us with the results of DOD’s 1996 recruiter survey, which was based on a representative sample of recruiters. We also reviewed past accession and attrition studies done by audit agencies and private firms and collected and analyzed accession and attrition data from each of the services and the Defense Manpower Data Center to determine recruiting and retention trends. Although we did not extensively test the reliability of the Center’s database, we did check computations of attrition percentages from accession and attrition statistics. We also compared Center data with information in the services’ databases. Because personnel numbers can change daily and the service data we used was not compiled on the same day as the Center’s data, we did not attempt to match these numbers. However, our data analysis revealed the same trends between service-generated data and Center data, and we did not find any large discrepancies between the databases. Finally, we discussed our data with recruiting command officials to ensure that no large discrepancies existed. 
We performed our work at the following locations: Directorate for Accession Policy, Office of the Assistant Secretary of Defense, Force Management Policy, Washington, D.C.; Army Recruiting Command, Fort Knox, Kentucky; Army Recruiting and Retention School, Fort Jackson, South Carolina; Air Force Recruiting Service, Randolph Air Force Base, San Antonio, Texas; Air Force Technical Training School, Recruiter Training Flight, Lackland Air Force Base, San Antonio, Texas; Navy Recruiting Command, Arlington, Virginia; Naval Recruiter School, Pensacola, Florida; Marine Corps Recruiting Command, Arlington, Virginia; and Marine Corps Recruiter School, Marine Corps Recruit Depot, San Diego, California. We conducted our review between January and December 1997 in accordance with generally accepted government auditing standards. The services use a variety of screening methods, such as reviewing annual performance appraisals and obtaining commanding officer recommendations, to ensure that personnel who are assigned to recruiting duty are chosen from among the best noncommissioned officers in their respective career fields. However, not all of these screening methods ensure that personnel selected for recruiting duty possess the communication and interpersonal skills necessary to be successful recruiters. The Air Force is the only service that critically evaluates communication skills as part of the recruiter screening process. It is also the only service that uses a personality assessment test during its recruiter screening. Personnel selected for recruiting duty in all of the services receive practical training in communication skills, sales techniques, and enlistment and paperwork requirements. This training supports a direct link between recruiter daily performance and DOD’s strategic goal of recruiting well-qualified military personnel. 
However, only the Marine Corps and the Navy recruiter schools have curriculums that are directly linked with DOD’s goal of retaining these personnel. Because recruiters represent the military services in civilian communities, they must meet high selection standards. These standards ensure that recruiters are selected from among the best noncommissioned officers in the military, but they do not necessarily identify those who possess or can develop the communication and interpersonal skills needed to become successful recruiters. Only the Air Force’s screening process critically evaluates servicemembers’ communication skills and uses assessment tests to predict the likelihood of their success as recruiters. Although actual screening standards vary by service, the recruiting commands generally use interviews and medical and personnel records to screen and select personnel for recruiting duty. The services generally draw their recruiters from noncommissioned officers in paygrades E-5 through E-7. During the screening process, the services use different but measurable criteria to evaluate a prospective recruiter’s education, health, moral character, emotional and financial stability, personal appearance, and job performance. Failure to meet any of these standards can disqualify a person from recruiting duty. The services also have minimum and maximum pay grade and time-in-service requirements, and those selected for recruiting duty are generally required to reenlist if they do not have at least 3 years remaining on their current enlistment. Finally, personnel with performance marks below a certain level are not eligible for recruiting duty. For example, Navy regulations disqualify any servicemember who has received an overall evaluation below 3.8 or individual marks below 3.6 (on a 4.0 rating scale) during the previous 3 years. 
Successful recruiters must be able to effectively communicate with a variety of people in the civilian community and convince them of the benefits of military service. These people include not only potential recruits but also parents, teachers, guidance counselors, coaches, school administrators, and others who may influence potential recruits. However, we found that only the Air Force’s screening process has measurable criteria to evaluate the communication and interpersonal skills of prospective recruiters. It is important to measure these skills because noncommissioned officers can excel in many military job specialties without possessing the ability to effectively interact with the general civilian population. The Air Force is the only service to require that recruiting command officials interview all prospective recruiters. Most Air Force interviews (about 70 percent) are conducted by a team of experienced recruiters who travel to U.S. and overseas bases. The team makes general presentations about recruiting duty and then conducts interviews with individuals who are interested in becoming recruiters. According to a team member, interviews generally last between 30 and 45 minutes, and spouses are required to be present. A prospective recruiter’s ability to communicate with the team is a key factor in determining whether the person will be selected. Prospective recruiters who lack communication skills can be rejected even if they meet all the pay grade, time-in-service, legal, financial, appearance, and performance requirements. The remaining interviews (30 percent) are for personnel who were not available or interested in recruiting at the time of the recruiter team’s visit. These candidates are interviewed by a high-level recruiting command official in their geographic area. 
The Marine Corps also has a recruiter screening team that travels to bases to present an overview of recruiting duty and interview people who have volunteered for recruiting duty or have been identified by the recruiting command as possible recruiters. However, a prospective recruiter’s ability to communicate with the screening team is not critically evaluated during these screening interviews, which typically last 5 to 10 minutes. Spouses are encouraged, but not required, to attend the interviews. Most Marine recruiters are screened by the team, but those who are unable to attend an interview with the screening team can be selected for recruiting duty based on a check of their records and an interview with their commanding officer. Marines who are selected for recruiting duty undergo a second, more in-depth screening interview when they arrive at the Marine Corps recruiter school in San Diego. The Army’s recruiter team interviews a much smaller percentage of the soldiers who have volunteered or are identified as prospective recruiters than the Air Force and the Marine Corps recruiter selection teams. Prospective Army recruiters can be interviewed by high-level officials within their chain of command who may, but most likely do not, have recruiting experience. These officials use a general checklist in deciding whether to recommend a person for recruiting duty. The checklist has measurable criteria for some items. For example, prospective recruiters must be a sergeant, a staff sergeant, or a sergeant first class and must have between 4 and 14 years of time in service. They must also be high school graduates or have 1 year of college and a high school equivalency degree, and they cannot have been convicted of a crime by a civilian court or military court-martial. However, the checklist does not have any measurable standards regarding the prospective recruiters’ communication or interpersonal skills. 
Volunteers and other prospective Navy recruiters are interviewed by their commanding officers to determine whether they meet established standards. The commanding officers do not evaluate the prospective recruiters’ ability to communicate effectively in determining whether to endorse a person for recruiting duty. Navy officials told us that they think recruiting command personnel are in a better position to evaluate a person’s chances of being a successful recruiter. Therefore, the Navy is beginning to change its recruiter selection procedures to more closely resemble those of the Air Force. These officials said that the Navy hopes to have a traveling recruiter selection team in place in the near future. In its response to a draft of this report, DOD stated that the Navy has, in fact, assembled a recruiting team consisting of four career recruiters who will be augmented by field recruiters. In 1996, noting the results of studies of private salespeople, the Air Force began investigating the possibility of using a personality assessment test in screening potential recruiters. After administering a commercially developed biographical screening test to 1,171 recruiters, the Air Force found that recruiters with certain traits were much more likely to succeed than recruiters who lacked those traits. These traits, in order of importance, were assertiveness, empathy, self-regard (awareness of strengths and weaknesses), problem-solving ability, happiness and optimism, interpersonal relations, emotional self-awareness (ability to recognize one’s feelings), and reality testing (ability to distinguish between what you see and what is). The study also found that high performers worked the least number of hours and reported higher marital satisfaction and that neither the recruiter’s geographic region nor zone was a factor in predicting recruiter success. 
In August 1997, the Air Force purchased the 133-question biographical screening test for less than the cost of putting one recruiter in the field. In November 1997, the Air Force’s recruiter screening team began administering this test to prospective recruiters. All of the services use the armed services vocational aptitude battery of tests to measure servicemembers’ aptitude for initial job placement, yet none of the services uses this battery of tests to evaluate a person’s aptitude for recruiting. In its response to a draft of this report, DOD stated that the Navy is planning to test the use of an instrument that is similar to the Air Force test. Personnel selected for recruiting duty report to training sites where their suitability for recruiting duty continues to be evaluated. To become fully qualified, all recruiters undergo formal classroom training that lasts between 5 and 7 weeks and on-the-job training that can last up to 1 year. The Air Force and the Marine Corps are not only more selective than the other two services in the recruiters they send to school but also in the recruiters they allow to graduate from school. The Air Force recruiter school has an attrition rate of 17 percent, despite having all volunteer recruiters who have passed the most detailed pretraining screening process of the four services. Attrition rates at the Marine Corps recruiter school typically run between 14 and 16 percent. The Navy recruiter school has an attrition rate of approximately 6 percent, and the Army recruiter school attrition rate was under 5 percent during fiscal year 1997. Air Force recruiters are more than twice as productive as recruiters from the other services. On average, each Air Force recruiter sends at least 32 recruits to basic training each year, whereas recruiters for the other services send between 12 and 16 recruits to basic training annually. 
Officials from all the services acknowledged that part of this difference is due to the fact that the Air Force is “the service of choice,” receiving the most walk-in applicants and having the lowest turnover rate of the services. However, the Commanding General of the Air Force Recruiting Service attributes a large part of this success to the Air Force’s intensive recruiter screening process. Also, Air Force recruiters are the most successful in terms of meeting their assigned goals. Although Air Force recruiters have the highest individual recruiting goals, DOD’s 1996 recruiter survey showed that 62 percent of them reported making their assigned monthly goals 9 or more times during the previous year, compared with a DOD average of 42 percent. Lower turnover rates may also contribute to the success rate of Air Force recruiters. Air Force recruiters typically serve 4-year tours, whereas recruiters in the other services normally serve 3-year tours. Various studies have found that recruiter productivity increases after an initial learning period in the field, suggesting that the positive effects of experience can be realized as early as the 4-month point or as late as the 2-year point. Regardless of the length of the learning curve, the Air Force achieves some efficiency from the increased experience and lower turnover rates of its recruiters. The services’ recruiter schools support a direct link between recruiter daily performance and DOD’s strategic goal to recruit well-qualified military personnel. The curriculums consist of instruction, practical exercises, and examinations in communication and sales techniques as well as enlistment and paperwork requirements. However, only the Marine Corps recruiter school spends a significant amount of time teaching recruiters about preventing attrition, thus supporting DOD’s strategic goal to retain well-qualified personnel.
The Marine Corps recruiter school, located at the Marine Corps Recruit Depot in San Diego, supports DOD’s strategic retention goal by teaching recruiters that they have an important role in reducing attrition that occurs before the end of the first enlistment contract. Communication and leadership are viewed as the keys to reducing attrition. The curriculum devotes more than a full week, out of 7, to these issues: 2-1/2 days to communication and basic training issues and 3-1/2 days to leadership training. Students at recruiter training discuss attrition issues with basic training drill instructors, recruits who are separating from basic training, and recruits who are being held back in basic training because they cannot meet the physical fitness requirements. Marine Corps officials believe this interaction with drill instructors helps to open the lines of communication between drill instructors and recruiters after the recruiters graduate. The interaction with recruits helps the recruiters to realize that they not only need to recruit people but that they also need to prepare them for basic training and maintain contact with them while they are at basic training. A large portion of the Marine Corps school’s leadership training focuses on the effect that DEP leadership can have on reducing attrition. One lesson begins with a classroom demonstration in which all of the students are initially standing. Then, about 19 percent of the students are told to sit down to represent DEP discharges. Next, another 12 percent are instructed to sit down to represent basic training attrition. Finally, another 25 percent of the class is told to sit down to represent the rest of the first-term attrition. This lesson vividly illustrates to the students that less than one of every two recruits actually completes the first full term of obligated service. 
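The arithmetic behind this classroom demonstration can be sketched as follows. This is an illustrative calculation only, assuming each of the quoted percentages is taken against the original class size, as the sequence of instructions to sit down suggests:

```python
def remaining_after_attrition(class_size, losses_pct):
    """Return how many of the original class are still standing after
    subtracting each attrition slice (as a percent of the original class)."""
    remaining = class_size
    for pct in losses_pct:
        remaining -= class_size * pct / 100
    return remaining

# DEP discharges, basic training attrition, and remaining first-term attrition
slices = [19, 12, 25]
left = remaining_after_attrition(100, slices)
print(f"{left:.0f} of every 100 recruits complete the first term")
# prints: 44 of every 100 recruits complete the first term
```

With 44 of 100 remaining, the demonstration's point holds: fewer than one of every two recruits completes the first full term of obligated service.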
Afterward, the instructor explains that recruiters have to make up every one of the discharges and emphasizes the four goals of the Marine Corps’ national DEP: to reduce DEP attrition, reduce basic training attrition, positively impact other first-term attrition, and deliver better motivated Marines to the Fleet Marine Force. Marine Corps recruiters are taught that they must sell their enlistees on the features and benefits of DEP, just as they sold them initially on the Marine Corps. All Marine Corps recruiters are required to write to their recruits and the recruits’ families while the recruits are in basic training. One Marine Corps recruiter told us that he was required to send three letters to each recruit in basic training and that none of the letters was allowed to be a form letter. According to Marine Corps recruiters, drill instructors often call recruiters to warn them if one of their recruits is having trouble at basic training. To prevent attrition, the recruiters can then talk to their recruits on the telephone and remind them of the reasons that they joined the Marine Corps. The recruiters said basic training attrition would probably be much higher if they were not given early warnings of trouble and allowed to resell their recruits on the benefits of serving in the Marine Corps. According to a Marine Corps document, the percentage of recruit training graduates is indicative of the efforts that have taken place from contract to accession. It demonstrates quality prospecting and screening, sound sales practices, and an effective DEP. Although the Army’s recruiter school is located at Fort Jackson, South Carolina, which is also the site of one of its basic training programs, the curriculum does not include any interaction between future recruiters and recruits or drill instructors at basic training. Likewise, the Air Force’s recruiter school is colocated with its basic training squadrons at Lackland Air Force Base in San Antonio, Texas. 
However, the curriculum does not include discussions between the students and drill instructors or new recruits, except during a 1-hour tour of the basic training facilities. Students at the Navy’s recruiter school do not have any interaction with drill instructors or recruits because the recruiter school is located in Florida and the basic training site is in Illinois. However, the Navy recently began a 4-day refresher training course for its recruiters who have been in the field between 12 and 18 months. The refresher course is held at the basic training site in Illinois, and recruiters spend about one-half of their time observing and interacting with recruits and their families, drill instructors, and other training command personnel at basic training and graduation events. After all current recruiters have attended this training, the Navy plans to send new recruiters to the training after they have been in the field about 6 to 8 months. The Army, the Navy, and the Air Force do not have separate leadership modules in their recruiter school curriculums. Although they all include instruction in DEP management as part of their recruiter curriculums, this training is less extensive than the Marine Corps’ leadership training, lasting only 3 to 9 hours. In addition, these services do not emphasize the relationship between effective DEP management and DOD’s strategic retention goal. Army, Navy, and Air Force recruiters we spoke with said that drill instructors hardly ever call them to give an early warning that a recruit is having difficulties at basic training. The recruiters said they usually learn that a recruit is having problems only through the recruit’s family or when they see the recruit back in town after dropping out of basic training. In addition, some Air Force recruiters told us that they were prohibited from writing letters to recruits in basic training due to concerns that some recruits would receive more mail from their recruiters than others. 
By carefully selecting recruiters based on a demonstrated aptitude for recruiting, as well as excellent performance in another military specialty, the services should be able to increase the effectiveness of their recruiters. In addition, by training these recruiters to lead and motivate recruits in the DEP and requiring the recruiters to keep in touch with their recruits at basic training, the services could help to increase retention and the efficiency of their recruiting commands. For the services to meet DOD’s strategic goal of recruiting and retaining well-qualified military personnel, optimize recruiting command efficiency by identifying personnel who are likely to succeed as recruiters, and increase recruits’ chances of graduating from basic training, we recommend that the Secretary of Defense instruct the services to use experienced field recruiters to personally interview all prospective recruiters and evaluate their potential to effectively communicate with applicants, parents, teachers, and others in the civilian community; jointly explore the feasibility of developing or procuring assessment tests that can aid in the selection of recruiters; and instruct officials at the service recruiting schools to emphasize the retention portion of DOD’s long-term strategic goal by having drill instructors meet with students at the schools and having the recruiters in training meet with separating recruits and those being held back due to poor physical conditioning. These practices could establish an ongoing dialogue between recruiters and drill instructors and enhance understanding of problems that lead to early attrition. DOD partially concurred with our recommendation to use experienced field recruiters to interview all prospective recruiters. 
In its response, DOD agreed that the selection and training of the recruiter force is of vital importance and that our recommendation to use experienced recruiters to personally interview prospective recruiters is valid, where possible. However, DOD also stated that this recommendation is not economically feasible in the Army due to the large number of men and women who are selected annually for recruiting duty and to the geographic diversity of their assignments. While it may be difficult for the Army to use field recruiters to interview 100 percent of its prospective recruiters, we continue to believe that senior, experienced recruiters have a better understanding than operational commanders about what is required in recruiting duty. Therefore, we encourage the Army to place a greater emphasis on the use of recruiter selection teams or explore other alternatives that would produce similar results. In the case of the Marine Corps, DOD did not present any reasons to suggest that this service could not implement this recommendation. Instead, DOD referred to the additional screening that the Marine Corps conducts at its recruiter school and the Marine Corps’ belief that it does not place any recruiters on the street who are not properly screened. We discussed this additional screening and cited the relatively high attrition rate that this school experiences. However, we also presented some limitations in the Marine Corps’ current screening process and believe, therefore, that this service would also benefit from this recommendation. As previously stated in this report and in DOD’s comments, the Air Force already relies on recruiters, and the Navy is changing its recruiter selection procedures to more closely resemble those of the Air Force. DOD concurred with our recommendation to jointly explore the feasibility of developing or procuring assessment tests that can aid in the selection of recruiters. 
In its response, DOD said that the Office of the Assistant Secretary of Defense for Force Management Policy will work with the services to evaluate various assessment tests. DOD also concurred with our recommendation to establish better communication between the recruiting force and basic training drill instructors, adding that this recommendation is sound and viable. In its response, DOD stated that the Army is reviewing the recruiter school curriculum and will establish a linkage between the recruiter school and the recruiter liaison at the basic training site at Fort Jackson and that the Air Force has incorporated an in-depth tour of basic training into its recruiting school’s curriculum. DOD also cited the Navy’s refresher training for new recruiters, where recruiters have the opportunity to meet and interview recruits during the last week of basic training. Recruiters use standard criteria in screening applicants for military service, but physical fitness is not among the criteria. Thus, the services have no assurance that recruits will be able to pass their physical fitness tests in basic training. To help prepare recruits for basic training and reduce early attrition, the services are now encouraging recruits to maintain or improve their physical fitness while in the DEP. However, only the Marine Corps conducts regular physical fitness training for its recruits in the DEP and requires them to take a physical fitness test before reporting to basic training. The Marine Corps has found that attrition is lower among those who pass this test. Recruiters are only one part of the enlistment process. They play an important role in the process by applying criteria established by Congress, DOD, and the individual services during initial screening interviews to identify applicants who are preliminarily qualified for enlistment. However, physical fitness is not among the criteria.
Also, recruits may request a waiver if they do not meet one or more of the established criteria. Service personnel in several different organizations play a role in screening and selecting candidates for military service. The accuracy and thoroughness of the recruiter in screening for established criteria during the initial interview are critical to the efficiency of the entire recruit selection process. Failure to screen for all of the established criteria can allow unqualified candidates to continue needlessly through the selection process, wasting time and money on applicants who will likely be disqualified during further enlistment processing at a MEPS or discharged from service. The head of one service’s recruiting command told us that recruiters should be selective in their initial screenings and that it is appropriate for them to use their judgment in addition to the established criteria. However, most recruiters we spoke with said that they do not screen out individuals who meet the established screening criteria. The recruiters also explained that they generally did not want to pass judgment on an applicant’s suitability for service because some prior assessments had proven to be wrong. In addition, the recruiters were concerned that they could receive congressional inquiries if individuals who met the eligibility criteria were not selected for service. Congress and DOD have set minimum standards for two of the primary screening criteria—possession of a high school diploma and score on the Armed Forces Qualification Test. DOD guidelines state that a minimum of 90 percent of recruits who have not previously served in the military need high school diplomas. The guidelines also state that at least 60 percent of first-time recruits need to score in the top three of six mental categories on the qualification test. 
Further, Congress has prohibited the selection of recruits from the bottom test category and limited the number of recruits who can score in the next lowest category. DOD and service enlistment standards establish additional criteria that potential recruits must meet. These criteria, which can vary by service, include age, citizenship, weight, number of dependents, health, prior drug or alcohol abuse, and law violations. Potential recruits also receive a medical examination to confirm a minimum level of wellness. However, actual physical fitness is not included as a criterion, even though service officials acknowledge that poor physical conditioning among recruits is often a contributing factor in early attrition. As a result, the services spend thousands of dollars training recruits without any assurance that they will be capable of passing their physical fitness tests. Recruits who cannot pass service physical fitness tests face discharge. Most of the applicants who are enlisted meet all of the services’ enlistment criteria. However, those applicants who do not meet one or more of these criteria can continue to pursue entrance into the military by requesting a waiver for each criterion not met. Recruiters are not required to encourage unqualified prospects to apply for a waiver. Nevertheless, when applicants wish to pursue a waiver, recruiters do not have the authority to disapprove this request and must forward the waiver through their chains of command. Generally, the farther an applicant is from meeting an established standard, the higher the waiver approval authority. For example, an Army applicant convicted of driving under the influence could apply for a waiver from a recruiting battalion commander. However, a waiver request for two incidents of driving under the influence would need to be considered by the Commanding General of the Army Recruiting Command.
The burden is on applicants to prove to the waiver authorities that they have overcome any disqualifying condition. To enhance recruit retention levels, the services are improving their DEPs. The services now encourage recruits to maintain or improve their physical fitness level so that they will be able to meet the initial physical conditioning requirements of basic training. However, only the Marine Corps conducts regular physical fitness training for its recruits and requires them to take a physical fitness test while in the DEP. The Marine Corps reports that attrition is lower among recruits who pass the test. One of the purposes of the DEP is to obtain a recruit’s commitment to serve. The services have recently attempted to strengthen the commitment of recruits in the DEP by providing them with better information, training, and benefits. The services believe that individuals with a strong commitment to serve are less likely to drop out of the DEP or leave military service before the end of their first enlistment period. The Navy and the Marine Corps recognize the positive effect the DEP can have on retention rates and have established a minimum and optimum time, respectively, that their recruits should spend in the DEP. Overall DOD attrition statistics for fiscal years 1987 through 1994 showed that recruits who spent at least 3 months in the DEP had lower attrition rates than those who spent less time. This correlation was much stronger for the Marine Corps and the Navy than it was for the Army and the Air Force. DEP programs vary by service, but all require their recruiters and recruits to be in regular contact with each other. Army, Navy, and Air Force recruiters are responsible for contacting their recruits on a regular basis. The Marine Corps, on the other hand, tries to instill responsibility in its recruits by requiring them to contact their recruiters each week.
Participation in DEP activities is voluntary, but all of the services strongly encourage recruits to attend monthly DEP meetings to help them prepare for basic training. Some services also give recruits basic training material to study before basic training begins. In addition, Army recruits have the opportunity to earn points toward future promotions by working on correspondence courses while in the DEP. All of the services are also encouraging recruits to maintain or improve their level of physical fitness while in the DEP. For example, recruits now have access to their service’s physical fitness centers. However, only the Marine Corps conducts regular physical training for DEP members and requires all recruits to take a physical fitness test before leaving for basic training. Other services only require recruits in a few selected career fields to take physical fitness tests before basic training. Army and Air Force officials have expressed concerns about service liability for injuries that recruits could sustain during DEP physical training. The Navy addressed this concern by giving recruits access to medical facilities if they suffer DEP-related injuries. Marine Corps officials said that there have been minor injuries during DEP physical training but that none of these injuries have resulted in a serious claim against the government. The Marine Corps generally holds its DEP recruits to higher standards than the other services. These recruits are told that they must earn their way to basic training by preparing mentally, psychologically, and physically. The Commander of the Marine Corps Recruiting Command stated that failure to participate in DEP training programs is evidence of a lack of desire and motivation to become a Marine and could result in discharge. The Marine Corps implemented changes to its DEP in May 1994, and physical training is a key component of this program. 
Recruiters are encouraged to give recruits an initial physical fitness test within their first 30 days in the DEP, but a test must be given within 30 days of the date that the recruit is to leave for basic training. Recruiters also encourage recruits to exceed the test’s minimum requirement before leaving for basic training. According to the Commander of the Marine Corps Recruiting Command, recruits who cannot accomplish the minimum standard in the physical fitness test experience significantly higher attrition rates and are much more at risk of injury than those who can pass the test. Marine Corps attrition statistics also show a strong correlation between performance on the test and attrition rates. A study of almost 14,500 male Marines who attended basic training in fiscal year 1994 found that recruits who failed the initial physical fitness test had an attrition rate of 24.1 percent, whereas those who passed had an attrition rate of 13.4 percent. In addition, attrition rates were only about 11 percent for recruits who far exceeded the minimum test requirements by doing 10 or more pull-ups or running 1-1/2 miles in less than 12 minutes. Statistics also show that recent Marine Corps efforts to reduce attrition, including the changes to its DEP in May 1994, are working. Twelve-month attrition rates across DOD rose from 15 percent in fiscal year 1990 to 19 percent in fiscal year 1995. However, while Army, Navy, and Air Force attrition rates were increasing by 4 to 6 percentage points over this time period, Marine Corps attrition rates declined by 4 percentage points. Recruiters have many tools at their disposal to help them screen candidates for military service.
However, while education requirements provide some assurance that recruits will be capable of learning the academic material that will help them become productive servicemembers, and physical examinations provide some assurance that recruits have a minimum level of wellness, the absence of physical fitness screening requirements prevents the services from having any assurance that their recruits will be able to pass their physical fitness tests. Since all servicemembers are required to pass physical fitness tests, the services may be investing thousands of dollars training an individual who will eventually face discharge. The Army, the Navy, and the Air Force may be able to improve their attrition rates by running stronger DEP programs. The Marine Corps emphasizes physical fitness training in its DEP program and administers a physical fitness test to its recruits at least 30 days before they report to basic training. Recent statistics show a strong correlation between performance on this test and attrition rates. Recruits who attained higher scores on the test experienced lower attrition rates than those who either attained lower scores or failed the test. Although it may be more difficult for recruiters with large geographic areas to conduct regular physical training with members of their DEP, most recruiters should not have this problem. However, even recruiters with large areas should be able to follow the Marine Corps’ practice of giving all recruits a physical fitness test before basic training. To maintain recruit quality and increase a recruit’s chances of graduating from basic training, we recommend that the Secretary of Defense instruct the Army, the Navy, and the Air Force to implement the Marine Corps’ practice of administering a physical fitness test to recruits before they report to basic training. In addition, we recommend that the Secretary encourage the services to incorporate more structured physical fitness training into their DEP program. 
In commenting on a draft of this report, DOD concurred with our recommendation regarding administering a physical fitness test to recruits before they report to basic training and encouraging the services to incorporate more structured physical fitness training into their DEP programs (see app. III). DOD stated that, in an attempt to reduce basic training attrition, the Army, the Navy, and the Air Force are now taking steps similar to the Marine Corps’ to better prepare recruits in the DEP for the physical rigors of basic training. Furthermore, DOD stated that the Office of the Assistant Secretary of Defense for Force Management Policy will investigate the legal status of DEP members and the limits of their medical entitlements while they are in the DEP. All of the services reward recruiter success. However, many existing awards and incentives are based on output measures that do not reflect DOD’s long-term retention goal to retain quality personnel. Only the Marine Corps and the Navy use basic training graduation rates as criteria in evaluating recruiters for awards, thus linking DOD’s strategic goals to their recruiters’ daily operations. According to DOD recruiter satisfaction surveys, recruiter job performance has been declining since 1991 and is the lowest it has been since recruiter surveys were first administered in 1989. In 1996, 58 percent of the services’ recruiters said they had missed their monthly goals 3 or more times during the previous 12 months. Recruiters also said that they are under constant pressure to make their assigned goals and that their working hours are increasing. DOD’s 1996 recruiter survey showed that 54 percent of recruiters were dissatisfied or very dissatisfied with recruiting, compared with 47 percent in DOD’s 1994 survey and 35 percent in the 1991 survey.
The results of DOD’s recruiter surveys and our interviews with experienced recruiters show that current award and incentive systems have not effectively dealt with recruiters’ two biggest concerns—their monthly goals and working hours. Incentive and award systems based on recruit graduation rates from basic training would provide the services with a required link between DOD’s long-term strategic goals to recruit and retain well-qualified military personnel and daily recruiter operations. However, only the Marine Corps and the Navy use recruits’ basic training graduation rates as key criteria when evaluating recruiters for awards. The Army and the Air Force measure recruiter performance primarily by the number of recruits who enlist or the number who report to basic training rather than the number who graduate and become productive servicemembers. Award and incentive systems have differed significantly by service and within services over time, but they are usually based on point systems that take into account the quality of recruits enlisted, the positions the recruits fill, and the recruiter’s success in making his or her goal. At various times, the services have used individual, team, and combination awards, and they have based these awards on both absolute and relative performance. Despite numerous studies on recruiter award and incentive systems, all of the services have been unable to settle on an optimal system. Also, the services have, at times, altered their recruiter incentive systems in opposite directions: as one service moved from individual to team awards, another de-emphasized team awards and moved toward greater reliance on individual awards. Current recruiter awards vary from badges and plaques to meritorious promotions. The Marine Corps is the only service that has consistently used attrition data as an important criterion in determining awards for its top performers. 
For example, the Commandant of the Marine Corps gives out two top achievement awards annually, one for the top recruiter and one for the top noncommissioned officer in charge of a recruiting substation. The recruiters nominated for these awards must meet numerical and quality accession goals and have DEP attrition rates below 20 percent and basic training attrition rates below 13 percent. Between 1993 and 1996, Marine Corps basic training attrition remained relatively stable between 12.7 and 13.5 percent. Therefore, recruiters nominated for the Commandant’s awards had to ensure that their recruits’ basic training attrition rates were at or below average attrition rates. Marine Corps recruiting awards presented at lower levels also take attrition rates into account. The Navy has numerous awards for its top recruiters and recruiting stations but, unlike the Marine Corps, bases these awards on a competitive point system. Since fiscal year 1996, this point system has undergone several changes that were designed to give greater weight to recruits who completed part or all of basic training. The Navy awards recruiters points when one of their recruits enlists at a MEPS. The number of points awarded is based on Navy needs and can vary throughout the year. Recruits with high school diplomas and good enlistment test scores who enlist into difficult fields, such as nuclear power, generally earn recruiters high point levels. Conversely, recruits without diplomas or with low test scores usually yield recruiters fewer points. Recruiters can also earn points when their recruits help the Navy to meet its racial, ethnic, or gender goals. In fiscal year 1998, Navy recruiters will be awarded an additional set of points, worth four times the original point value, when a recruit leaves for basic training, thus giving recruiters a strong incentive to monitor and mentor their recruits in the DEP. 
When recruits graduate from basic training, the Navy will award their recruiters additional points worth 5 times the recruit’s original point value, for a total of 10 times the original point value. The additional points give recruiters a strong incentive to ensure that recruits are motivated and prepared to succeed at basic training. To be competitive, a recruiter who can sell applicants on enlisting but cannot motivate them to go to basic training would have to enlist 10 applicants just to keep pace with the recruiter who enlists and motivates 1 recruit who graduates from basic training. Army and Air Force awards are generally based on the number and quality of initial contracts and accessions in relation to assigned recruiting goals. These services do not reward recruiters based on the number of recruits who graduate and go on to become productive soldiers or airmen. The Army and the Air Force, which bring in almost 55 percent of DOD’s new recruits, see clear lines of separation between the recruiting and training processes, and believe it is inappropriate to hold recruiters accountable for recruits who fail to complete basic training. Although the Army and the Air Force do not use basic training graduation rates as key criteria when selecting award recipients, they can exclude recruiters from awards if their attrition statistics are extremely high. For example, Air Force senior and master recruiter badges are earned primarily on the basis of production, but recruiters are not eligible for the badges if the basic training attrition rate for their recruits is above 15 percent. Between fiscal years 1993 and 1996, overall Air Force basic training attrition rates varied between 8.7 and 11.1 percent. Therefore, a recruiter’s basic training attrition rate had to be 35 to 72 percent above the Air Force average before he or she was prevented from earning a senior or master recruiter badge. 
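The Navy point arithmetic described above can be illustrated with a short sketch. The base point value of 100 used here is hypothetical; the multipliers (the original value at enlistment, 4 times that value when the recruit ships, and 5 times more at graduation, for 10 times in total) follow the scheme described in the text.

```python
# Illustrative model of the Navy's fiscal year 1998 recruiter point system.
# The base point value (100) is hypothetical; the multipliers follow the
# scheme described in the text.

def recruiter_points(base, shipped, graduated):
    """Total points a recruiter earns for one recruit."""
    points = base          # awarded when the recruit enlists at a MEPS
    if shipped:
        points += 4 * base  # awarded when the recruit leaves for basic training
    if graduated:
        points += 5 * base  # awarded when the recruit graduates from basic training
    return points

base = 100  # hypothetical point value for this recruit's quality and field

# A recruit who enlists but never ships yields only the base points.
print(recruiter_points(base, shipped=False, graduated=False))  # 100

# A recruit who ships and graduates yields 10 times the base value.
print(recruiter_points(base, shipped=True, graduated=True))    # 1000
```

This is why a recruiter whose recruits all drop out of the DEP would have to enlist 10 applicants to keep pace with a recruiter who enlists 1 recruit who graduates.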
In effect, all of the services hold their recruiters indirectly accountable for early attrition through higher goals, even if their award systems do not reflect this. The number of recruits needed in a given year is determined based on projected end strengths, historical loss rates, and the mix of contract lengths for current servicemembers. In setting goals for their recruiters, the services recognize two different types of attrition. The first is DEP attrition, which occurs between the time an applicant first signs an enlistment contract at a MEPS and the date the recruit leaves for basic training. The second is active duty attrition, which occurs any time after a servicemember reports to basic training. Recruits in the DEP are allowed to quit for any reason. Enlistment contracts are simply canceled for those who quit, with no permanent adverse effect on the recruits. However, with the exception of the Navy, recruiters are held individually responsible for DEP attrition, and their current month’s goal is raised each time one of their recruits drops out of the DEP. Recruiters are not held individually responsible for active duty attrition. However, the services use active duty attrition rates, which have remained fairly steady at about one-third of accessions, to compute annual goals for the service recruiting commands. Application of this attrition rate causes recruiting command goals to be much higher than they would be if attrition did not exist or were much lower. Since recruiting command headquarters personnel do not actually recruit, increased recruiting goals are passed down through the chain of command and eventually result in increased goals for individual recruiters in the field. Therefore, although some services claim that recruiters cannot affect attrition and should not be held accountable for it, all of the services are, in fact, currently holding their recruiters accountable for attrition. 
Recruiter performance is primarily measured against and driven by monthly contracting and accession goals. Additional performance measures have changed over the years, but monthly contracting and accession numbers have remained largely unchanged as the primary performance measures. Recruiters said that they are under pressure to make their goal beginning on the first day of every month, and the pressure often does not let up even after they make their monthly goal. Recruiters told us that, once they make their own monthly goal, they are often pressured to recruit one more person to cover for other recruiters who do not make their goal. Table 4.1 shows the number of recruits the average production recruiter needed to recruit for the services to achieve their 1997 accession goals. Recruiter monthly goals vary from one to four or more recruits. However, since all of the services need their production recruiters to achieve more than one accession per month to make their service’s accession goal, most recruiters are assigned a minimum goal of two recruits per month. Many Air Force recruiters have goals of three accessions per month because of that service’s higher requirements per recruiter. Recruiter responses in DOD’s 1996 recruiter satisfaction survey showed that recruiter job performance was at an all-time low. Despite the successes of the service recruiting commands, only 42 percent of the recruiters who responded to DOD’s survey said that they had made their goal 9 or more months out of the previous 12. This figure represented a decrease of 8 percent from DOD’s 1994 survey and the lowest level since DOD began its recruiter surveys in 1989. In addition, 28 percent of the respondents said that their monthly goals were unachievable. At the same time that recruiters’ job performance has been dropping, their working hours have been increasing. In DOD’s 1996 recruiter survey, 63 percent of recruiters said they worked 60 or more hours per week. 
These results show that the percentage of recruiters working long hours is at the highest level since recruiter surveys were first taken in 1989. In addition, only 23 percent of the services’ recruiters said they would remain in recruiting if given the chance to be reassigned to another job. During our review, we spoke to 35 recruiters who had a total of over 280 years of recruiting experience. Many of these recruiters corroborated the results of the 1996 recruiter survey. They said that working hours in many places are getting worse and that recruiters everywhere experience tremendous pressures to meet their monthly goal. Recruiters who do not make their goal are often put on extended working hours until the goal is achieved, and successful recruiters who exceed their goal are often required to work longer hours to make up for those who do not make their goal. All of the 35 experienced recruiters we spoke with said that time off is an important incentive for motivating recruiters. In fact, most of the recruiters said it is the biggest incentive a production recruiter ever receives. This sentiment was repeated even among those recruiters who had been meritoriously promoted as a reward for their recruiting excellence. Senior enlisted officials in the Marine Corps told us that the commanding officer of the Marine Corps Recruiting Command had given recruiters 4 days off after the Command made its 24th consecutive monthly goal. However, according to these officials, many supervisors did not give their recruiters the time off and never even informed them that they were supposed to get the time off. Command-level officials in all of the services encourage recruiters to take leave. However, the same encouragement does not always flow down the chain of command to production recruiters. In the Air Force, recruiters who take 2 weeks of leave in 1 month will not be assigned a goal for that month. Army recruiters are encouraged to take 1 week of leave per quarter. 
According to senior Marine Corps Recruiting Command officials, the commanding officer of the Marine Corps Recruiting Command personally monitors recruiter leave balances to ensure that recruiters are not denied the opportunity to take leave. Despite all these efforts, 68 percent of the recruiters who responded to DOD’s 1996 survey said the demands of the job had prevented them from taking leave during the previous 12 months. This figure represented almost a 50-percent increase from the level in the 1994 survey and the highest level since the first DOD recruiter survey in 1989. We spoke with several recruiters who were called back from leave or who came to work during their leave. With regard to the problem of taking leave, some recruiters suggested that the services should close all recruiting and MEPS stations during the week between Christmas and New Year’s Day and require recruiters to take leave during that typically unproductive time period. The recruiters said this action is the only way to guarantee that production recruiters will actually get time off to use their leave. Under the current monthly goal system, recruiters cannot work ahead and sign extra recruits in one month so they can ease up and take some leave the next month. Recruiters who make double their monthly goal are usually assigned the same or higher goals for the next month. In addition, recruiters who have a bad month face concerns about how they will be rated after missing one or more monthly goals, even when they meet or exceed their annual goals. A senior official at the Air Force Recruiting Service suggested that quarterly floating goals could overcome recruiter concerns about monthly goals and still provide the services with a steady flow of recruits to fill training slots. Under a quarterly floating goal system, recruiters would still be assigned monthly goals, and their performance would still be evaluated on a monthly basis. 
However, each month the current month’s goal would be added to the goals of the previous 2 months and compared to the recruiter’s performance during that 3-month period, rather than comparing the current month’s performance to the current month’s goal. Recruiters who make their goals every month under the current system would be unaffected by changing to quarterly floating goals. They would still be considered successful. Recruiters who never make their monthly goals would also be unaffected by a change to quarterly floating goals. However, quarterly floating goals could benefit recruiters who make their annual goals but underproduce in some months and overproduce in others. Appendix II contains additional information about quarterly floating goals, including examples of how these goals could help individual recruiters without jeopardizing the services’ ability to make their command goals. Recruiters can be motivated to support DOD’s long-term strategic goals, but they must view their award systems as fair and reasonable and closely linked to those strategic goals. The Marine Corps and the Navy have tied many of their awards and incentives to basic training graduation rates, establishing a link between recruiter performance and DOD’s strategic retention goal. Marine Corps and Navy recruiters thus understand that they bear some of the responsibility for basic training attrition. The Army and the Air Force award systems place very little weight on recruit performance at basic training and base awards primarily on the number of recruits a recruiter enlists or sends to basic training. Under Army and Air Force award systems that do not tie awards to retention, recruiters may mistakenly believe that they have no responsibility for basic training attrition. However, because these services need to replace the people who drop out of basic training, recruiters are given monthly goals that are higher than they would be if attrition did not occur. 
Thus, recruiters are responsible for making up for basic training attrition. The results of DOD’s most recent recruiter survey demonstrate a fairly high level of dissatisfaction among recruiters over the current system of monthly goals and the long hours that they must work to achieve the goals. This dissatisfaction may create morale problems that adversely affect productivity. These conditions might also discourage others from volunteering for recruiting duty. Changing the monthly goal system to a floating quarterly goal system could relieve some pressure from recruiters and enhance their working conditions without sacrificing overall recruiting goals. Better morale and working conditions could encourage additional candidates to volunteer for recruiting duty. In our January 1997 report on military attrition, we recommended that the services link recruiting quotas more closely to recruits’ successful completion of basic training. We also suggested consideration of a quarterly floating goal system. In a March 1997 memorandum directing the services to act on our report, DOD deferred taking a position on those issues pending recommendations from this follow-up review. This report expands upon our earlier work and provides a detailed example of how a floating goal system might operate. To enhance recruiter success and help recruiters focus on DOD’s strategic retention goal, we recommend that the Secretary of Defense instruct the services to link recruiter awards more closely to recruits’ successful completion of basic training. To enhance recruiters’ working conditions and the services’ ability to attract qualified candidates for recruiting duty, we also recommend that the Secretary of Defense encourage the use of quarterly floating goals as an alternative to the services’ current systems of monthly goals. 
DOD concurred with our recommendation that the services link recruiter awards more closely to recruits’ successful completion of basic training, stating that the Assistant Secretary of Defense for Force Management Policy will ensure that all the services incorporate recruit success in basic training into their recruiter incentive systems. DOD partially concurred with our recommendation that the Secretary of Defense encourage the use of quarterly floating goals as an alternative to the services’ current systems of monthly goals. DOD’s primary concern with this recommendation is that floating quarterly goals would reduce the services’ ability to make corrections to recruiting difficulties before they become unmanageable. DOD also stated that the Air Force had tried floating goals, and that its experience indicated that such a system leads to a lessened sense of urgency early in the goaling cycle and more pressure later in the cycle. In a follow-on discussion with a senior official at the Air Force Recruiting Service, we learned that the Air Force did experiment with a quarterly system in its northeast region from October to December 1991. However, the Air Force canceled this experiment in January 1992 when it discovered that many recruiters had fallen behind in their goals for that 3-month period. We do not believe that the Air Force’s experience truly measured the potential merits of a quarterly floating goal system since the Air Force canceled this program after only 3 months. While we agree that recruiting commands must maintain the ability to control the flow of new recruits into the system on a monthly basis, it should be noted that this proposal is for floating, rather than static, quarterly goals. As a result, recruiting shortfalls would still be identified in the first month that they occur and not disrupt the flow of recruits to training. Accordingly, we believe that a longer test period than 3 months would be needed to fully test this concept. 
Moreover, DOD recruiter surveys show that recruiter performance is at an all-time low and that the percentage of recruiters working long hours is the highest it has ever been since the surveys were first taken in 1989. We believe this matter warrants serious attention and that these problems will continue if changes are not made. The quarterly floating goal proposal would provide recruiters with some flexibility and enhanced quality of life and still provide recruiting commands with the ability to control the flow of new recruits into the system on a monthly basis. Better working conditions and recruiter morale could ultimately encourage additional candidates to volunteer for recruiting duty, thereby easing the current burden on recruiting commands to screen and select new recruiters.
Pursuant to a congressional request, GAO reviewed the military services' recruiting processes, focusing on the recruiter incentive systems that the military services use to optimize the performance of military recruiters and ensure that only fully qualified applicants are enlisted. GAO noted that: (1) the Department of Defense (DOD) could enhance the success of its recruiters if the services strengthened key aspects of their systems for selecting and training recruiters; (2) only the Air Force requires personnel experienced in recruiting to interview candidates for recruiting positions and uses selection tests to screen interviewees for recruiting duty; (3) while recruiters from each service receive practical training to improve their ability to recruit and enlist personnel, Marine Corps and Navy training also emphasize the importance of retaining recruits once enlisted and require recruiters to focus on retention as well as recruiting; (4) the services have taken steps to improve their delayed entry programs, such as increasing the amount of contact between recruiters and recruits; (5) although all the services give recruits in the delayed entry programs access to their physical fitness facilities and encourage the recruits to become or stay physically fit, only the Marine Corps conducts regular physical training for recruits who are waiting to go to basic training; (6) although recruits who are physically fit are more likely to complete basic training, only the Marine Corps requires all recruits to take a physical fitness test before reporting to basic training; (7) achieving monthly goals has been the key measure of recruiter performance; (8) only the Marine Corps and the Navy consider retention in measuring and rewarding recruiter performance; (9) specifically, they consider the number of recruits completing basic training when evaluating the success of recruiters; the Army and the Air Force consider primarily the number of recruits enlisted or the number 
reporting to basic training; (10) DOD's 1996 survey of service recruiters showed that the number of hours that recruiters work reached its highest point since 1989; (11) despite this effort, less than one-half of the recruiters achieved their goals in 9 or more months of a 12-month period; (12) the recruiters GAO interviewed were concerned about the difficulties they face in meeting monthly goals and the long hours they must devote to their jobs; and (13) establishing quarterly floating goals could ease the burden on recruiters and still provide an incentive to meet recruitment goals.
The judiciary has a total of 94 federal districts located throughout the United States, the Commonwealth of Puerto Rico, and the territories of Guam, the U.S. Virgin Islands, and the Northern Mariana Islands. The 94 districts are organized into 12 regional circuits, each of which has a court of appeals, also known as a circuit court, which hears appeals from its district courts. To help administer the operations of its circuit and district courts, each circuit has a judicial council whose membership includes the chief circuit judge as chair of the council and an equal number of other circuit and district judges. Among other things, the council is authorized to issue orders and establish policies to help ensure that the circuit and district courts function as effectively as possible so that they can provide the public with the effective and expeditious administration of justice. The Judicial Conference, which is chaired by the chief justice of the United States and includes the chief judge of each judicial circuit and a district judge from each circuit, is the judiciary’s national policy-making body. The judiciary and the General Services Administration (GSA) have embarked on a multibillion-dollar courthouse construction initiative to address the space needs of the federal district courts and related agencies. The judiciary’s most recent 5-year construction plan for fiscal years 2002 through 2006 identified new courthouses or annexes that are to be built to accommodate new judgeships created because of increasing caseloads and to replace obsolete courthouses occupied by existing judges. The plan identified a total of 45 proposed courthouse construction projects that are expected to cost a total of about $2.6 billion. This courthouse construction initiative includes plans to construct hundreds of new district judge trial courtrooms to replace inadequate facilities and to accommodate future increases in federal judgeships. 
The results of our past work on courthouse construction have shown that district judges’ trial courtrooms may be underutilized, and that trial courtrooms are expensive to build. For several years, there has been much debate about whether district judges could share courtrooms—operate in a courthouse with fewer courtrooms than judges—to save taxpayer dollars without compromising effective judicial administration. There has been a belief among various stakeholders outside the judiciary, including some subcommittees and members of Congress as well as the Office of Management and Budget (OMB), that courtroom sharing may be possible and could lead to cost savings. Trial courtrooms, because of their size and configuration, are expensive to build and constructing any unneeded courtrooms would waste taxpayer dollars. On the other hand, the judiciary believes that the availability of a trial courtroom is an integral part of the judicial process because judges need the flexibility to resolve cases efficiently. The judiciary and other key stakeholders believe that the judiciary should retain its one-judge, one-courtroom policy for active district judges to avoid ineffective judicial administration. However, the judiciary has recognized that courtroom sharing may be possible among visiting judges—judges from other locations who temporarily use courtrooms— and senior judges who have reduced caseloads. According to AOC, as of September 2001, the judiciary had a total of 906 district judges, which consisted of 592 active judges and 314 senior judges. In March 1997, the Judicial Conference issued a policy statement that discussed courtroom sharing by district judges and provided guidance for determining the number of courtrooms needed in courthouse facilities. 
The statement addressed three main topics: (1) courtrooms for active judges, (2) factors to be considered in deciding whether senior judges who do not draw caseloads requiring substantial use of courtrooms could share courtrooms, and (3) planning assumptions to be used in determining the number of courtrooms needed in a facility. As part of its statement, the Judicial Conference recognized the potential for courtroom sharing by some senior judges and asked circuit councils to consider the number of years a senior judge will need a courtroom after taking senior status, for which the Judicial Conference recommended a 10-year time frame. However, the policy was very clear that senior judges who had caseloads that required substantial use of courtrooms should have their own courtrooms and that courtroom sharing for any district judge—active or senior—is not required. According to AOC officials, the Judicial Conference requested that each circuit judicial council develop a policy on sharing courtrooms by senior district judges. The Judicial Conference policy statement encouraged circuit judicial councils to develop policies on courtroom sharing only for senior judges whose caseloads do not require substantial use of courtrooms. The Judicial Conference policy clearly stated that one courtroom must be provided for each active judge. Also, the Judicial Conference policy recommended that senior judges retain courtrooms for 10 years after taking senior status because they usually have caseloads sufficient to keep their own courtrooms. Senior judges with more than 10 years in senior status appeared to be the primary candidates for courtroom sharing. As of September 2001, 118 of the judiciary’s 314 senior judges had more than 10 years in senior status. Therefore, about 38 percent of the total number of senior judges would be considered the primary candidates for courtroom sharing. 
Much of the Judicial Conference policy is devoted to a discussion of the five factors that should be used in deciding whether some senior judges could share courtrooms and the nine assumptions for circuit councils to consider in determining the number of courtrooms needed in their courthouse facilities. For example, one factor was the judicial workload in terms of the number and types of cases anticipated to be handled by each judge. Another factor was the number of years each judge is likely to be located at the facility. Some of the assumptions, which primarily relate to caseload projections and the time frames in which replacement, senior, and new judges will occupy the facility, included (1) the percentage of the total district’s caseload handled at a particular location and (2) the number of years before replacement judges will be on board after a judge takes senior status. The full text of the Judicial Conference policy, including the five factors and nine assumptions, is contained in appendix I. In 1997 and 1998, after the Judicial Conference policy took effect, the 12 circuit councils issued their own policy statements related to the courtroom sharing issue. Our analysis showed that like the Judicial Conference policy, all of the circuit councils’ policies recognized that senior judges who do not draw caseloads requiring substantial use of courtrooms would be candidates for courtroom sharing. Eight of the 12 circuit councils’ policy statements made reference to and used much of the language in the Judicial Conference policy, particularly the language related to the various factors to be used in deciding when senior judges should share courtrooms and the assumptions that should be considered in determining the number of courtrooms needed in courthouse facilities. Some of the eight circuit councils’ policy statements also included general discussions of their approaches to courtroom sharing. 
For example, the first circuit council policy stated that the council had a strong preference that, wherever feasible, each senior judge should be given a courtroom dedicated to his or her use. However, the first circuit council went on to say that it intended to comply with Judicial Conference policy regarding senior judges sharing courtrooms, where appropriate. The eleventh circuit council took a different approach and explained that the methods employed by district courts for courtroom sharing by senior judges who do not draw caseloads requiring substantial use of courtrooms varied greatly, not only throughout the judiciary, but also within the eleventh circuit. Given this, the council concluded that the district courts were in the best position to formulate courtroom sharing policies that would be most applicable to their local operations. According to AOC, as of September 2001, one of the nine district courts within the eleventh circuit—the Southern District of Florida—had developed its own local courtroom sharing policy. This policy stated that each senior judge will be allowed a courtroom unless courtroom use hours and cases assigned to that judge fall below the caseload requirements within a 5-year period for substantial use of a courtroom. If the senior judge does not maintain a caseload requiring substantial use of a courtroom in the 5-year period, the courtroom will be made available for others to use. The policy also stated that on the basis of historical data, senior judges are expected to occupy a courtroom for 15 years after taking senior status and that this time frame should be used for planning for courtroom needs. AOC was unaware of any other district court policies on courtroom sharing. The policy statements for the four remaining circuit councils—including D.C. and the fifth, seventh, and eighth councils—were not as detailed as the other eight circuit councils’ statements. 
The four circuit councils primarily issued brief policy statements that generally described their positions on courtroom sharing. For example, the D.C. circuit council policy basically stated that courtrooms should be provided for each active judge and each senior judge who requires substantial use of a courtroom and that courtroom sharing will be achieved on a collegial basis. The fifth, seventh, and eighth circuit councils also issued policy statements that recognized the need for senior judges to maintain courtrooms if such judges continued to perform full judicial assignments. The fifth and eighth circuit councils’ policy statements went on to say that, when appropriate, the circuit council may direct the joint use of courtrooms and adjunct facilities as dockets and other circumstances warrant. The seventh circuit council stated in its policy that decisions as to the assignment of chambers and a courtroom for a senior judge who performs less than a full judicial assignment shall be made by that judge’s court. According to the seventh circuit council’s policy, a full judicial assignment means that the senior judge continues to be assigned and perform the same work, both casework and other assignments, as an active judge of the same court. The Judicial Conference and the circuit councils’ policies did not use actual courtroom use data—how often and for what purpose courtrooms are being used—as criteria for deciding whether senior judges should share courtrooms. Instead, the policies used judges’ caseloads and substantial use of a courtroom as a primary basis for making decisions about senior judges sharing courtrooms. According to AOC, under statute, the decision as to whether senior judges should share courtrooms is left to the discretion of each circuit judicial council. More information from the 12 circuit councils’ courtroom sharing policy statements is included in appendix II. 
Because the judiciary believes that courtrooms are an integral resource for the administration of justice, judicial policies do not generally encourage widespread courtroom sharing. However, some sharing was occurring in existing facilities among some active and senior district judges. As of December 2001, our analysis of AOC data showed that district judges were sharing courtrooms in 11 facilities. According to AOC, there are a total of 337 federal district court facilities nationwide, but not all of these facilities would be candidates for courtroom sharing. For example, some facilities may have only one judge. Some of the facilities where courtroom sharing was occurring were located in major metropolitan areas, such as Brooklyn, New York, and Philadelphia, Pennsylvania, while others were in smaller cities such as Benton, Illinois, and Fayetteville, Arkansas. Table 1 identifies the locations of the 11 facilities, the number of district courtrooms in these facilities, and the number and types of district judges at these facilities. The facilities varied in the types of district judges involved in courtroom sharing. For example, some facilities had active judges, senior judges with 10 years or less in senior status, and senior judges with more than 10 years in senior status sharing courtrooms, while others had only senior judges with 10 years or less in senior status sharing courtrooms. Table 2 shows the types of district judges who were sharing courtrooms on a regular basis at the 11 facilities. At the 11 facilities, courtroom sharing was occurring because active and senior district judges have to operate in facilities with fewer district courtrooms than district judges. For example, at the Brooklyn facility, courtroom sharing among judges was occurring because of an increase in the number of judges at the facility and the partial demolition of the courthouse complex in preparation for a new courthouse. 
Another example involved the San Juan facility where, because of space limitations, only one district courtroom was built to accommodate three senior judges. According to AOC officials, construction projects are currently planned or under way at the locations where active judges were sharing courtrooms. Available data show that 7 of the 11 facilities each had one fewer courtroom than district judges. For example, the Nashville facility had a total of 6 district courtrooms to accommodate 7 active and senior judges. At the remaining 4 facilities, the differences between the total number of courtrooms and the total number of district judges were greater. Specifically, the Brooklyn facility had a total of 10 district courtrooms to accommodate 15 active and senior judges; the San Juan facility had 1 district courtroom for 3 senior district judges; Jacksonville had 3 district courtrooms for 5 active and senior judges; and Orlando had 4 courtrooms for 6 active and senior judges. The data also showed that 4 facilities had a total of 8 senior judges with more than 10 years in senior status and that 5 of these judges were in the Philadelphia facility. The 8 judges make up a small part of the 118 senior judges with more than 10 years in senior status who were on board as of September 2001. According to AOC officials, there are reasons why some senior judges with more than 10 years in senior status are not sharing courtrooms. For example, although the Judicial Conference policy identifies senior judges with more than 10 years in senior status as the primary candidates for sharing courtrooms, such sharing is generally not needed in facilities that have a sufficient number of courtrooms to accommodate judges. Also, some senior judges with more than 10 years in senior status may not require the use of a courtroom. 
The officials pointed out that if the large number of judicial vacancies were filled, it is likely that more of these senior judges would be sharing courtrooms. According to AOC officials, data to show which of these senior judges have their own courtrooms were not readily available. At the time we completed our audit work, we had received survey responses from 10 of the 11 facilities where courtroom sharing was occurring. One of the facilities—Rapid City, South Dakota—did not respond to our survey. Active judges at the 10 facilities, including some chief judges, generally viewed courtroom sharing as problematic for various reasons. For example, at the Brooklyn facility, which had the greatest difference between the total number of district judges and courtrooms—15 judges and 10 courtrooms—judges reported several problems associated with courtroom sharing. The problems included frequent delays with sentencing convicted defendants and starting lengthy trials; the inability to deal effectively with unforeseen trial events or take full advantage of visiting judge resources; having to hold court proceedings in conference rooms, hearing rooms, and chambers; an inordinate proportion of staff time devoted to scheduling as opposed to case management; and adverse impacts on the court, litigants, private counsel, the U.S. Marshals Service, and the United States Attorney’s Office due to frequent time and location scheduling changes of court proceedings. The chief judge at the Salt Lake City facility reported that the courtroom sharing situation has become more difficult to manage because, recently, the number of senior judges who are sharing a single courtroom has increased from two to three. Thus, judicial officials have the difficult task of either allocating time for the use of this courtroom among three senior judges or finding the judges alternative space in the facility. 
The chief judge also cited various administrative problems associated with courtroom sharing, such as having to move evidence and equipment from one courtroom to another in the middle of an extended trial, and problems in notifying litigants, counsel, and the public of changes in courtroom locations. Judges from the other facilities also described similar experiences that illustrated courtroom sharing problems and presented their views regarding the negative effects that courtroom sharing has on the efficient and effective administration of justice. In contrast to active judges, some senior judges at various facilities generally believed that courtroom sharing did not pose significant problems for them. Some senior judges said that, although they would prefer having their own courtrooms, in facilities with limited courtroom capacity the sharing of courtrooms was particularly appropriate for senior judges with reduced caseloads. For instance, at the Nashville facility, three senior judges—all of whom have reduced caseloads—share the use of two courtrooms. One of the judges explained that, by working collegially together along with proper advance planning, the judges have a courtroom sharing process that has generally worked well. Also, at the Benton facility, the senior judge often shares a courtroom with a magistrate judge and sometimes shares a courtroom with a bankruptcy judge. The senior judge said that, for the most part, courtroom sharing at the Benton facility has posed no major problems mainly because he has a reduced caseload, and any minor problems with the scheduling of courtrooms at the Benton facility have always been worked out amicably. This senior judge said that he knows how convenient it is for a judge to have his or her own courtroom and that one becomes very possessive about it. 
He went on to say that judges use courtrooms only part of the time and that sharing can almost always be accomplished with proper scheduling and without any negative impact on the efficient and effective administration of justice. In addition to their comments about specific courtroom sharing experiences, some judges who are sharing courtrooms provided their views on the concept of courtroom sharing and how such sharing could affect courthouse operations and the administration of justice. For instance, at the Orlando facility, a judge stated that, in the real world, courtroom sharing leads to delays of justice, interference with management of the court’s caseload, and erosion of collegiality in a district that has frequent hearings and trials. The judge went on to say that in a district such as Orlando, with a heavy caseload and frequent trials, the number and length of trials cannot be controlled and the number, length, and timing of hearings cannot be predicted; therefore, courtroom sharing becomes an impediment to the dispensing of justice. More information on various judges’ courtroom sharing experiences at 10 facilities and their views about courtroom sharing is included in appendix III. One unique courtroom sharing experience that AOC identified involves the Rochester facility, where judicial officials described the arrangement as follows: “The system works fairly well, but there have been problems. We do have three district judges, and we all try criminal cases. So far, we have been able to schedule matters so that the two large courtrooms are utilized for criminal matters, and so far we have not had a situation where all three district judges needed to utilize the large courtrooms for big criminal trials. That would create a very real problem. The courtroom trading is not perfect. It does involve coordination among five judges, some travel from one floor to another, which disrupts staff, and it does often present difficulties in scheduling matters on an emergency basis.” Another unique courtroom sharing experience that AOC identified involves the Little Rock, Arkansas, facility. 
Judicial officials at the Little Rock facility reported that it had a total of 12 judges—5 active judges, 2 senior judges, and 5 magistrate judges—and 11 courtrooms. According to AOC, in the summer of 1998, one of the senior judges decided to take a reduced workload and to give up his courtroom so that two of the magistrate judges would not have to share a courtroom. This senior judge now shares courtrooms with the other district judges. Judicial officials at the Little Rock facility pointed out that courtroom sharing for a senior judge with a 30 percent caseload has not had any negative effect on the efficient and effective administration of justice. However, the officials stated that, occasionally, it is difficult to schedule a courtroom for one senior district judge, even with the availability of other district courtrooms, because cases are scheduled months in advance, and it can be difficult to identify which courtroom, if any, would be available. In addition, the Little Rock officials mentioned that there are security concerns with courtroom sharing in older facilities, which were not designed to be used exclusively as courthouses. Specifically, they stated that there is no separate, secured circulation for judges and prisoners, and both must use the same public hallways as other parties in the cases. In addition to the Rochester and Little Rock facilities, AOC identified two other facilities—Austin, Texas, and Sioux Falls, South Dakota—where courtroom sharing was occurring. According to the district court clerk, the situation at the Austin facility is unique because one of the two senior judges who shares a courtroom at this facility, which is located in the Western District of Texas, is an Eastern District of Texas judge who has been designated by the fifth circuit judicial council to reside at the Austin facility. This senior judge hears cases not only at the Austin facility but also at any facility in the Western District of Texas. 
One of the senior judges at the Austin facility explained that he and the other senior judge share the facility’s district courtroom not only with each other, but also with judges from the court of appeals. The judge further stated that courtroom sharing has had no negative effect on the efficient and effective administration of justice mainly because the judges have been resourceful and flexible in scheduling the use of the courtroom. However, the judge mentioned that, on several occasions, he has had to use a bankruptcy courtroom in the Austin facility to conduct district court proceedings or reschedule a matter to avoid conflicts. At the Sioux Falls facility, courtroom sharing was no longer occurring because, according to the district court clerk, a senior judge had become inactive at the end of 2001. In addition to district judges who reside at a facility, some district judges travel outside their districts to hear cases in another district. According to AOC officials, these judges—commonly referred to as visiting judges— temporarily use the courtroom of a judge at the location visited. AOC considers this use of courtrooms by visiting judges as a form of courtroom sharing that would have some impact on the availability of courtrooms, but the full extent of this impact is unknown. During fiscal year 2000, AOC data indicated that judges visited and conducted judicial business on 35 occasions in the districts where the 11 facilities were located. The chief judge for the Middle District of Florida reported that, during calendar years 1999 through 2001, an average of about 15 judges visited the district each year, and their visits usually ranged from 2 to 6 weeks. Also, during January and February 2002, the district received assistance from 8 visiting judges whose visits typically ranged from 2 to 6 weeks. 
The chief judge pointed out that during this time period, the district had to occasionally decline offers of assistance from some visiting judges because the district had no courtrooms available for these judges to use. AOC had no readily available data to quantify how often and for how long visiting judges used other judges’ courtrooms in all districts. In addition to the courtroom sharing currently taking place, the judiciary also has plans for courtroom sharing in some future courthouse construction projects. The judiciary’s updated long-range plan—Five-Year Courthouse Construction Plan (Fiscal Years 2002-2006)—contained 45 proposed new courthouse construction projects. The judiciary had completed courtroom needs assessment studies for 33 of the 45 projects. These studies estimate the number of courtrooms that will be needed for 10 years after a project’s anticipated design date. Of the 33 courtroom needs studies, 19 indicated that some courtroom sharing was anticipated to be occurring at the end of the planning time frame, and the remaining 14 studies did not. Our analysis showed that the 19 proposed courthouse projects are expected to have 113 active judges, 90 senior judges, and 158 courtrooms at the end of the 10-year planning time frame. This equates to about 5 judges for every 4 courtrooms. Consistent with the Judicial Conference policy, courtroom sharing in these projects is expected to involve senior judges with more than 10 years in senior status. Specifically, the plans have 44 senior judges sharing courtrooms in these projects, and all of these judges will have more than 10 years in senior status. Our analysis also showed that three senior judges with over 10 years in senior status at these projects were not scheduled to share because courtrooms were available for these judges to use. 
Table 3 provides more information on the number of district judges and courtrooms at these 19 locations that are included in the judiciary’s long-range plan for fiscal years 2002 through 2006. The 14 projects that did not include courtroom sharing over the 10-year planning time frame generally did not anticipate having senior judges with more than 10 years in senior status—a key criterion for determining if courtroom sharing should occur. According to AOC officials, senior judges at these projects may very well be sharing courtrooms after the 10-year planning period. For the 4 projects that anticipated having senior judges with more than 10 years in senior status, courtrooms were not shared because of specific circumstances at those locations. For example, in Anniston, Alabama, the senior judge will be the only judge at the facility and thus will be assigned the facility’s only courtroom, but visiting judges also hear cases at the Anniston location. Table 4 provides more information on the anticipated number of district judges and courtrooms planned for these 14 locations. The judiciary plans to incorporate courtroom sharing in some future courthouse construction projects, but the amount of sharing that will take place within the 10-year planning time frame at these courthouse locations will depend on the extent to which the planning assumptions used to estimate courtroom needs are realized. For example, in planning for courtroom sharing at a facility, the judiciary assumed that new judge positions will be created by law and that judges will be appointed and confirmed to fill those positions in a timely manner. This assumption may not be fully realized. Past experience indicates that creating new judge positions and appointing and confirming judges to fill positions may not always be timely. For example, from 1976 through 2001, data provided by AOC officials showed that the Judicial Conference had requested new judge positions 14 times. 
However, during this 25-year time period, Congress enacted legislation to increase the number of new judge positions only five times—specifically, in 1978, 1984, 1990, 1999, and 2000. In addition, getting judges appointed and confirmed to fill judge positions has been no easy task. The difficulty is demonstrated by the length of time that some judge positions have been vacant. For example, on November 23, 2001, there were 102 judge vacancies, of which 29 had been open for more than 2 years. In fact, 1 of the 29 vacancies had been open for more than 7 years. The timing of legislation creating new judge positions and the length of time it takes to appoint and confirm judges to fill positions will influence the extent to which and when courtroom sharing will occur. Another courtroom planning assumption the judiciary has used is that all active judges will opt for senior status within the first year of eligibility, which, according to AOC officials, is generally when judges reach at least 65 years of age and have 15 years of service. Under the judiciary’s courtroom needs assessment studies, when an active judge elects to take senior status, a facility needs two courtrooms—one for the new senior judge and one for the active judge who will replace him or her. However, if an active judge defers taking senior status when he or she becomes eligible, the facility will need a courtroom for only one judge at that time, which reduces the need for courtroom sharing. In July 2001, AOC issued a memorandum to the chief justice and members of the Judicial Conference that, among other things, identified trends in the timing of judges’ decisions to take senior status. To identify these trends, AOC examined available data on all judges eligible to take senior status from 1984 through 2000 and reported that of 579 judges, 355, or about 61 percent, took senior status within 1 year of eligibility. 
AOC further reported that from 1984 through 1995, 27, or about 7 percent, of 388 judges deferred taking senior status for more than 5 years after they became eligible. The extent to which judges defer taking senior status can directly affect the amount of courtroom sharing that will actually take place. In preparing courtroom needs assessment studies, the judiciary estimates which judges will be at the facilities over the course of the 10-year planning cycle. The studies identify the judges who will be provided their own courtrooms, and the judges who will not be provided courtrooms because they will be expected to share courtrooms. In this type of estimate, there will always be some uncertainty associated with trying to predict which judges will be at the facilities, especially senior judges with more than 10 years in senior status. The 44 senior judges who are indicated in the studies as sharing courtrooms at the end of the planning cycle will have more than 10 years in senior status and will be from 75 to 98 years old. The extent to which these senior judges continue to serve will affect how much and when courtroom sharing actually occurs. On March 5, 2002, AOC’s associate director provided written comments on a draft of this report and generally agreed with the information contained in the report. AOC also provided additional information on the judiciary’s courtroom sharing efforts. In its comments, AOC said that the information in our report on judges’ courtroom sharing experiences confirms the judiciary’s position that courtroom sharing is feasible only in limited circumstances. The information we obtained from judges at the facilities where courtroom sharing was occurring was limited to those facilities and was only intended to describe the judges’ experiences with and views on courtroom sharing. 
Given this, the information cannot be generalized to facilities in all judicial districts. Furthermore, we did not attempt to determine the extent of the courtroom sharing problems cited by judges or whether those problems could have been mitigated by such means as courthouse design changes, use of different scheduling practices, or additional staff training. We clarified the report’s scope and methodology to better reflect these limitations. AOC also raised some points about the report, which we believe need further discussion. A discussion of these points and a copy of AOC’s written comments are included in appendix IV. On February 27, 2002, AOC provided oral technical comments on a draft of this report, which we incorporated, where appropriate. To meet the first objective, which was to examine the judiciary’s courtroom sharing policies for senior judges, we obtained, analyzed, compared, and contrasted the various judicial policies regarding courtroom sharing. Specifically, we examined the Judicial Conference policy, which is in the U.S. Courts Design Guide, and the individual policies established by the circuit judicial councils and discussed them with AOC officials. To meet the second objective, which was to obtain information about the extent to which senior judges are currently sharing courtrooms and their experiences with courtroom sharing, we worked with AOC staff to identify the locations where district judges were sharing courtrooms. We also used a brief survey document and follow-up telephone calls to contact the district court clerks at these locations and collect information on the district judges who were sharing courtrooms. In addition, we solicited information about the judges’ experiences with and views on courtroom sharing. We analyzed the information obtained and discussed the results of our work, as necessary, with judiciary officials. 
We did not attempt to determine the extent of the courtroom sharing problems cited by judges or whether the problems could have been mitigated by such means as courthouse design changes, use of different scheduling practices, or additional staff training. Furthermore, the results of our work can be applied only to the facilities discussed in the report and, therefore, cannot be generalized to facilities in all judicial districts. To meet the third objective, which was to determine the judiciary’s efforts to explore the potential for senior judges to share courtrooms in future courthouse construction projects, we reviewed the methodology that the judiciary used to prepare its courtroom needs assessment studies and analyzed the studies that had been completed for 33 of the 45 proposed projects in the judiciary’s long-range construction plan for fiscal years 2002 through 2006. We identified the projected number of judges and courtrooms planned for each of the 33 projects and analyzed the studies to determine how much sharing was planned for these projects. Our analysis focused on identifying the active and senior district judges who were expected to be permanently assigned to the 33 projects and the senior judges who were expected to share courtrooms on a regular basis at these projects. We did not include visiting and rotating judges in our analysis of these studies because their visits are temporary in nature and usually for short periods of time. In addition, we reviewed the legislation increasing the number of district judges and data on the ages of senior judges expected to share courtrooms in the 33 projects. We discussed our results with AOC officials. 
To obtain general information related to all of our objectives, we reviewed previous studies on or related to courtroom sharing that were done by us, AOC, the RAND Institute for Civil Justice, the Federal Judicial Center, and private consulting groups and discussed the courtroom sharing issue with AOC representatives. We did our work from June through December 2001 in accordance with generally accepted government auditing standards. On March 5, 2002, AOC provided written comments on a draft of this report. On February 27, 2002, AOC provided oral technical comments on a draft of this report, which we incorporated, where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of the report until 15 days from the report date. At that time, we will send copies of this report to appropriate congressional committees; the director, AOC; the director, OMB; and the administrator, GSA. Copies also will be made available to others upon request. Major contributors to this report were William Dowdal, Anne Hilleary, Gerald Stankosky, and John Vocino. If you or your staff have any questions, please contact me at (202) 512-8387 or at ungarb@gao.gov. In March 1997, the Judicial Conference of the United States adopted a policy statement that provided guidance for determining the number of courtrooms needed in facilities. The policy was included in the December 1997 version of the U.S. Courts Design Guide. As stated in the Guide, the policy encourages courts to take several factors into account when considering the construction of additional courtrooms. Also, the policy encourages circuit judicial councils to develop policies on courtroom sharing by senior judges when such judges do not draw caseloads requiring substantial use of courtrooms. The complete text of the policy statement follows. 
“Recognizing how essential the availability of a courtroom is to the fulfillment of the judge’s responsibility to serve the public by disposing of criminal trials, sentencing, and civil cases in a fair and expeditious manner, and presiding over the wide range of activities that take place in courtrooms requiring the presence of a judicial officer, the Judicial Conference adopts the following policy for determining the number of courtrooms needed at a facility: “With regard to district judges, one courtroom must be provided for each active judge. In addition, with regard to senior judges who do not draw a caseload requiring substantial use of a courtroom, and visiting judges, judicial councils should utilize the following factors, as well as other appropriate factors, in evaluating the number of courtrooms at a facility necessary to permit them to discharge their responsibilities:
- An assessment of workload in terms of the number and types of cases anticipated to be handled by each such judge;
- The number of years each such judge is likely to be located at the facility;
- An evaluation of the current complement of courtrooms and their projected use in the facility and throughout the district in order to reaffirm whether construction of an additional courtroom is necessary;
- An evaluation of the use of the special proceedings courtroom and any other special purpose courtrooms to provide for more flexible and varied use, such as use for jury trial; and
- An evaluation of the need for a courtroom dedicated to specific use by visiting judges, particularly when courtrooms for projected authorized judgeships are planned in the new or existing facility.
“In addition, each circuit judicial council has been encouraged by the Judicial Conference to develop a policy on sharing courtrooms by senior judges when a senior judge does not draw a caseload requiring substantial use of a courtroom. 
“The following assumptions, endorsed by the Judicial Conference in March 1997, should be considered to determine courtroom capacity in new buildings, new space, or space undergoing renovation. This model allows assumptions to be made about caseload projections, and the time frames in which replacement, senior, and new judgeships will occupy the facility. The model affords flexibility to courts and circuit judicial councils when making decisions about the number of courtrooms to construct in a new facility, since adjustments to the assumptions can be made to reflect a specific housing situation ‘on-line’:
- The number of new judgeships approved by the Judicial Conference and recommended for approval by Congress, and the year approval is expected;
- The number of years senior judges will need a courtroom after taking senior status (a ten-year time frame is recommended);
- The average age of newly-appointed judges at the court location;
- Caseload projections based upon the district’s long range facility plan (other caseload measures such as raw or weighted filings might also be considered);
- The percentage of the total district caseload handled at the location;
- The ratio of courtrooms per active and senior judge (at present the model assumes a ratio of one courtroom per judge);
- The number of years it will take for a new judgeship to be approved by the Judicial Conference and Congress once weighted filings reach the level that qualifies a court for an additional new judgeship (a three-year time frame is recommended);
- The number of years before replacement judges will be on board after a judge takes senior status (a two-year time frame is recommended); and
- The year the judges are expected to take senior status once they become eligible (a court or council should assume a judge will take senior status when eligible). 
“The planning assumptions described above are subject to modification by courts in consultation with the respective judicial council.” Information from the circuit council policy statement: The first circuit council strongly preferred that, wherever feasible, each senior judge be given a courtroom dedicated to his or her own use. However, the council intended to comply with Judicial Conference policy regarding senior judges sharing courtrooms in those cases where appropriate. The first circuit council suggested that, when courtrooms are designed, consideration should be given to making district, magistrate, and bankruptcy courtrooms interchangeable where permitted by program requirements. The decision as to whether to include one or more visiting judge’s chambers in a new construction or substantial renovation project would depend upon an evaluation of the caseload to be handled in that location. Date of circuit council policy statement: February 11, 1998. Information from the circuit council policy statement: In determining whether to provide dedicated courtrooms for senior and visiting judges in new construction and alteration projects, the second circuit council will consider all relevant factors, as applicable, including the Judicial Conference policy’s courtroom sharing factors. Date of circuit council policy statement: October 20, 1997. Information from the circuit council policy statement: For the construction of new facilities and major renovations of existing judicial facilities, the third circuit council stated that, in evaluating the number of courtrooms needed at such facilities to permit senior and visiting judges to discharge their responsibilities, the council will consider the Judicial Conference’s courtroom sharing factors in concert with other factors it deems appropriate. 
The third circuit council also described the overall process to be used in preparing, reviewing, and approving district courts’ proposed plans for determining the number of courtrooms in the construction or major renovation of judicial facilities. This process included the use of a computer model planning document developed by the Administrative Office of the U.S. Courts (AOC), which the third circuit council plans to use as a tool to help district courts explain and justify their proposals for the construction of courtrooms in new facilities. Date of circuit council policy statement: April 6, 1998. Information from the circuit council policy statement: For the construction of new facilities and major renovations of existing judicial facilities, the fourth circuit council stated that, in evaluating the number of courtrooms needed at such facilities to permit senior and visiting judges to discharge their responsibilities, the council will consider the Judicial Conference’s courtroom sharing factors in concert with other factors it deems appropriate. The fourth circuit council also described the overall process to be used in preparing, reviewing, and approving district courts’ proposed plans for determining the number of courtrooms in facilities. This process included the use of a computer model planning document developed by AOC to help district courts explain and justify their proposals for the construction of courtrooms in new and existing judicial facilities. Date of circuit council policy statement: October 10, 1997. Information from the circuit council policy statement: The fifth circuit council provided its December 1990 and May 1995 resolutions on courtroom sharing as its policy statement. In the resolutions, the fifth circuit council stated that the council may direct, when appropriate, the joint use of courtroom and adjunct facilities as dockets and other circumstances warrant. 
The fifth circuit council also addressed other matters related to providing chambers and staff and, if required, courtrooms to judges who plan to take senior status. Among the matters addressed are the following:

When the fifth circuit council has been advised that an active judge intends to take senior status and continue working at a level that qualifies under the council’s guidelines for the assignment of chambers and staff, the council will take the immediate and necessary steps to provide appropriate senior judge chambers, and, if required, courtroom facilities.

Unless special circumstances cause the fifth circuit council to direct otherwise, a judge taking senior status, whose replacement will have the same official duty station, is to make available to the newly appointed active judge the chambers and facility used during the period of active service.

If two or more senior judges are occupying active judge chambers, the determination of which of those active judge chambers is to be occupied by the newly appointed active judge shall be made by the court in question.

Date of circuit council policy statement: December 9, 1997.

Information from the circuit council policy statement: The sixth circuit council stated that a separate courtroom will not be provided for each senior district judge or for visiting judges unless the council determines, after consideration of the Judicial Conference’s courtroom sharing factors, among others, that a separate courtroom is necessary for the senior or visiting judge to discharge his or her responsibilities.

Date of circuit council policy statement: September 30, 1997.

Information from the circuit council policy statement: The seventh circuit council stated that each senior judge who is designated and assigned to perform judicial duties shall be entitled to suitable chambers, including furnishings and supplies, and, if applicable, suitable courtroom facilities.
A senior judge who continues to perform a full judicial assignment should not be required to give up the chambers or courtroom the judge occupied in active status, except to the extent that lack of facilities for new judges requires sharing of facilities. In that case, senior judges with a full judicial assignment should be treated the same as active judges in the determination of a sharing arrangement. Decisions as to the assignment of facilities for senior judges who perform less than a full judicial assignment shall be made by that judge’s court. The council defined the term “full judicial assignment” to mean that the senior judge continues to be assigned and perform the same work, both casework and other assignments, as if he or she were an active judge of the same court.

Date of circuit council policy statement: April 15, 1998.

Information from the circuit council policy statement: When the eighth circuit council has been advised that an active judge intends to take senior status and continue working at a level that qualifies under the council’s guidelines for the assignment of chambers and staff, the council will take immediate and necessary steps to provide appropriate senior judge chambers and, if required, courtroom facilities. The eighth circuit council may direct, when appropriate, the joint use of a courtroom and adjunct facilities as dockets and other circumstances warrant.

Date of circuit council policy statement: March 19, 1998.

Information from the circuit council policy statement: The ninth circuit council stated that the courtroom sharing factors and planning assumptions are to be used as guidelines and may be modified on the basis of unique circumstances of each district and that they shall be used when the districts update long-range plans and prepare requests for adding/releasing space.
The ninth circuit council further stated the following: Each district is encouraged to develop a local policy to address senior and visiting judges sharing courtrooms. The policy is to be provided to the circuit council’s Space and Security Committee. The policy shall be submitted when the district requests additional courtroom space or releases courtroom space. The ninth circuit council policy also stated that when considering the need for new courtrooms, districts shall consider the factors discussed in the report prepared by the council’s Space and Security Committee task force that affect the projection of courtroom needs, and should take into consideration using space for multiple purposes to the extent feasible and with consideration of both initial and long-term fiscal impacts.

Date of circuit council policy statement: October 22, 1997.

Information from the circuit council policy statement: The tenth circuit council stated that one courtroom should be provided for each senior judge who draws a caseload requiring substantial use of a courtroom. The tenth circuit council also stated that in determining the number of courtrooms in existing facilities for senior district judges who do not draw caseloads requiring substantial use of courtrooms, it will consider not only the Judicial Conference policy factors but also the availability of courtrooms and the feasibility or nonfeasibility of releasing courtroom space to the General Services Administration.

Date of circuit council policy statement: September 25, 1997.

Information from the circuit council policy statement: In adopting the Judicial Conference’s factors and assumptions, the eleventh circuit council discussed in its policy statement two major topics related to the courtroom sharing issue: (1) the process to be used in planning for the number of courtrooms in new facilities and (2) courtroom availability and sharing.

Planned number of courtrooms in new facilities.
In addressing this topic, the eleventh circuit council described the overall process to be used in preparing, reviewing, and approving district courts’ proposed plans for determining the number of courtrooms in new judicial facilities. This process included the use of a computer model planning document developed by AOC, which the eleventh circuit council plans to use as a tool to help district courts explain and justify their proposals for the construction of courtrooms in new judicial facilities.

Courtroom availability and sharing.

In discussing this topic, the eleventh circuit council recognized that methods varied greatly throughout the judiciary and within the eleventh circuit for the sharing of courtrooms by senior judges who do not draw caseloads requiring substantial use of courtrooms. The council further recognized that district courts were in the best position to determine the need for courtrooms in facilities and the number of courtrooms that are necessary to ensure the fair, efficient, and expeditious handling of civil and criminal cases. Thus, the eleventh circuit council determined that, at the present time, it would not adopt a written courtroom sharing policy, but the council directed each district court to submit no later than January 1, 1998, a written report that described the district court’s local situation and the courtroom sharing policy that the district court adopted to meet its own local needs.

As of September 2001, AOC identified one of nine district courts within the eleventh circuit—the Southern District of Florida—that had developed its own local courtroom sharing policy. This policy stated that each senior judge will be allowed a courtroom unless courtroom use hours and cases assigned to that judge fall below the caseload requirements within a 5-year period for substantial use of a courtroom.
If the senior judge does not maintain a caseload requiring substantial use of a courtroom in the 5-year period, the courtroom will be made available for others to use. The policy also stated that, on the basis of historical data, senior judges are expected to occupy a courtroom for 15 years after taking senior status, and that this time frame should be used for planning for courtroom needs.

The eleventh circuit council also discussed other matters related to courtroom availability and sharing. Specifically, the council stated that the availability of a judge to hear a case and a courtroom within which to conduct a trial or hearing are the two principal elements that drive settlements or pleas and that, statistically, settlements or pleas are the means by which most controversies are concluded. The council recognized that current statistics on courtroom use do not adequately capture these activities and that better data must be collected in this area. In discussing this topic, the eleventh circuit council cited the May 1997 GAO report in which some data were captured that attempted to indicate the overall use of courtrooms, such as the actual number of hours that a courtroom was in use (i.e., whether the courtroom’s lights were “on” or “off”).

In an attempt to obtain more information on courtroom use, the eleventh circuit council required that district courts provide data that will more accurately reflect courtroom activities, including such data as when a courtroom has been “booked” for a trial (i.e., case set for trial); the number of days a trial is anticipated to take; and how a case was terminated (e.g., trial, plea, or settlement). The council stated its belief that this type of information would provide the hard data that will enable various stakeholders, including Congress, GAO, and the public, to understand the appropriate functions that a courtroom—even a seemingly “dark courtroom”—plays in the administration of the judicial system.
Date of circuit council policy statement: October 22, 1997.

Information from the circuit council policy statement: The judges of the U.S. District Court for the District of Columbia unanimously determined that a courtroom should be provided for each active judge and each senior judge who requires substantial use of his or her courtroom; and courtroom sharing will be achieved on a collegial basis, as is the tradition of the judges of the court. The judicial council for the District of Columbia circuit supported the district court’s determination.

Approximate length of time courtroom sharing has been ongoing at the facility: 1.5 years.

Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: None reported.

Negative experiences: When two or more senior judges need a courtroom at the same time, the conflict is resolved by giving precedence to the judge who first scheduled the courtroom. When two proceedings must occur at the same time, the courtroom deputy must negotiate with all the judges to accommodate their needs. The typical solution is for one of the judges to use the court of appeals courtroom, which creates two difficulties. First, using this courtroom involves an extra level of coordination because the appellate court controls the courtroom, and the courtroom may not be available every time that it is needed. Second, the court of appeals courtroom does not have the computers installed or network connections needed for district trials. Thus, the courtroom deputy cannot accomplish needed tasks that must be accomplished during proceedings. In addition, the court of appeals courtroom does not have a digital recording system, so court reporter services, which are very hard to find in the Puerto Rico area, must be contracted.

General comments: The chief judge said that sharing courtrooms is not the best way to run trials efficiently because of the unpredictable nature of trial proceedings.
Therefore, judges should have separate courtrooms for motions and hearings. If conflicts occur with the senior judges’ courtroom sharing situation, these conflicts would be exacerbated if active judges with full schedules also needed to share the facility’s courtrooms. The chief judge also believes that one courtroom for three judges does not promote efficiency in judicial proceedings.

Types of district judges sharing courtrooms on a regular basis: Active judges. Senior judges with 10 years or less in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: 4.5 years.

Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: None reported.

Negative experiences: Sentencing of convicted defendants, both incarcerated and on bail, is frequently delayed. Lengthy trials are often delayed, even though ready for trial, due to the inability to obtain a courtroom for the anticipated time required. Severe security concerns arise because U.S. Marshals Service personnel are forced to transport prisoners through public corridors. Proceedings involving defendants in custody are frequently conducted in conference rooms, hearing rooms, and chambers. The court is unable to take full advantage of visiting judge resources. An inordinate proportion of staff time is devoted to scheduling as opposed to case management. The court is unable to deal effectively with unforeseen trial events. Any delay whatsoever in a scheduled trial proceeding affects the schedules of at least two other judges, which results in cascading delays. Frequent time and location scheduling changes in court proceedings adversely affect the court, litigants, private counsel, the U.S. Marshals Service, and the United States Attorney’s Office. Proceedings are delayed or interrupted; security is breached; technology is duplicated or compromised; and housekeeping has deteriorated.
One judge commented that he has had to switch courtrooms in midtrial, causing lawyers tremendous inconveniences, such as having to move file cabinets and large exhibits. The judge also stated that he has had to postpone a late-day detention hearing because another judge needed the courtroom, causing the defendant to spend an additional night in jail. The judge commented further that with no “home” courtroom, he cannot keep all the books and materials he would otherwise have in court; thus, he often does not have a resource that he would use in helping him make decisions.

General comments: A district judge stated that when a judge is engaged in a trial of long duration, as is frequently the case in the Brooklyn district court, courtroom sharing is patently impossible. The judge commented that when he or she is not engaged in a trial, the day-to-day work of a district judge in a busy metropolitan court consists of a dizzying array of various matters, such as motions argued in civil and criminal cases; arraignments; pleas; sentencings; modification of bail hearings; and orders to show cause that may require immediate attention, such as those seeking temporary restraining orders or preliminary injunctions, Title III wiretap applications, and violation of bail or supervised release hearings. Such proceedings are held at intervals over an entire day and make courtroom sharing difficult, if not impossible.

Types of district judges sharing courtrooms on a regular basis: Active judges. Senior judges with 10 years or less in senior status. Senior judges with more than 10 years in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: 12 years.

Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: None reported.
Negative experiences: The chief judge stated that active and senior judges have voiced their concerns that, on too many occasions, they cannot schedule proceedings on a regular, open-ended basis in the same courtroom. This situation has caused some confusion and consternation for the attorneys, jurors, and litigants who use the courtrooms on a shared basis and has also resulted in the wasteful and inefficient use of court support staff who must deal with the uncertainty of scheduling proceedings in courtrooms yet “to be announced.”

General comments: The chief judge stated that, since 1989, active and senior judges have shared available courtrooms while a number of major construction projects were being completed. Since completion of the projects, sharing regularly occurs only in one courtroom. However, the chief judge expects that over the next few years, the district court will expand, and a major space crisis will occur because the present courthouse has reached its limit for accommodating judges. It is anticipated that more courtroom shortages will occur, causing more active and senior judges to have to share courtrooms. The chief judge said that although the judges have been very understanding of the situation and very cooperative in arranging their calendars to cope with courtroom shortages, judges generally felt that permanently assigned courtrooms greatly improve courtroom management, increase the efficiency of judges and support staff, and expedite the timely administration of justice. Without the stability of permanently assigned courtrooms, some judges are concerned that the public’s perception of the judiciary as an independent branch of government suffers when judges are compelled to share courtrooms. In addition, the chief judge stated that the judges view a dedicated courtroom as a catalyst for the resolution of litigation.
A judge’s ability to schedule promptly a proceeding in a dedicated courtroom often results in the resolution of litigation disputes. The chief judge likened the availability of a dedicated courtroom to the availability of an ambulance or a fire engine. Although neither of these items is in constant use, both are essential for the expeditious delivery of safety and health services to citizens on an as-needed basis.

Types of district judges sharing courtrooms on a regular basis: Senior judges with 10 years or less in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: 3 years.

Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: The chief judge for the Nashville district court and a senior judge stated that the senior judges in Nashville are very collegial and operate with reduced caseloads, so courtroom sharing has worked. The chief judge said courtroom sharing has not posed problems for these judges. Through proper planning and mutual sacrifice, the three senior judges have worked together nicely to utilize two courtrooms. One senior district judge went on to say that each of the three senior judges has a schedule of 6 weeks in courtroom and 3 weeks in chambers. If one of the senior judges has a multidefendant case that requires a larger courtroom, he or she can swap with an active district judge. Scheduling cases 6 to 9 months in advance is a reason that courtroom sharing has worked well. Also, one of the senior judges said that, if a senior judge occasionally has a case that exceeds the allotted time in the courtroom, the senior judges work it out.

Negative experiences: The chief judge said that on occasion, the courtroom sharing arrangement for the senior judges does not work because litigation involves unpredictable variables, such as ancillary hearings; trial length; and availability of attorneys, witnesses, and jurors.
When such events occur and a judge not scheduled for the courtroom needs one, the affected judge is forced to try to find an available active judge’s courtroom or postpone the trial or proceeding. In some cases, this has necessitated one judge moving his hearings to another facility in Columbia, Tennessee.

General comments: The chief judge and a senior judge stated that courtroom sharing among active district judges would not be a good idea. The chief judge said such sharing would be inefficient, costly, and time-consuming and would defeat the purpose of personalized case management. Active judges handle all sorts of trials, motion hearings, emergency requests for injunctions or temporary restraining orders, guilty pleas, sentencings, suppression hearings, and a variety of other hearings, as well as conferences. The time frames of such proceedings are often unpredictable, and many arise with short or no warning. Courtroom sharing would severely affect the judge’s ability to move his cases through the judicial process in an efficient and effective manner. The attorneys and public also would suffer greatly. The chief judge also stated that, as an active judge, he is in his courtroom conducting legal business almost every day. The courtroom deputy schedules cases in advance for most weeks throughout the year. The chief judge went on to say that he could not imagine sharing his courtroom with another judge on a regular basis without drastically sacrificing his productivity and efficiency. A senior judge stated that active judges often need their courtrooms on short notice and have longer cases, which makes courtroom sharing more difficult to manage.

Types of district judges sharing courtrooms on a regular basis: Active judges. Senior judges with 10 years or less in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: 9 years.
Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: The senior judge at the Benton facility said that there have been no major problems with courtroom sharing. The only problems have involved scheduling the use of courtrooms; but so far, such scheduling has always been worked out amicably. An important reason for the lack of problems is that the senior judge has a reduced caseload.

Negative experiences: None reported.

General comments: The senior judge at the Benton facility stated that he has been on the federal bench for 29 years and knows how convenient it is for a judge to have his or her own courtroom—one becomes very possessive about it. However, he said the fact of the matter is that when all is said and done, judges only use courtrooms part of the time and sharing can almost always be accomplished with proper scheduling and without any negative impact on the efficient and effective administration of justice. Also, according to the senior judge, the Benton facility has one district courtroom, one magistrate courtroom, and one bankruptcy courtroom. The district judges also use the latter two courtrooms, although the bankruptcy courtroom is suitable only for motion hearings or nonjury cases because it does not have a jury box. In addition, the active judge at the Benton facility said that using the courtrooms has worked very well and that all judges have been able to coordinate the use of these courtrooms.

Types of district judges sharing courtrooms on a regular basis: Senior judges with 10 years or less in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: 3 years, 1 month.

Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: The senior judge thinks that given his situation, courtroom sharing is a good idea.

Negative experiences: None reported.
General comments: The chief judge said that the Fayetteville facility has a district courtroom and a bankruptcy courtroom. The chief judge uses the district courtroom, and the senior judge and the bankruptcy judge use the bankruptcy courtroom. The senior judge decided that given his caseload and a desire to minimize scheduling conflicts with the district courtroom, it would be best for him to share the bankruptcy courtroom. The bankruptcy judge was receptive to this arrangement. Although the chief judge reported no problems with this arrangement, he said that he had strong concerns about the notion of sharing courtrooms among active and senior judges in the Western District of Arkansas. He said that his district’s experience at its Hot Springs facility cast doubts on the practicality of sharing. This facility has one courtroom and no assigned judge. Four judges—two district, one magistrate, and one bankruptcy—have had proceedings there at the same time. This experience has been unsatisfactory to all involved. The chief judge explained that, if attempts to schedule cases and coordinate the use of one courtroom by multiple, nonresident judges are difficult, the problems would be exacerbated if district judges had to share a courtroom where they were in residence. The chief judge believes that convenience and efficiency in handling the court’s dockets are decidedly reduced in the Hot Springs facility, and other facilities, where courtrooms are shared among multiple, nonresident judges. He believes that courtroom sharing among active or senior judges is not a good idea in his district and should be discouraged.

Types of district judges sharing courtrooms on a regular basis: Senior judges with 10 years or less in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: 5 years.
Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: According to the senior district judge, in this facility, there are four judges—a chief judge, a senior judge with 10 years or less in senior status, a bankruptcy judge, and a magistrate judge—and three courtrooms, including a district courtroom, a bankruptcy courtroom, and a magistrate courtroom. The senior judge explained that determining the present and future availability of a courtroom is easily done via computer and that he uses any of the three courtrooms when they are not being used by the other three judges. The senior judge went on to say that, on those few occasions when all three courtrooms were being used at the same time, he has used a video conference room in the basement of the facility as a courtroom with no problems.

Negative experiences: The chief judge stated that, first and foremost, the negative impact of courtroom sharing in this facility is minimal. This is due to the cooperation and open communication among all four judges. Any negative impact would tend to be on the “efficient” rather than the “effective” administration of justice. There have been a few times when the chief judge and the senior judge have had trials set for the same week. Most often, one judge was able to hold court in either the bankruptcy or magistrate courtroom. However, there have been occasions where a trial had to be continued because a courtroom was unavailable. Also, in one of the chief judge’s recent trials, the trial was held in a different courtroom each of the 3 days that the trial lasted. This was a major inconvenience for all involved and proved to be somewhat confusing and distressing to the jury. The chief judge also mentioned that the space in the bankruptcy courtroom is extremely confined and has only a makeshift jury box. He said that such an atmosphere tends to take away from the dignity of the proceedings.
Also, the lack of courtroom space limits the court’s ability to do mass criminal trial settings. In addition, attempts to bring judges in from around the state to assist with the increasing criminal docket have been impeded because there is no courtroom in which to hold the trials.

General comments: The chief judge did not look on the courtroom-sharing situation in the Sioux City facility as a major problem. However, he said that, at times, the court has not been able to operate as efficiently as it could because of the lack of space. With the ever-increasing caseload, it may become more of a problem in the future. The senior judge stated that sharing is not the right word for the use of courtrooms at the Sioux City facility. He said that the procedure for using courtrooms has worked well, and that the other judges have been very gracious and helpful. The senior judge also said that, from his point of view, this arrangement has had no negative impact on the efficient and effective administration of justice.

Types of district judges sharing courtrooms on a regular basis: Senior judges with 10 years or less in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: 2 years.

Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: None reported.

Negative experiences: At this facility, all four of the senior judges have reduced caseloads, but two of the four judges spend a significant amount of time in court, averaging more courtroom time per case than the active judges. Until recently, two of the four senior judges had their own courtrooms, and the remaining two senior judges were sharing the one courtroom that is located on the fourth floor of the building. The sharing arrangement involved the two judges sharing this courtroom on a rotating weekly basis, subject to changes that the judges worked out between themselves.
In some instances, the two judges needed the courtroom at the same time, which required one of the two judges to find a courtroom that was vacant elsewhere in the building. This situation has become more difficult to manage because, recently, one of the senior judges who had his own courtroom had to make it available for an active judge who is expected to come on board within the next 3 to 6 months. This senior judge has been relocated and now shares the fourth floor courtroom with the two other senior judges. Thus, at the present time, three of the facility’s four senior judges are sharing courtrooms. The court is now faced with the difficulty of allocating time for the use of the fourth floor courtroom among three senior judges or coming up with another alternative.

One alternative involves one of the three senior judges using the first floor courtroom that is still assigned to the fourth senior judge, who has retained his own courtroom. To get to this courtroom, the senior judge who needs a courtroom must walk the length of the building—about one-half of a city block—and take the secured elevator, which is also used for prisoner transport, to the first floor. Then, the senior judge who needs a courtroom has to either walk through the chambers of the fourth senior judge or use a public corridor and enter the first floor courtroom through the attorney’s entrance, an option that creates security issues. Clearly, for the judge to have to go such a long distance from his chambers on the fourth floor to get to a first floor courtroom presents a very awkward and inefficient situation. For example, the judge may want to call counsel into chambers in the middle of a jury trial for a brief conference, which is not an uncommon occurrence.
If he is using the first floor courtroom, the judge would either have to take counsel all the way back up to his chambers on the fourth floor, leaving the jury waiting, or use the fourth senior judge’s chambers, thus imposing on one of his colleagues.

Another reported difficulty that this facility experienced involved the fourth floor courtroom, which was the facility’s only electronic courtroom. In addition to the senior judges who share this courtroom, active judges also occasionally needed to use the fourth floor courtroom. At the present time, a project is under way to provide electronic evidence presentation capabilities in the facility’s remaining courtrooms. This project is expected to be completed in May 2002 and will eliminate the pressure on the use of the fourth floor courtroom.

Occasionally, one of the senior judges sharing the fourth floor courtroom may be involved in an extended and complex trial that takes several weeks to complete. Because one of the other two senior judges sharing the courtroom may need it during his week, the senior judge with the extended trial will have to prevail upon the attorneys to move their exhibits, equipment, and trial materials from one courtroom to another on a different floor. When a complex civil trial involves numerous boxes of documents, devices, equipment, or other nonpaper evidence, the need to move these items can impose a significant burden on the litigants. Additional administrative burdens, such as scheduling and notifying litigants, counsel, and the public of courtroom changes, also occur when proceedings are moved from one courtroom to another.

General comments: The chief judge is concerned that courtroom sharing inevitably affects courtroom availability and that judges will be placed in a difficult position when the availability of a courtroom has the potential to affect the administration of justice.
He cited two examples of such difficulty—one related to motion hearings and the other to the scheduling of trials.

Motion hearings. Judges have the discretion to grant or deny motions to hear oral argument on critical matters relating to a case before them. If they opt to grant the motion for oral argument, they also have the discretion to determine the length of oral argument. To the extent that courtroom sharing imposes constraints on the courtroom time a judge has available to him or her, the administration of justice may be compromised if such constraints are weighed among the factors for denying oral argument or restricting the amount of time the litigants seek to argue their case.

Scheduling trials. The Constitution guarantees a right to trial, but a judge can exercise some influence over the parties’ decision to opt for a trial. He or she may urge them to engage in settlement discussions as an alternative to trial. Alternatively, he or she may indicate a strong willingness to accept a plea bargain with the caveat that opting for trial may entail the full weight of the sentencing guidelines if the defendant is convicted on all counts. One factor that has the clear potential to affect how a judge approaches the issue of whether to proceed to trial is courtroom availability. A judge who has unlimited access to a courtroom is likely, other factors being equal, to be more willing to schedule a trial than a judge whose courtroom access is limited and whose courtroom calendar may already be crowded with previously scheduled proceedings. In both instances, the chief judge expressed a strong view that courtroom availability should not be a factor in the decision whether to schedule oral argument or whether to proceed with a trial if the judge believes that the substantive elements of the issue or the case at hand otherwise demand it. 
To the extent that courtroom availability does play into such decisions, serious questions are raised about the effective administration of justice.

Types of district judges sharing courtrooms on a regular basis: Active judges. Senior judges with 10 years or less in senior status. Senior judges with more than 10 years in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: Over 10 years.

Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: None reported.

Negative experiences: Judges prefer having their own courtrooms because resources, such as books and files, can be kept in the courtroom and are always there when needed. A shared courtroom may not be adjacent to chambers and, thus, may restrict easy access to law clerks and equipment for printing transcripts. Courtroom sharing can also affect the ease with which a courtroom’s equipment, such as that used for real-time reporting, can be set up. In addition, with courtroom sharing, it may not be possible to identify and provide advance notice of the specific courtroom where the proceeding is to take place. Without this information, lawyers and the public will be confused about where to go to attend the appropriate proceeding.

General comments: All the judges share out of necessity and believe it affects their efficiency and the efficiency of their staffs. They would prefer having their own courtrooms. Also, although the court has not encountered any negative impact on the efficient and effective administration of justice, judges cited speedy trial issues as an area that could pose problems if courtrooms are not readily available.

Types of district judges sharing courtrooms on a regular basis: Active judges. Senior judges with 10 years or less in senior status. Senior judges with more than 10 years in senior status.

Approximate length of time courtroom sharing has been ongoing at the facility: 1.5 years. 
Reported experiences of active and/or senior judges at facilities where courtroom sharing was occurring:

Positive experiences: None reported.

Negative experiences: In a court with a heavy caseload and active trial calendars, the logistics of scheduling can affect the dispensing of justice. For instance, if the courtroom scheduling process gives priority to the district judge with more time in active status, the judge with less time in active status has to wait to set cases or conduct hearings. In addition, it is very difficult to coordinate courtroom use with six judges and four courtrooms, especially when lengthy and frequent trials are involved. When scheduling a single courtroom for more than one proceeding, the court staff must be sure that the length of time for one proceeding does not interfere with the scheduling of another. This is often impossible because proceedings often take longer than counsel estimate, which delays the other proceedings of all of the judges in a courtroom-sharing situation. Another complicating factor is courtroom size. When possible, courtroom size must be taken into consideration when courtrooms are scheduled because the size of a courtroom may be inappropriate for the proceeding. The consequence of courtroom sharing is that multiparty cases, which may be best scheduled for a large courtroom, sometimes have to be convened in a small courtroom because another judge may already be using the larger courtroom. Hearings and trials are sometimes delayed until a courtroom can be located. Difficulty in locating courtroom space can result in hearings not being scheduled and cases being decided on written submissions (i.e., motions) instead of valuable oral arguments. Books and furniture for one judge must be moved to a different courtroom so that the materials the judge uses to make rulings are readily available when he or she needs them. 
General comments: Renovations for one of the four district courtrooms have been planned and will cause further problems with courtroom sharing. The Orlando facility will be left with three rather than the current four district courtrooms for six active and senior judges. Courtroom sharing will not work in facilities undergoing renovation. Theoretically, and in an ideal world, courtroom sharing should work. However, in the real world, it leads to delays of justice, interference with managing the caseload, and the erosion of collegiality in a district that has frequent hearings and trials. In such a district, the number and length of trials cannot be controlled, and the number, length, and timing of hearings cannot be predicted. When all the judges in a division carry a substantial caseload and have frequent trials, courtroom sharing becomes a nightmare and defeats the purpose of the court, which is to dispense justice without delay. The court provided an example of the cascading effects of trying to deal with courtroom needs. A district judge was recently moved to a magistrate judge’s courtroom, which left the magistrate judge without a courtroom. There were plans to renovate the facility’s grand jury suite for magistrate judges to use as an alternate courtroom in the event that a district judge needed to use the magistrate judge’s courtroom. However, after the grand jury suite has been renovated, space will be needed for the grand jury to meet. The following are GAO’s comments on AOC’s letter dated March 5, 2002. 1. AOC said that the report confirmed the May 2000 Ernst and Young findings that courtroom-sharing policies resulted in a 20 percent reduction in the number of courtrooms planned for new facilities. Our report does state that 19 courthouse projects expect to have 113 active judges, 90 senior judges, and 158 courtrooms. This equates to about 5 judges for every 4 courtrooms, which would indicate a 20 percent reduction. 
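The arithmetic behind the 20 percent figure can be checked directly; the short sketch below only restates the judge and courtroom counts already given in the report and is an illustration, not additional data.

```python
# Judges and courtrooms expected at the 19 courthouse projects (figures from the text).
active_judges = 113
senior_judges = 90
courtrooms = 158

total_judges = active_judges + senior_judges  # 203 judges in all

# Ratio of judges to courtrooms: roughly 5 judges for every 4 courtrooms.
ratio = total_judges / courtrooms  # about 1.28, close to 5 / 4 = 1.25

# Reduction versus planning one courtroom per judge: about 20 percent.
reduction = 1 - courtrooms / total_judges  # about 0.22
```

The ratio comes out slightly above 5 to 4, and the implied reduction slightly above 20 percent, consistent with the approximations used in the report.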
In the 19 projects, the 44 senior judges expected to share courtrooms will have more than 10 years in senior status and will range in age from 75 to 98 years old at the end of the planning time frame. The 20 percent reduction would appear to be reasonable if it is assumed that, without the current courtroom sharing policies, the judiciary would have planned construction of new trial courtrooms for these senior judges. 2. AOC mentioned that some future construction projects listed as having no plans for courtroom sharing would have been categorized as having courtroom sharing if we had counted visiting and rotating judges. Our analysis focused on identifying active and senior district judges who were expected to be permanently assigned to the 33 future courthouse construction projects for which the judiciary prepared courtroom needs assessment studies. We also focused on identifying senior judges who were expected to share courtrooms on a regular basis at these projects. We did not include visiting and rotating judges in our analysis because their visits are temporary in nature and usually for short periods of time. We clarified our scope and methodology to reflect this point. 3. AOC expressed disappointment that we did not comment on the validity of the judiciary’s courtroom planning assumptions. Our work was not designed to perform a detailed assessment of these assumptions, and, as such, we are not in a position to comment on their validity. 4. AOC pointed out that our discussion of the uncertainties associated with the planning assumptions may unintentionally leave some readers with the impression that the precise timing of events is important, such as predicting exactly when an active judge will take senior status or when a judgeship vacancy will be filled. AOC goes on to say that the timing of these events is immaterial in the long term. 
As mentioned in the report, the timing of these events will directly affect how much courtroom sharing occurs, and when, during the planning period. Our discussion of the planning assumptions was intended to show that there is always some uncertainty associated with any assumptions used in a planning process and that the expected outcomes depend on how well the assumptions materialize.
In recent years, concerns have been raised that new courtrooms continue to be built for district judges, even though existing courtrooms appear to be underused. The judiciary wants to maintain its one-judge, one-courtroom policy because of concerns about the effect of shared courtroom space on judicial administration. The judiciary has not, however, determined whether courtroom sharing may be possible among senior judges--the likeliest candidates for such an arrangement because of their reduced caseloads. Some active and senior judges in areas with a courtroom shortage are currently sharing space. Many of these judges oppose courtroom sharing because they believe that it interferes with the court's business and harms the judicial process. The judiciary plans to have some senior judges share space in future courthouse projects. Significant courtroom sharing appears unlikely in the near future, even among senior judges.
Within the vast portfolio of government owned and leased assets, GSA plays the role of broker and property manager to many civilian agencies of the U.S. government. Although some agencies have independent authority related to real property, many rely on GSA for much of their real property needs. GSA’s federally-owned and leased assets include office and warehouse space and courthouses. GSA charges rent to federal tenant agencies occupying federally-owned and -leased space at rates that are approximately the same as commercial rates for comparable space and services. According to GSA’s most recent State of the Portfolio publication, as of fiscal year 2011, GSA had a total of 374.6 million rentable square feet in its inventory, of which 192.7 million—slightly more than half—were leased. In this State of the Portfolio, GSA states that its overarching goal for its portfolio is to maximize the use of its government-owned inventory while reducing the GSA-managed real estate footprint overall. GSA must also follow federal requirements in implementing its leasing program. Federal management regulations specify that when seeking to acquire space for an agency, GSA is to first seek space in government-owned and government-leased buildings. If suitable government-controlled space is unavailable, GSA is to acquire space in an efficient and cost-effective manner. GSA’s Office of Portfolio Management is responsible for establishing the strategies and policies for GSA’s real property portfolio, while GSA’s 11 regional offices are generally responsible for conducting day-to-day real property management activities, including leasing, in each of its regions. GSA is required by statute to provide a prospectus, or proposal, for real property leases above the prospectus threshold to House and Senate authorizing committees for their review and approval. 
The prospectus should include basic information about the space to be leased, including the location, an estimate of the maximum cost to the government of the space, and a statement of rent currently being paid by the government for federal agencies to be housed in the space. While these items are required by law to be in the prospectus, GSA is not prohibited from including other information in the lease prospectuses, and at various times has incorporated additional information. For example, prior to the mid-1990s, GSA routinely included an analysis that compared the long-term costs of leasing versus ownership. At times, GSA includes information on space utilization rates (i.e., the number of usable square feet per person). By statute, GSA is also required to provide authorizing committees a prospectus for each proposed capital project over the prospectus threshold, including both new construction and repair and alteration projects. Typically, prospectuses are drafted in the GSA regional offices and reviewed and approved by GSA’s Office of Portfolio Management. The prospectuses are then reviewed and approved by OMB prior to being provided to congressional authorizing committees—the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure. As we have previously reported, although ownership is generally the least expensive way to meet agencies’ long-term space needs, GSA relied heavily on operating leases to meet new long-term needs because it lacked funds to pursue ownership (GAO-03-122 and GAO-13-283). Budget scorekeeping rules were established based on the Budget Enforcement Act of 1990. The purpose of these rules is to ensure that the House and Senate Budget Committees, the Congressional Budget Office, and OMB measure the effects of legislation consistently and meet specific legal requirements. They are also used by OMB for determining amounts to be recognized in the budget when an agency signs a contract or enters into a lease. 
Upfront funding is the best way to ensure recognition of commitments embodied in budgeting decisions and maintain government-wide fiscal control. Under these rules, for a construction or purchase project or a capital lease, the full cost of the project must be recorded in the budget in the year in which the budget authority is to be made available. Operating leases were intended for short-term needs, and thus, under the scorekeeping rules, only the amount needed to cover the first year’s lease payments plus cancellation costs needs to be recorded in the budget. For operating leases funded by GSA’s Federal Buildings Fund (which is self-insuring), only the budget authority needed to cover the annual payments is required to be scored. GSA does not have to include cancellation costs. Thus, an operating lease may appear “cheaper” in the budget than a construction or purchase project, or a capital lease, even though it may cost more over time. Using an operating lease—or successive operating leases—for a long-term space need may result in resource allocation decisions for which the budgeting process may not have considered the full financial commitment over the full length of time the space need exists. Consequently, costly operating leases may be preferred over less-costly alternatives such as major construction or renovation projects that must compete for full funding. A number of OMB-defined criteria must be met for a lease to be considered an operating lease. Among other things, the lease must “score” as an operating lease rather than a capital lease, meaning that the present value of the minimum lease payments over the life of the lease does not exceed 90 percent of the fair market value of the asset at the inception of the lease. 
Because the scoring seeks to compare a present day fair market value to the value of minimum lease payments made over time, a discount rate must be used in calculating the total cost of the minimum lease payments over the lease term. GSA uses the discount rates determined annually by OMB, rates that vary depending on the length of time being considered in the calculation. For example, if the fair market value of an asset is $1 million, and the lease has a minimum annual lease payment of $100,000 with a term of 7 years, then the calculation of the total value of the minimum lease payments over the 7 years (applying OMB’s 2012 7-year discount rate of 0.7 percent) would be about $680,000. This total value of $680,000 is 68 percent of the fair market value of $1 million—which, at less than 90 percent, would result in the lease being scored as an operating lease. However, if the lease term was 20 years, the calculation of the total value of the minimum lease payments of $100,000 over the 20 years (applying OMB’s 2012 20-year discount rate of 1.7 percent) would be about $1.7 million. Since $1.7 million is more than 90 percent of $1 million, the score would exceed the 90 percent threshold and result in the lease being scored as a capital lease. If the project scores as a capital lease, the net present value of the total cost of the lease is recorded in the budget in the year the lease is entered into by the federal government. In the above examples, the 7-year operating lease with a minimum annual rent of $100,000 would result in $100,000 being scored against GSA’s budget authority for each of the next 7 years. For the capital lease, the net present value of the total lease costs (about $1.7 million) would be scored against GSA’s fiscal year budget authority in the year lease payments began. Over the years, we have reported on numerous examples of operating leases that GSA and the U.S. Postal Service entered into even though they were more costly over time than ownership. 
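The 90 percent scoring test and the two examples worked through above can be restated as a short calculation. The function below is an illustrative sketch, not OMB's or GSA's actual scoring tool; it assumes each year's payment is discounted at the end of the year and uses the 2012 OMB rates quoted in the text.

```python
def scores_as_operating_lease(annual_payment, term_years, fair_market_value, discount_rate):
    """Illustrative sketch of the OMB 90-percent scoring test described in the text.

    Sums the discounted minimum lease payments over the lease term and compares
    the total to 90 percent of the asset's fair market value. Returns the present
    value and True if the lease would score as an operating lease.
    """
    # Present value of an end-of-year annuity of `annual_payment` for `term_years`.
    pv = sum(annual_payment / (1 + discount_rate) ** t for t in range(1, term_years + 1))
    return pv, pv <= 0.90 * fair_market_value

# 7-year lease at $100,000 per year, 2012 OMB 7-year rate of 0.7 percent:
pv7, is_operating7 = scores_as_operating_lease(100_000, 7, 1_000_000, 0.007)
# pv7 comes to about $680,000, or 68 percent of fair market value: operating lease.

# 20-year lease at the 2012 OMB 20-year rate of 1.7 percent:
pv20, is_operating20 = scores_as_operating_lease(100_000, 20, 1_000_000, 0.017)
# pv20 comes to about $1.7 million, more than 90 percent: capital lease.
```

Run with the text's figures, the sketch reproduces both outcomes: the 7-year lease stays well under the 90 percent threshold, while the same annual payment over 20 years exceeds it.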
For example, in 2008, we found that in 10 GSA and U.S. Postal Service leases, decisions to lease space that would be more cost-effective to own were driven by the limited availability of capital for building ownership and other considerations, such as operational efficiency and security. We found that for four of the seven GSA leases we analyzed, leasing was more costly over time than construction—by an estimated $83.3 million over 30 years. At that time, we stated that while the administration had made progress in addressing long-standing real property problems, efforts to address the leasing challenge had been limited. Some alternative approaches had been discussed by various stakeholders, such as the President’s Commission to Study Capital Budgeting and us, including the approach of scoring operating leases the same as capital leases, which would make them comparable in the budget to direct federal ownership, but none had been implemented. In 2008, we recommended that OMB, in conjunction with other stakeholders, develop a strategy to reduce agencies’ reliance on leased space for long-term needs when ownership would be less costly. OMB generally agreed with our report and recommendation and stated that it would be useful to consider how to identify instances where operating leases are most likely to be to the government’s long-term financial detriment. While OMB did not develop the strategy we described, OMB staff said that they have emphasized in guidance issued over the past several years that agencies should reduce space needs, including for leased space, through increases in space efficiency. In addition, a June 2010 presidential memorandum (Disposing of Unneeded Federal Real Estate—Increasing Sale Proceeds, Cutting Operations Costs, and Improving Energy Efficiency, 75 Fed. Reg. 33987 (June 16, 2010)) directed agencies to dispose of unneeded federal real estate, and OMB subsequently directed that any increase in an agency’s real property inventory be offset through consolidation, co-location, or disposal of space from the inventory of that agency. 
This policy became known as “freeze the footprint.” In March 2013, OMB issued a memorandum establishing implementation procedures for its “freeze the footprint” policy. This memorandum clarified that agencies were not to increase the total square footage of their domestic office and warehouse inventory compared to a fiscal year 2012 baseline. It also directed agencies to use various strategies to accomplish this goal, including consulting with GSA about how to use technology and space management to consolidate, increase occupancy rates in facilities, and eliminate lease arrangements that are not cost or space effective. As of November 2012, GSA’s 218 high-value leases had a total net annual rent of over $1.5 billion—36 percent of the approximately $4.2 billion net annual rent of GSA’s leased portfolio. In recent years, GSA has taken steps to reduce the costs of its high-value leased portfolio. For example, GSA has helped agencies reduce their space needs and consolidate space as high-value leases expire. Challenges related to reducing lease costs include a lack of funding to renovate space and delays that can result in costly short-term extensions or “holdover” situations, in which the agency remains in the space past the lease’s expiration date without a new lease agreement. As of November 2012, GSA’s 218 high-value leases represented only about 3 percent of the total number of GSA leases, yet made up about one-third of GSA’s leased portfolio in terms of cost and size. Together, these 218 leases have a net annual rent of over $1.5 billion, or 36 percent of the roughly $4.2 billion total net annual rent of GSA’s leased portfolio. Similarly, the 218 leases include over 54 million rentable square feet, or almost 30 percent of the roughly 188 million rentable square feet in GSA’s leased portfolio. (See fig. 1.) 
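The portfolio shares above follow directly from the totals; the sketch below simply recomputes the percentages and averages from the figures quoted in the text, as a consistency check rather than new data.

```python
# High-value lease figures quoted in the text (as of November 2012).
high_value_rent = 1.5e9   # net annual rent, dollars ("over $1.5 billion")
portfolio_rent = 4.2e9    # total net annual rent of GSA's leased portfolio
high_value_sqft = 54e6    # rentable square feet ("over 54 million")
portfolio_sqft = 188e6    # rentable square feet in GSA's leased portfolio
lease_count = 218         # number of high-value leases

rent_share = high_value_rent / portfolio_rent   # about 0.36 (36 percent)
sqft_share = high_value_sqft / portfolio_sqft   # almost 0.30 (about 29 percent)

# Averages implied by the same totals; slightly low because both inputs
# are "over" figures rounded down in the text.
avg_sqft = high_value_sqft / lease_count        # roughly 250,000 rentable square feet
avg_rent = high_value_rent / lease_count        # about $7 million
```

The recomputed shares and averages line up with the 36 percent, almost-30-percent, 249,000-square-foot, and $7 million figures reported in this section.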
The average size of the 218 leases is 249,000 rentable square feet, ranging from about 57,000 rentable square feet for a lease for the Department of Justice in Miami, Florida, to the largest and most expensive of the high-value leases—a Department of Commerce lease in Alexandria, Virginia, that is about 2.4 million rentable square feet and has a net annual rent of $60 million. The average net annual rent for the high-value leases is about $7 million. About 60 percent of the high-value leases are located in GSA’s National Capital Region—which includes Washington, D.C., and portions of Northern Virginia and suburban Maryland. The rest are spread throughout the United States, with concentrations in other major urban areas such as New York, Seattle, and Dallas. (See fig. 2.) High-value leases house a microcosm of the federal tenants for whom GSA provides leased space. The tenants of these leases include 41 federal agencies and departments. The Departments of Justice, Treasury, and Commerce have the largest amount of space among the 218 high-value leases. For example, the high-value leases include 9.4 million rentable square feet for the Department of Justice, representing 17 percent of the over 54 million rentable square feet in the 218 leases. (See fig. 3.) Most of the high-value leases have lease terms of at least 10 years. The majority—55 percent—have lease terms of more than 10 years, and 31 percent, or 68 leases, have lease terms of 20 years. While lease expiration dates ranged from 2012 through 2032, over 60 percent of the leases will expire by 2018. For instance, FAA officials told us that they have several high-value leases in their Southwest, Northwest Mountain, and Southern regions—providing space for about 1,500 employees at each location—that expire from 2013 through 2017. Employees at these regional headquarters work for various FAA lines of business, such as Aircraft Certification, Flight Standards, and Air Traffic Organization. 
According to GSA’s leasing program officials, these upcoming lease expirations provide opportunities to help meet administration goals for cost savings through reducing costly leases—via space consolidations or moving from leased to owned space—although as discussed later in this report, GSA and tenant agencies may face funding and other challenges in doing so. GSA has taken steps to reduce the costs of its high-value leased portfolio in recent years in line with administration goals to reduce real property holdings. First, GSA officials stated that, as required by regulation, GSA always looks first to federally owned or existing leased space to fill space needs. GSA officials stated that all of GSA’s high-value leases represent space needs that could not be accommodated in existing federally owned space and that they review the federally owned inventory as high-value leases expire to see if there is any new potential to move the federal tenant into an owned situation. For example, a regional GSA official stated that GSA currently has 127,000 rentable square feet of vacant space in a Los Angeles federal building. Among other options, GSA is considering the potential to move the Army Corps of Engineers in Los Angeles from a high-value lease with a net annual rent of $3.2 million in which the Army Corps is occupying about 118,000 rentable square feet into this federal space when the lease expires in 2016. However, according to GSA officials, GSA often does not have large enough vacant spaces in federally owned buildings to meet agencies’ high-value space needs—particularly without major, costly renovations. Another way in which GSA has worked in recent years to reduce costs of high-value leases is through efforts to help agencies reduce the amount of space they occupy. 
Among other things, GSA has worked with OMB and agencies to reduce the amount of space per employee (the space utilization rate) in recently submitted lease prospectuses—including at times, revising draft prospectuses to decrease the space utilization rate and thereby the overall amount of space requested. For example, OMB officials stated that they noticed that a recent draft prospectus for a law enforcement agency’s field office—an environment in which many staff are away from their desks much of the time—had the same proposed utilization rate as a space request from an administrative agency in Washington, D.C., where most employees work at their desks. OMB and GSA worked with the law enforcement agency to come to an agreement on a reduced utilization rate before submitting the prospectus to the congressional authorizing committees. Furthermore, GSA officials stated that GSA and tenant agencies have worked in recent years to reduce the square footage of new leases even when a larger amount of space had already been approved through the prospectus process. For example, in 2008, GSA submitted a prospectus for a new FAA lease to consolidate FAA’s Northwest Mountain Region headquarters in the Seattle suburb of Renton, Washington, from multiple leases into one lease of up to 519,000 rentable square feet. The prospectus was approved by Senate and House authorizing committees in 2008 and 2009, respectively, prior to the 2010 presidential memorandum and 2013 OMB memorandum focusing on reducing space needs. According to GSA and FAA officials, the subsequent push to reduce space needs led GSA and FAA to reduce the lease proposal by about 40 percent. According to agency officials, for the most part, FAA plans to adapt its needs to this smaller space by improving space utilization through greater use of open office space and increased teleworking, although it will also maintain some additional small warehouse leases it had initially hoped to consolidate. 
GSA worked closely with FAA to help FAA plan for improved space utilization—including having FAA staff tour GSA’s regional and local offices in the area, both of which have been redesigned with open floor plans that can accommodate more staff per square foot. (See fig. 4.) According to GSA officials, another way that GSA has worked to reduce lease costs is through improved customer real-property portfolio planning. According to GSA’s fiscal year 2011 annual performance report, customer portfolio plans have been completed for three of GSA’s top 20 customers, including the Department of Health and Human Services (HHS), and GSA expects to have an additional nine completed by the end of fiscal year 2014. The customer portfolio plans attempt to optimize real property portfolios by agency, including cost savings and space reductions. For example, the September 2012 HHS customer portfolio plan describes GSA’s efforts to optimize its suburban Maryland portfolio for HHS, which includes a combination of high-value and smaller leases. According to the plan, GSA negotiated for more than 1.2 million rentable square feet in suburban Maryland with an estimated annual lease cost reduction for National Institutes of Health (an Operating Division within HHS) of $4.4 million. The plan also describes planned consolidations into, and improvements to, a high-value suburban Maryland lease that would improve HHS’s utilization rate and increase the number of staff in the space by about 50 percent—and that has estimated savings of over $10 million in annual rent through lease terminations. Future opportunities described in the customer portfolio plan include efforts to implement these types of consolidations and cost savings in other properties in HHS’s portfolio. GSA also faces challenges related to reducing lease costs by shrinking the leased footprint through changes in space allocated to individuals—i.e., through reducing the number of square feet per person. 
Most of these challenges stem from broad funding issues faced by agencies government-wide, as agencies with shrinking budgets struggle to determine how to fund costs associated with moving or retrofitting space in order to improve space efficiency. In most cases, as the expiration date for an existing lease approaches, GSA issues a call for competitive bids for a new lease, for which the existing lessor can compete along with other lessors. Because a goal of the competitive process is for GSA to get the best deal for the new lease, agencies must commit to moving after their current lease expires if GSA determines that a different location will be less expensive over the term of the new lease. Agencies therefore must budget for potential moving costs in their annual budgets. Moving costs may include funds for the physical move, telecommunication network services and other technology needs, security features, new furniture and cubicle divisions, relocation management, and special consulting services. Even if an agency remains in the same location, reducing space by increasing space efficiency is likely to incur costs from technological and material build-outs, such as new technology, furniture, and cubicle walls. These costs must be paid up front from the agency’s annual budget rather than being rolled into the monthly lease costs. According to GSA officials, in the past few years, agency uncertainty about future needs and a lack of funding to pay for moving costs or costs associated with reducing the square feet per person in the same location have at times made it difficult for GSA to get a commitment from an agency for a future space requirement. According to GSA officials, GSA typically begins planning for the next space need for a high-value lease 3 to 5 years ahead of the expiration date. 
They noted that this should provide GSA sufficient time in which to work with the tenant agency to understand its future space needs, draft the prospectus, have the prospectus reviewed and approved by OMB and congressional authorizing committees, and enter into a new lease. However, in recent years, agencies’ delays due to a lack of funding to commit to a new space requirement sometimes led to the need for short-term extensions or—if the lessor is unwilling to agree to an extension—“holdovers.” According to GSA, holdovers are risky for the government because the government continues to occupy space to which it has no contractual rights. According to several private sector officials we spoke to, holdovers are problematic for the lessor because often a lessor’s financing agreement for the building’s mortgage depends upon having a signed lease, and the uncertainty of having a lease in holdover status can make it difficult or impossible to secure needed financing for the building. Several real estate experts we spoke with stated that if a private tenant goes into holdover status, the tenant must pay a substantial rent increase—such as a 200 percent increase in the rental rate during the period of time the lease is in holdover status—as a penalty. Typically, the federal government does not pay such a penalty. In several specific cases we inquired about, GSA was continuing to pay the rent as stated in the expired lease without penalties. We found that 14, or 6 percent, of the 218 high-value leases were in holdover status as of November 2012. GSA attempts to avoid holdovers by getting short-term extensions in place when it cannot move forward with a new long-term lease. However, at times, GSA and the lessor cannot reach agreement on a short-term extension, and the lease enters into holdover status. The three leases among our 12 case studies that were in holdover status illustrate some of the interrelated challenges that can lead to holdovers. 
While all of the tenant agencies in these three cases plan to remain in the same location and GSA is paying the same rental rate as it did during the lease term, various combinations of factors—many of them external to GSA—precipitated and exacerbated the holdovers. For example, in one case, the lease initially entered holdover because GSA was awaiting congressional approval of a prospectus, but remained in holdover because of protracted negotiations with the landlord over lease term and price. In another case, it appears that GSA’s attempt to execute separate leases for agencies that had once been in a combined lease was a factor in the holdover, while difficult negotiations with the lessor over the terms of the new lease lengthened the period of time the lease remained in holdover status. GSA officials stated that the current environment of reduced funding government-wide, and expectations that agencies will work to reduce space needs without necessarily having the funding to reconfigure their space, has resulted in a situation with no easy solutions. In this challenging environment, GSA, in its role as the manager of real property for many civilian agencies, has the opportunity to set forth a vision and strategy for federal real property that encompasses the needs of multiple federal agencies and balances real property priorities across the civilian federal government—a vision that could help mitigate these complex challenges over the long term. The next section of this report explores GSA’s long-term capital planning approach for its high-value leased portfolio, which could be used to communicate such a vision to federal decision makers. Although GSA officials stated that for most high-value leases, constructing federally-owned space would be more cost effective over time than continuing to lease, GSA’s capital planning approach lacks a strategic focus that addresses its reliance on high-value leases.
We identified three leading practices that characterize sound capital investment decision making and pertain to GSA’s high-value leased portfolio: (1) alternatives evaluation, (2) project prioritization, and (3) long-term capital planning. We found that GSA’s lease prospectuses lack transparency on key information that would help decision makers understand the extent to which these high-value leases are the best alternatives to meet agencies’ long-term space needs. GSA also has not systematically prioritized which high-value leases have the most cost-saving potential if they were instead pursued as capital projects. Furthermore, GSA has not incorporated those high-value leases that should be the highest priority for ownership into a long-term capital plan. According to our work on leading practices in capital decision making, vision and leadership are crucial to the success of leading organizations’ capital-planning efforts. Many headquarters and regional GSA officials, including assistant commissioners in GSA’s leasing program, stated that the optimal way to manage GSA’s high-value lease portfolio in line with its long-term portfolio goals would be to transfer many of the housing needs that are currently in high-value leases into federally owned property. This transfer could be accomplished either by shifting personnel into existing federally owned space—a shift that could, however, often require major renovations—or by purchasing or constructing new space. GSA officials stated that some high-value leases represent short-term or unstable space needs—such as a 5-year lease providing space for an agency that plans to move into a federally-owned space when renovations are completed. In such cases, leasing is the most appropriate solution.
However, officials concurred that most of the high-value leases consist of long-term, relatively stable, mission-central needs for federal agencies—space needs that in many cases are likely to exist for longer than 20 years—and that in these cases, ownership is the most cost-effective solution over time. At the same time, GSA officials stated that limited availability of existing federal space and funding for its capital program have given GSA no choice but to continue to lease space for these government needs—including some space needs that have existed for the past 40 years or more and have been met by leasing through multiple competitive procurements. Although many of the high-value leases are candidates for ownership, GSA does not include any analysis of such alternatives in its lease prospectuses. According to capital-planning principles, alternatives evaluation should be done for all major capital assets, including leases. The lease prospectuses also lack other information that could help decision makers consider the wisdom of continuing to lease and better inform their decision making. In addition, we found that nine high-value leases did not go through the prospectus process. OMB’s Circular A-94 requires that all leases of capital assets must be justified as preferable to direct government purchase and ownership; for major acquisitions, this should be done through an analysis of the costs over time of leasing versus owning the asset. The purpose of this requirement is to promote efficient resource allocation through well-informed decision-making by the federal government. Because the prospectus is to be reviewed and approved by both OMB and congressional authorizing committees prior to GSA’s entering into the lease, it is a key document for communicating GSA’s decision-making process.
GSA is not required by law to include the results of these analyses in the prospectus; however, according to GSA officials, GSA includes the results of an alternatives analysis in its prospectuses for capital construction and renovation projects but does not do so in its prospectuses for leases. In the 1980s and early 1990s, GSA did include such an analysis in its lease prospectuses. However, in the mid-1990s, according to GSA officials, these analyses were discontinued for lease prospectuses in the context of the limited availability of funding for most construction or purchase alternatives to leasing. OMB staff stated that they advised GSA officials to stop including the results of a lease versus purchase analysis in lease prospectuses because OMB had determined that the scoring analysis—in which, for an operating lease, it is shown that the present value of the minimum lease payments over the life of the lease does not exceed 90 percent of the fair market value of the asset at the inception of the lease—was sufficient information to demonstrate that leasing was the most cost effective option over the term of the lease. GSA officials stated that there was also a sense that in an environment of scarce capital resources for purchase or construction, there was no benefit in performing a lease versus purchase analysis—which often showed that ownership would be more cost effective than leasing over 30 years—since GSA did not expect to receive funding for capital construction or acquisition. GSA officials stated that in light of a significant decline in funding for new federal construction, even if a lease versus purchase analysis showed that ownership was less expensive, the lack of availability of funding for construction meant that GSA considered this a non-viable alternative. The decision to halt a formal lease versus purchase alternatives analysis for high-value leases has limited the transparency of the prospectus process. 
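To make the trade-off concrete, the kind of lease versus purchase comparison that GSA formerly included in lease prospectuses can be sketched as a simple present value calculation. The sketch below uses entirely hypothetical dollar figures and an assumed discount rate; an actual analysis would follow OMB Circular A-94 discounting guidance and include many more cost elements.

```python
# Hypothetical sketch of a 30-year lease-versus-purchase present value
# comparison. All figures (rent, construction cost, O&M, discount rate)
# are illustrative assumptions, not actual GSA or OMB numbers.

def npv(cash_flows, rate):
    """Net present value of annual cash flows; the first flow occurs at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

RATE = 0.04   # assumed discount rate
YEARS = 30

# Continuing to lease: a constant annual rent over 30 years.
lease_cost = npv([4_000_000] * YEARS, RATE)

# Owning: up-front construction cost, then annual operations and maintenance.
own_cost = npv([50_000_000] + [1_000_000] * (YEARS - 1), RATE)

print(f"30-year present value of leasing: ${lease_cost:,.0f}")
print(f"30-year present value of owning:  ${own_cost:,.0f}")
print("Owning is cheaper" if own_cost < lease_cost else "Leasing is cheaper")
```

Under these assumed figures, ownership shows the lower 30-year present value, mirroring the pattern GSA officials described for most long-term, stable space needs; with a shorter horizon or lower rent, the comparison can flip the other way.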
First, the lack of a lease versus purchase analysis in the prospectus means that government decision makers do not have information on the extent to which the proposed lease is more costly than owning over the long term. When GSA did perform the 30-year net present value analysis for lease prospectuses, there were times when the analysis showed that leasing was the most cost effective option. For example, in our case studies for this review, 30-year present value analyses were completed for two 20-year FAA leases, which became effective in 1989 and 1992. In one instance, the analysis estimated it was more cost effective to lease (a $3.3 million savings in Washington state). The other analysis estimated that it was more cost effective to own (a $2.1 million savings in Texas). Of the 218 leases in our review, 27 had prospectuses that included a 30-year net present value analysis of leasing versus owning. Overall, across these 27 prospectuses, we found that over 30 years, the government would spend an estimated additional $866 million by leasing instead of owning, or approximately 18 percent of the total expected cost. While these prospectuses were all developed from 1986 through 1993, when GSA was regularly including such analyses in the prospectuses, due in part to some gaps of several years between the prospectus and the date the related lease became effective, the related leases have expiration dates ranging from 2012 to 2027. Without such information on more recently proposed high-value leases, GSA and federal decision makers, including Congress, lack information on the long-term cost consequences of decisions to lease rather than own for leases proposed after GSA stopped including such an analysis in prospectuses. To perform the scoring analysis, GSA uses established criteria to determine the fair market value of the asset at the inception of the lease and then compares this amount to annual lease payments multiplied by the number of years in the lease term.
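The operating lease scoring test described above can be sketched as follows. This is a minimal illustration with hypothetical figures and a single assumed discount rate; the actual OMB scoring criteria involve additional tests not modeled here.

```python
# Hypothetical sketch of the operating-lease scoring test: a lease scores
# as an operating lease only if the present value of its minimum lease
# payments stays at or below 90 percent of the asset's fair market value
# at lease inception. Figures and the discount rate are illustrative.

def present_value(payments, rate):
    """Present value of a stream of annual payments (paid at end of year)."""
    return sum(p / (1 + rate) ** (t + 1) for t, p in enumerate(payments))

def scores_as_operating_lease(annual_payment, term_years,
                              fair_market_value, rate=0.03):
    pv = present_value([annual_payment] * term_years, rate)
    return pv <= 0.90 * fair_market_value

# A hypothetical $3 million/year, 10-year lease on a $40 million building
# passes the test; a longer, costlier commitment on the same building
# does not.
print(scores_as_operating_lease(3_000_000, 10, 40_000_000))   # True
print(scores_as_operating_lease(5_000_000, 20, 40_000_000))   # False
```

This mechanic explains why a longer term for the same space can push a lease over the 90 percent threshold, giving GSA an incentive to negotiate terms shorter than the expected duration of the need.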
In some cases, GSA deliberately chooses a shorter lease term, such as 5 or 10 years, for a high-value lease for operational flexibility, as when GSA is renovating federal space and plans to move the personnel occupying the leased space into it when the renovations are complete. However, GSA officials stated that at times, GSA has had to negotiate shorter lease terms primarily to ensure that the lease will score as an operating lease—regardless of how long the agency expects to need the space. As a result, while the lease term established represents the legal responsibility of the government to pay for the lease, it may not reflect how long the space will actually be needed and therefore may understate the true cost of long-term leasing. Furthermore, some GSA and private sector officials stated that, at times, limiting the length of a lease term to ensure that the lease will score as an operating lease can be costly. For example, because lessors prefer the certainty of a long-term lease, they may be willing to negotiate lower annual payments for longer terms. In addition, when the commercial real estate market is struggling, GSA may not be able to take advantage of economic conditions by locking in a low annual rent for as long as possible. In addition to the limitations of the scoring analysis for analyzing whether leasing is the best alternative, prospectuses are developed as a “snapshot” in time, covering one lease term, and do not indicate the extent to which the agency has had a history in the current location or the expected duration of the agency’s space need. As a result, decision makers have no context with which to make fully informed decisions regarding the most cost effective way for the space need to be addressed. According to GSA officials, when there is a short-term need for a high-value lease, GSA may know how long the agency will need the space, and in those cases, may have included this information in the prospectus.
However, GSA officials stated that in many cases neither GSA nor the agency knows how long the agency will need the space, as changes to an agency’s mission and technology over time can affect space needs. Nevertheless, in our review, we found that 9 of our 12 case study leases included space for long-term or mission-critical space needs for tenant agencies. Some of the tenant agencies in these leases have been housed in successive operating leases far longer than GSA’s maximum 20-year lease term—situations that would lend themselves to an analysis of the extent to which it would be more cost effective for the government to own rather than lease. For example: One high-value lease we examined provides space for the Environmental Protection Agency’s (EPA) 10th Region headquarters in Seattle, Washington. EPA was the first tenant in the building when it was constructed over 40 years ago. At the time of our review, GSA was negotiating a new 10-year lease in the same space. However, the prospectus provides no indication of how long EPA has been in the building or EPA’s expected future need for the space. The new lease, if completed, will therefore result in 50 years of continuous occupation of this leased space in downtown Seattle—with no analysis of the cost implications of doing so and no recent consideration of the alternative of constructing owned space. According to GSA officials, there is no federally-owned vacant space in Seattle that would meet EPA’s needs. Another high-value lease we examined provides space for HHS in Rockville, Maryland. According to GSA officials, the building was built for HHS in 1970 and has been continuously occupied by HHS for over 40 years. A new 15-year lease for the same building begins in July 2015 and will expire in 2030—at which point HHS will have occupied the same building for close to 60 years. The new lease results in a space reduction at that location of about 28 percent.
In this case, the lessor’s most recent proposal, which GSA selected for the new lease through a competitive bid process, included a complete renovation of the existing building. GSA officials stated that the financing of renovations in a leased building is the lessor’s responsibility. According to several private sector officials, one advantage of GSA’s leasing rather than owning is that the private sector can often finance major renovations for which the public sector would have difficulty securing funding. In the case of the HHS building, the renovation is currently ongoing with plans to be completed in 2016. According to the lessor, in this case, the major renovations were financed based on the strength and security of the lessor having a long-term government lease. The challenge of funding renovations of federally owned space is something we have discussed in previous work. However, GSA officials noted that generally, a lessor’s investments in building renovations—including financing costs—are passed on to the leaseholder through the cost of the monthly lease payments over time, and the financing costs are likely to be higher than the costs to the government of borrowing money from the Treasury. Considerations of such trade-offs could be factored into an analysis of whether the government should own or lease such high-value space needs, but the prospectus for this lease did not consider alternatives to continuing to lease this long-term space need. Another high-value lease we examined provides space for the Department of State in Rosslyn, Virginia; this lease was authorized in legislation and so was not accompanied by a prospectus. The building was constructed to State’s security specifications, including a hardened lobby and exterior and extra security features in the parking garage. In addition, according to State officials, State invested an additional $80 to $100 million in secure technology and conference rooms with technologically advanced security features.
However, GSA signed this as a 10-year lease, mostly, according to GSA officials, so that it would score as an operating lease. In this case, a 10-year lease scored at 83 percent of the fair market value—and, according to GSA officials, was the longest term GSA could get for the lease without risking that the lease would score as a capital lease. As the 10-year lease’s expiration approached in 2012, GSA initially opened the competition for this requirement to a wider geographical area than Rosslyn. However, due to State’s concerns about potentially having to move farther away from State’s headquarters—which State sees as compromising its mission—plus the difficulty and expense of replicating all of the security-related technology invested in this building, State asked that the competition be canceled. In June 2013, GSA renewed this lease for 5 years, with an option to purchase the building at a market rate after the third year. According to State officials, the 5-year extension with purchase option will provide the government time to find and evaluate government-owned solutions to this long-term requirement. GSA officials agreed that the government should consider ownership when a large investment is required to move or replicate the current space. In another example, a high-value lease for two smaller agency headquarters—the Federal Maritime Commission and the National Archives and Records Administration—took up much of an entirely federally-leased building in Washington, D.C., across the street from the Government Printing Office. As the January 2013 expiration date for this lease approached, GSA decided that going forward, each agency would have its own lease, each of which would be below the prospectus threshold. However, both agencies ended up staying in the same building.
According to GSA officials, because the National Archives and Records Administration communicates remotely via laser links with the Government Printing Office in order to complete a mission-required activity of daily printing of The Federal Register, it needed to remain within a half mile of the Government Printing Office. When the lease came up for expiration, GSA conducted a competitive bidding process within a very narrowly defined geographical area, which produced no satisfactory offers from anyone other than the current landlord and led to difficult renegotiations with the landlord to remain in the same building at a rate GSA considered acceptable. Another element that represents a risk for the government and may be relevant in considering which of the high-value leases would be the most cost-effective to target for ownership, but that is not explored in the prospectus process, is the extent to which GSA is leasing entire buildings. We found that almost half of GSA’s high-value leases are either for an entire building or almost an entire building, or are in buildings where GSA has other leases so that GSA is effectively leasing the entire building. Specifically, 48 percent of these leases are in buildings that are 90 percent or more federally leased, and almost 60 percent are in buildings that are at least 75 percent federally leased. For example, GSA currently leases an entire building of about 300,000 rentable square feet in Ft. Worth, Texas, for two agencies—FAA occupies the majority of the space and the FBI occupies the rest. Both agencies have occupied this space for the past 20 years. As the expiration date for this lease approached, FAA indicated a requirement for increased square footage in order to consolidate staff into this lease from other nearby leased locations.
As a result, GSA, through a competitive bidding process, has selected a developer to build a facility to meet FAA’s needs that GSA has agreed to lease for 20 years and that FAA plans to fully occupy. According to GSA officials, the prospectuses typically provide information only on the space needs for the particular lease (or in some cases, leases) being proposed in the prospectus, and do not include information on any other leases that may be ongoing in the same building, or on the percentage of the entire building that GSA is leasing. Without this disclosure, decision makers have no way to fully assess the investment GSA is proposing. For example, for one of our case study leases in Washington, D.C., the prospectus proposed a replacement lease for up to 294,000 rentable square feet for several agencies currently located in a number of leases in one building—without mentioning that another GSA lease was also in that same building. The prospectus also did not include the information that together these two leases covered about 65 percent of the entire building, which is in a prime location in Washington, D.C., near the White House. According to our analysis of GSA data, in 6 other cases, high-value leases are in a building with either one or two other GSA leases so that altogether GSA’s leases encompass over 90 percent of the building’s occupancy. The lack of this contextual information in the prospectus further reduces the transparency with which GSA presents its leasing portfolio to government decision makers. Most of the 218 high-value leases had a prospectus or separate legislative authority indicating congressional committees’ approval of the lease. However, for 9 of these leases—involving a total net annual rent of about $50.2 million—GSA officials could not provide documentation showing congressional committees’ approval or legislative authority. According to GSA officials, in three of these cases, GSA mistakenly did not provide a prospectus to Congress.
For example, one of our case study leases, a 146,000 square foot lease in Los Angeles that houses the U.S. Army Corps of Engineers, has a 10-year term (from May 3, 2006 through May 2, 2016) with a net annual rent of $3.2 million, thus over the fiscal year 2012 prospectus threshold of $2.79 million. However, GSA did not submit a prospectus for this project prior to the beginning of the lease term or otherwise notify or obtain approval from the congressional authorizing committees. According to GSA officials, GSA’s regional office in San Francisco, California, did not take the proper steps in analyzing lease costs to determine whether a prospectus was needed for the project. Since 2006, GSA headquarters has substantially revised and standardized its guidance on prospectus-level leases, a revision that GSA officials in three regions told us was helpful in preventing such mistakes. In four cases, the lease started below the prospectus threshold, but over time new space was added in supplemental lease agreements that put the lease over the prospectus threshold. GSA officials stated that this occurred due to subsequent expansion to meet unforeseen agency needs. For example, this occurred with three of the Washington, D.C., metro area high-value leases. When this occurs, GSA officials stated that generally, GSA does not go back to Congress with a prospectus for approval until the lease approaches expiration. At that point, if the continuing space need is over the prospectus threshold, GSA will provide a prospectus to Congress. The result of these situations is a further limitation on the transparency of the prospectus process in providing decision makers information on the full scope of GSA’s high-value leased portfolio—information that could be used to analyze the extent to which leasing is the best alternative in these cases. 
Not submitting a prospectus for congressional approval hinders the ability of the appropriate congressional committees to fulfill their oversight responsibilities for all prospectus-level leases. According to GSA officials, although violations of process and policy are relatively rare, GSA plans to enhance its internal controls to reduce instances of prospectus-level leases not going through the proper process in the future. Table 1 provides a summary of the high-value leases we identified that did not have a prospectus or other legislative approval. Without evaluating alternatives to continuing to lease its high-value leases, GSA does not have information that it could use to prioritize potential capital projects for those space needs currently in high-value leases for which it would most benefit the federal government to own rather than lease. According to our and OMB’s analysis of leading capital planning practices, leading organizations have processes in which proposed capital investments are compared to one another to create a portfolio of major assets ranked in priority order. In our July 2012 report on GSA’s Federal Buildings Fund, we found that GSA’s project prioritization process for its capital program partially met leading practices but that it lacked transparency in that we were unable to determine how GSA used its criteria to prioritize major projects. We recommended that GSA document in its annual budget request to OMB how it uses its prioritization criteria to generate its annual and 5-year lists of prioritized projects to ensure that Congress understands the rationale behind the prioritized project lists and that GSA is maximizing return on Federal Buildings Fund investments. According to GSA officials, GSA is currently working to develop a document that will accompany its new capital plan and clarify its prioritization process for decision makers.
GSA officials told us that, with regard to high-value leases, GSA addresses its portfolio on an asset-by-asset basis. While it has not performed a systematic analysis to determine which space needs represented by high-value leases would be most beneficial to move to federal ownership—or prioritized these space needs for ownership accordingly—in some cases, GSA has turned leased space into federally-owned space. For example, GSA determined it would be more cost-effective to purchase rather than continue leasing the Columbia Plaza Building, which is occupied by State in Washington, D.C. A purchase option had been included in the 1992 lease at the request of Congress, and when exercised, it allowed GSA to purchase the building for about $100 million, well below the 2006 appraised value of $190 million. In 2009, Congress made funds available from the Federal Buildings Fund for the purchase. GSA officials stated that pursuing this purchase was clearly beneficial to the federal government, since the high-priority tenant was already in residence and committed to a long-term occupancy, leasing elsewhere would be costly, and the building would immediately become an income-generating asset. These types of considerations could be appropriate criteria for GSA to use in considering which of the agency space needs currently occupying high-value leases should be prioritized for a federally-owned solution. However, without a portfolio approach in which high-value leases are systematically evaluated to determine which space needs should be the highest priority for transferring to federal ownership, GSA has no documentation to help it or government decision makers determine how best to invest limited capital funds.
Two elements further limit the vision and comprehensiveness of GSA’s strategic capital planning process—GSA’s lack of consideration in its capital-planning process of the extent to which the existing high-value leases should be targeted for ownership and the lack of criteria to analyze and prioritize these projects among the other projects GSA considers for capital funding. Without a transparent prioritization of all major projects that would be more cost effective to own than to lease over the long term, GSA’s analysis of its portfolio is incomplete and is lacking core information to help decision makers work with GSA to manage its portfolio in a cost effective manner. Both OMB and GAO guidance emphasize the importance of developing a long-term capital plan to guide the implementation of organizational goals and state that making informed capital investment decisions requires full information about an agency’s current and long-term needs, alternative courses of action, and how potential projects compare among each other. Without having taken these steps for high-value leases, GSA has not incorporated the high-value leases that should be the highest priority for ownership into a long-term capital plan. Since 1991, we have reported that GSA would benefit from a comprehensive capital plan, stating that a capital plan could provide information on the potential benefits and cost savings of competing capital projects and provide a better context for making capital investment decisions. Recently, in our 2012 report on GSA’s Federal Buildings Fund, we found that GSA’s long-term capital plan minimally conformed to leading practices in that it did not incorporate the following: a baseline needs assessment including where there might be gaps in what GSA’s real property portfolio provides; an explanation of why projects selected are the best alternative; and alternatives to meeting project goals. 
Instead, among other things, in July 2012, we found that GSA did not rank all of its proposed projects together—instead ranking courthouse and land port-of-entry projects in their own list—making it difficult to compare GSA’s prioritization of projects across its portfolio. We found that a comprehensive long-term capital plan could further GSA’s ability to make informed choices about long-term investment decisions and recommended that GSA (1) document in its budget submission how it prioritizes capital investments and (2) develop and annually submit a 5-year long-term capital plan to OMB and Congress. GSA agreed with our recommendations. As of May 2013, GSA officials stated that GSA was undertaking a major revision of its capital plan to implement these recommendations. Just as GSA’s current capital plan does not prioritize all of its proposed capital projects in the same list or clearly explain why projects selected are the best alternative, GSA does not have a documented analysis of which, if any, of its high-value leases should be targeted for ownership and how such ownership might compare cost-wise to other capital projects it has included in its capital plan or budget request. The leasing-related strategic documents that GSA provided to us focus on optimizing the portfolio at the agency level through GSA’s recent customer portfolio planning effort. While this effort may improve GSA’s leased portfolio, it does not allow decision makers to compare the financial implications of GSA’s high-value leases portfolio-wide—across agencies and against capital projects. This lack of information on the long-term consequences of high-value leases could inadvertently contribute to the federal government’s overspending on agencies’ long-term space needs—even as the federal government tries to trim costs through reducing its leased footprint.
In contrast, a strategic vision for these leases that incorporates leading practices of capital decision making could better position the federal government to save money over time. Such a vision could take into account agencies’ current efforts to reduce space needs. For example, in considering the potential to move an agency division currently occupying leased space into federally owned space, GSA could incorporate into its analysis the extent to which additional leases, particularly smaller leases for the same agency in the same area, could be brought into newly constructed facilities if space needs continued to contract over time. Increased transparency could also promote collaboration with decision makers and better position GSA and tenant agencies to address funding and other challenges that are impeding progress in GSA’s efforts to reduce the federal real property footprint through improved space utilization as leases expire. According to GSA officials, GSA would welcome the opportunity to convert some of its high-value leases to federal ownership, stating that its reliance on costly operating leases has increased in recent years as a result of constraints on the Federal Buildings Fund and the budget scoring of leases. By focusing on cost savings through limiting the federal real property footprint, GSA’s efforts to proactively work with federal agencies to consolidate high-value and smaller leases as they expire, to move some high-value leases into government-owned space, and to help agencies increase space efficiency through such efforts as more open floor plans and increased telework have had some positive results. GSA’s work to optimize federal agency real property portfolios through better planning is also a step in the right direction. So far, however, these efforts have been undertaken for the most part on a lease-by-lease or agency-by-agency basis.
Our work on leading practices in capital decision making has emphasized that vision and leadership are crucial to the success of leading organizations. GSA, in its role as manager of real property for many civilian federal agencies, has the potential to set a vision and strategy for federal real property that addresses needs and priorities across federal agencies. However, with regard to high-value leases, which include space needs for over 40 federal agencies and departments and represent about one-third of GSA’s total net annual rent for leased facilities, GSA lacks a strategic focus for determining which should be converted to ownership. Indeed, as agencies work to shrink their footprint through increased space efficiency and telework, it could be an ideal time to make carefully targeted investments into owned facilities that would help move the federal government out of long-term, high-value leases and into efficient, federally owned space with lower long-term costs. However, the lack of transparency in GSA’s lease prospectuses means that Congress may not fully understand the length of an agency’s space needs and the costs of continuing to handle these long-term needs through leasing rather than ownership. In addition, if the transparency of the prospectuses is improved, Congress would still be considering each leasing action separately; to strategically manage these leases, it is important to consider them in the context of GSA’s entire real property portfolio, whether at the regional level for space planning or the national level for considering where to invest scarce federal funds. 
GSA lacks analysis, in line with capital-planning principles, of the effect of these long-term leases on its portfolio and on the tenant agencies. It therefore cannot share this information with Congress by, for example, incorporating into the capital plan we recommended that GSA develop in 2012 proposals for those space needs, currently housed in high-value leases, that would benefit most from a transfer to ownership. Moreover, cases in which high-value leases lack a prospectus further reduce the transparency of GSA’s full portfolio. Although these leases have been in effect for several years, it is nonetheless important that information on them be submitted to the appropriate committees to maintain GSA’s accountability to Congress in this area and allow the committees to exercise their oversight responsibility. Such information would provide GSA, OMB, and congressional decision makers with critical, transparent information on how to strategically manage GSA’s real property portfolio. To enhance transparency and allow for more informed decision making related to the appropriate role of leasing in GSA’s real property portfolio, we recommend that the Administrator of GSA take the following three actions: Include in the lease prospectus a description of the length of time that an agency estimates it will need the space, an historical account of how long the agency has been in the particular building it is occupying at the time of the prospectus, and any major investments the agency will have to make to the leased space to meet its mission. For those spaces for which the agency has a long-term projected need, also include an appropriate form of cost-to-lease versus cost-to-own analysis. Report to the appropriate congressional committees any leases above the prospectus threshold that did not follow the congressional prospectus process. 
Develop and use criteria to rank and prioritize potential long-term ownership solutions to current high-value leases among other capital investments. Use this ranking to create a long-term, cross-agency strategy that facilitates consideration of targeted investments in ownership. This strategy could be incorporated initially as a separate but related part of the capital plan we previously recommended that GSA develop in 2012, or integrated into the capital plan itself. We provided a draft of this report for review and comment to GSA and OMB. We also provided a draft of this report for review and comment to several other agencies we spoke with during the engagement because they are tenants of GSA leases, including EPA, HHS, the U.S. Department of Agriculture (USDA), Department of Defense (DOD), Department of Justice (DOJ), and Department of Transportation (DOT). GSA concurred with our recommendations and provided technical clarifications, which we incorporated as appropriate. GSA’s comments are discussed in more detail below, and GSA’s letter is reprinted in appendix II. EPA, HHS, USDA, DOD, DOJ, and DOT did not provide comments on the draft report. OMB did not comment on the draft report or recommendations. GSA stated that it will take action to implement the report’s recommendations and that it remains committed to sharing all available client and market information with Congress in the prospectus process. However, GSA raised the concern that some information may not be included in prospectuses due to requirements of GSA’s competitive real estate procurement process and today’s uncertain budget environment. We agree that GSA must adhere to the requirements of its competitive procurement process in carrying out the prospectus process. 
However, in most cases, the additional information we recommended be incorporated into prospectuses either has been included in prospectuses in the past—such as a lease versus purchase analysis—or is general information. Moreover, the information we recommended be included, even if it was modified to some degree to ensure adherence to GSA’s competitive procurement process, would provide valuable information to Congress that could help inform its decision making in this area. Regarding the uncertain budget environment, we reiterate that as agencies work to cut costs through increased space efficiency and telework, it could be an ideal time to make carefully targeted investments into owned facilities that would help move the federal government out of long-term, high-value leases and into efficient, federally owned space with lower long-term costs. Improved transparency in GSA’s lease prospectuses could help Congress fully understand the length of an agency’s space needs and the costs of continuing to handle these long-term needs through leasing rather than ownership. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of the Office of Management and Budget; the Administrators of General Services and Environmental Protection; and the Secretaries of Agriculture, Defense, Health and Human Services, Justice, Transportation, and State. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Our objectives were to (1) identify the characteristics of the General Services Administration’s (GSA) high-value leases and what actions, if any, GSA has taken to reduce their cost, and (2) assess the extent to which GSA’s capital-planning approach promotes informed decision making about leasing versus ownership. To identify the characteristics of GSA’s high-value leases, we analyzed data provided by GSA from GSA’s Real Estate Across the United States (REXUS) database to determine the number of active leases as of November 30, 2012, including those with a net annual rent at or above the fiscal year 2012 prospectus threshold of $2.79 million. We determined that 218 of GSA’s leases met the criteria of being at or over this prospectus threshold, and we defined these as high-value leases for purposes of this report. We used data from GSA’s central data system provided in building, space, lease, and rent files in our analysis to select and characterize the population of high-value leases. To determine whether these data were of sufficient reliability for our analysis, we reviewed the program documentation associated with the files and discussed various data elements with GSA staff responsible for the data. We also conducted our own electronic testing to check the consistency of the data and to reconcile the accuracy of certain lease numbers. We did not attempt to evaluate or test all of the aspects of the GSA data files, but instead focused on the high-value leases. As a result of our review and discussions, we determined that the data in the files provided by GSA were of sufficient reliability to be used in our analysis and for the purposes of this report. We analyzed data on each of these assets to describe characteristics of these leases, including the amount of leased space, net annual rent, rentable square footage, lease term, and tenant. We also analyzed GSA data to determine the total number, square footage, and cost of all of its leases. 
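The selection step described above reduces to a simple filter over the lease records. The sketch below is our illustration only, not GAO's or GSA's actual tooling, and the record fields and sample values are hypothetical; it flags leases whose net annual rent is at or above the fiscal year 2012 prospectus threshold of $2.79 million:

```python
# Illustrative sketch of selecting "high-value" leases from a lease
# extract, such as the REXUS data described above. Field names and
# sample records are hypothetical.
PROSPECTUS_THRESHOLD = 2_790_000  # FY2012 threshold, in dollars

leases = [
    {"lease_id": "LWA00001", "net_annual_rent": 2_900_000,  "tenant": "EPA"},
    {"lease_id": "LDC00002", "net_annual_rent": 450_000,    "tenant": "DOJ"},
    {"lease_id": "LVA00003", "net_annual_rent": 19_800_000, "tenant": "State"},
]

# A lease qualifies if its net annual rent meets or exceeds the threshold.
high_value = [l for l in leases if l["net_annual_rent"] >= PROSPECTUS_THRESHOLD]
print(len(high_value))  # 2 of the 3 sample records qualify
```

Applied to the full extract of active leases as of November 30, 2012, this kind of filter would yield the population of 218 high-value leases used in the analysis.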
We also reviewed the prospectuses or other legislative authority for the 218 high-value leases. For the 9 of the 218 high-value leases that did not have a prospectus, we obtained additional clarification from GSA officials. In addition, we reviewed GSA documents such as The State of the FY2011 Portfolio to obtain general information on the agency’s real property portfolio. To inform both objectives, we selected a non-generalizable sample of 12 high-value leases from the list of 218 we had identified to examine more closely as case studies. In selecting the 12 case study leases, we focused on leases that were near expiration or had recently been entered into in order to facilitate discussions with GSA on its decision-making process for these leases. We also focused on leases that were in holdover status, represented a variety of tenant agencies, and had a variety of net annual rents from close to the prospectus threshold of $2.79 million to significantly above the prospectus threshold. Because the majority of high-value leases are located in GSA’s National Capital Region (representing Washington, D.C., and parts of Northern Virginia and suburban Maryland), we selected six of the leases from that region. We selected the other six leases from the Northwest/Arctic, Pacific Rim, and Greater Southwest Regions because these regions were geographically diverse and had a relatively large portfolio of high-value leases. These 12 selected leases represented space for 14 different federal tenants, with rentable square feet ranging from over 99,000 to almost 802,000. Their net annual rent ranged from about $2.9 million to almost $20 million. For these leases, we reviewed numerous documents, including the lease contract and supplemental lease agreements, prospectus, House and Senate authorizing committees’ resolutions approving the prospectus, scoring analysis, and space plan. 
We also interviewed officials most knowledgeable about these leases from GSA regional and local offices and from the tenant agencies. In addition, we reached out to the lessors and, to the extent that they were willing, interviewed them about their experience working with GSA on the lease. We toured several of these buildings to inform our discussion of these leases. Our findings from these case studies cannot be generalized to the universe of 218 high-value leases we identified or to GSA’s leased portfolio. However, they illustrate examples of broader challenges and opportunities GSA faces in managing its high-value lease portfolio. In addition, we reviewed relevant legislation, GSA guidance, our prior work, and industry reports and studies related to federal leasing of real property. We interviewed GSA headquarters officials and regional officials in the National Capital, Northwest/Arctic, Pacific Rim, and Greater Southwest Regions. Together, these regions have more than 70 percent of the 218 high-value leases we identified. We also interviewed OMB staff, GSA Inspector General Office officials, and numerous private sector officials with experience in working with GSA on high-value leases. To assess the extent to which GSA’s capital planning approach promotes informed decision making about leasing versus ownership, in addition to the above steps, we analyzed our and OMB’s work on leading practices in capital planning. We identified leading practices for using information to make capital investment decisions from GAO’s Executive Guide and OMB’s Capital Programming Guide. We also drew from the National Research Council’s research in this area. In addition, we reviewed our recent work on capital planning in the context of GSA’s Federal Buildings Fund. We assessed whether GSA’s guidance practices conformed to the criteria established in these guides in the areas of alternatives evaluation, project prioritization, and long-term capital planning. 
We reviewed GSA documents, including its Leasing Desk Guide, budget requests for the past 4 fiscal years, its most recent capital plan (fiscal year 2011), its most recent call to regions regarding prospectus-level leases (the Capital Investment and Leasing Program, or CILP, plan), criteria for ranking proposed capital projects, and GSA data and information on holdovers in 2012 and lease losses for fiscal years 2005 through 2011. We also interviewed GSA officials in the Office of Portfolio Management to understand GSA’s perspective on capital planning in the context of its high-value leased portfolio. We conducted this performance audit from September 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, David Sausville (Assistant Director), Carol Henn, Joshua Ormond, Kelly Rubin, Larry Thomas, Jim Ungvarsky, Pamela Vines, Crystal Wesco, Alwynne Wilbur, and Jade Winfree made key contributions to this report. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013. Federal Real Property: Improved Data Needed to Strategically Manage Historic Buildings, Address Multiple Challenges. GAO-13-35. Washington, D.C.: December 11, 2012. Federal Real Property: Strategic Partnerships and Local Coordination Could Help Agencies Better Utilize Space. GAO-12-779. Washington, D.C.: July 25, 2012. Federal Buildings Fund: Improved Transparency and Long-term Plan Needed to Clarify Capital Funding Priorities. GAO-12-646. Washington, D.C.: July 12, 2012. Federal Real Property: National Strategy and Better Data Needed to Improve Management of Excess and Underutilized Property. GAO-12-645. 
Washington, D.C.: June 20, 2012. Federal Real Property: Overreliance on Leasing Contributed to High-Risk Designation. GAO-11-879T. Washington, D.C.: August 4, 2011. Federal Real Property: Progress Made on Planning and Data, but Unneeded Owned and Leased Facilities Remain. GAO-11-520T. Washington, D.C.: April 6, 2011. Federal Real Property: Strategy Needed to Address Agencies’ Longstanding Reliance on Costly Leasing. GAO-08-197. Washington, D.C.: January 24, 2008. Federal Real Property: Progress Made Toward Addressing Problems, but Underlying Obstacles Continue to Hamper Reform. GAO-07-349. Washington, D.C.: April 13, 2007. Executive Guide: Leading Practices in Capital Decision-Making. GAO/AIMD-99-32. Washington, D.C.: December 1998. Real Property Management Issues Facing GSA and Congress. GAO/T-GGD-92-4. Washington, D.C.: October 30, 1991.
Overreliance on costly leasing is one reason that federal real property has remained on GAO's high-risk list. GAO's work has shown that building ownership often costs less than leasing, especially for long-term space needs. For leases with a net annual rent above a threshold--$2.79 million in fiscal year 2012--GSA is required to submit a prospectus, or proposal, to Congress. GAO was asked to review these high-value leases. This report (1) identifies their characteristics and what GSA has done to reduce their cost and (2) assesses the extent to which GSA's capital-planning approach supports informed leasing decisions. GAO reviewed GSA data for all 218 active high-value leases as of November 2012 and selected 12 leases for case studies based on expiration dates, locations, and tenant agencies. GAO reviewed relevant legislation and guidance, interviewed agency officials, and compared GSA actions to leading practices. The General Services Administration's (GSA) 218 high-value leases GAO reviewed represented only about 3 percent of the total number of GSA leases, yet made up about one-third of its leased portfolio in terms of cost and size. GSA has reduced the costs of its high-value leases in line with the administration's goal to reduce real property costs. GSA's efforts include helping agencies improve space utilization. However, for leases nearing expiration, GSA and tenant agencies have faced challenges in funding space renovations and moving costs. This lack of funds has contributed to delays and some cases in which GSA continues to occupy space after the lease expires. GSA officials stated that for most high-value leases, federal ownership would be more cost effective over the long term, but GSA did not have the funding available to purchase, renovate, or construct a building. GAO found that GSA's capital-planning approach lacks transparency and a strategic focus that could support more informed decision making in this area. 
Specifically, GSA does not follow capital-planning practices involving alternatives evaluation, project prioritization, and long-term capital planning: GSA's lease prospectuses do not discuss the length of time of the space need or alternative approaches to meeting it--which are key to understanding whether leasing or owning would be more cost-effective. Twenty-seven of the prospectuses (for leases expiring from 2012 through 2027) contained an analysis that showed potential savings of over $866 million if the spaces were owned rather than leased. GSA and OMB have decided the analysis is no longer necessary in light of the lack of capital funding for acquisitions and construction. GAO's case studies highlighted long-term, mission critical space needs, such as a lease for the Environmental Protection Agency in Seattle for space it has occupied for over 40 years. Another high-value lease is for the State Department's diplomatic security bureau in Virginia. State invested at least $80 million in security upgrades into a facility that GSA leased for 10 years. Further, GAO found that nine ongoing high-value leases did not go through the prospectus process. For example, GSA mistakenly did not prepare a prospectus for a 10-year Los Angeles lease for the U.S. Army Corps of Engineers. GSA did not notify Congress of these leases, further limiting transparency. GSA has not systematically prioritized which space needs currently in high-value leases it would be most beneficial to move to federally-owned solutions. GSA has not incorporated space needs that are the highest priority for ownership investment into a long-term capital plan. This lack of information on the long-term consequences, including costs and risks, of high-value leases could inadvertently contribute to the federal government's overspending on long-term space needs. 
In contrast, a strategic vision incorporating leading practices for capital decision making could better position the government to save money over time. Increased transparency could promote collaboration with decision makers, which could help GSA address challenges and identify cost savings opportunities as leases expire. GSA should enhance the transparency of decision making for high-value leases by (1) including more information in the prospectus to Congress, such as the agency's prior and future need for the space, major investments needed, and an appropriate analysis of the cost of leasing versus the cost of ownership; (2) reporting to congressional committees about certain leases without a prospectus; and (3) prioritizing potential ownership solutions for current high-value leases to help create a long-term strategy for targeted ownership investments. GSA concurred with the recommendations.
Customs’ mission is to (1) ensure that merchandise and persons entering and exiting the United States do so in compliance with U.S. laws and regulations and (2) collect revenue from international trade. Customs collected $22.1 billion in revenue at more than 300 ports of entry in fiscal year 1998. Customs performs its mission with a workforce of nearly 20,000 personnel at its headquarters, 20 Customs Management Centers, 20 investigative offices, 5 Strategic Trade Centers, and 301 ports of entry around the country. Customs established a two-step procedure to process merchandise imported into the United States. During the first step, known as cargo release, Customs assumes direct control of the merchandise and uses an inspection process to verify that the cargo meets import requirements and is properly and accurately documented. When Customs determines that these requirements have been met, the cargo is released. During the second step, referred to as entry summary, Customs selects for review some of the detailed paperwork that has been submitted by the importer. Customs subsequently liquidates the importation (completes the transaction) after determining that the appropriate import duty has been paid. Although cargo release and entry summary are Customs’ major programs for ensuring compliance with trade laws, its commercial fraud, fines, penalties, and forfeitures program is its major weapon against violators of these laws. Customs also assesses liquidated damages when an importer does not comply with regulations. Civil monetary penalties, on the other hand, are assessed for violations, such as misclassification, knowingly falsifying the country of origin, and other fraudulent acts. Customs usually takes seizure actions when merchandise is illegal or not admissible to the United States. 
Although Customs agents, inspectors, and import specialists assess penalties and make seizures, it is the Fines, Penalties, and Forfeitures offices that are responsible for administrative processing and tracking of all liquidated damages, penalty, and seizure cases. Customs has been performing these activities for many years, long before the Mod Act, and continues to perform them in addition to its informed compliance efforts. For over 15 years, Customs has used its Automated Commercial System (ACS) to store and process import information and to manage import-related activities, such as collecting revenue and capturing trade statistics. ACS allows Customs to identify, track, and control imported merchandise during cargo release and entry summary liquidation processing. It also allows Customs to retrieve import information whenever needed. In the late 1980s, Customs recognized the need to overhaul, streamline, and update its automated data processing capabilities and reorient its business processes. Customs also realized that it needed to work with the trade community and Congress to forge legislation for meaningful change. After several attempts, compromise legislation acceptable to Customs, Congress, and the trade community was developed. This legislation, the Customs Modernization and Informed Compliance Act or Mod Act, which allowed Customs to automate its processes incrementally and to be flexible and innovative in redesigning its business processes, became law on December 8, 1993, as Title VI of the North American Free Trade Agreement Implementation Act. The Mod Act introduced two new concepts: informed compliance and shared responsibility. These concepts were premised on the theory that in order to maximize voluntary compliance with Customs laws and regulations, the trade community needed to be fully and completely informed of its legal obligations. 
In addition, Customs was to effectively communicate its requirements to the trade community, and the people and businesses subject to those requirements were to conduct their regulated activities in conformance with U.S. laws and regulations. The trade community was to use reasonable care in meeting its responsibilities. According to Customs, there is a general consensus that a “black and white” definition of reasonable care is impossible because the concept of acting with reasonable care depends upon individual circumstances. In lieu of a definition, Customs has issued a checklist of measures for importers to use as guidance in meeting the reasonable care requirements. Most import activity is attributable to a relatively small group of importers. In fiscal year 1998, Customs processed shipments with a total value of about $897 billion for more than 443,000 commercial importers. Only 1,000 of these importers, or less than 1 percent, accounted for about 60 percent of import value—a total of $538 billion. These percentages have remained fairly constant for several years, at least since fiscal year 1996. Customs determined that these top 1,000 importers are in a position to have a significant impact on trade compliance rates and introduced a “big player focus” towards trade compliance. In addition to big players, Customs directed its trade compliance efforts toward primary focus industries (PFIs). Customs selected industries as PFIs if they were considered vital to U.S. national interest on the basis of a number of factors, including strategic importance, international trade agreements, health and safety, and economic concerns. For fiscal year 1998, Customs selected the following PFIs for trade compliance attention: Critical Components (Bearings and Fasteners), Textiles and Wearing Apparel. 
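The concentration figures above are straightforward to verify arithmetically. The quick check below is our illustration of the report's fiscal year 1998 numbers, not part of Customs' analysis:

```python
# FY1998: 1,000 of more than 443,000 commercial importers accounted
# for $538 billion of the roughly $897 billion in total import value.
top_importer_share = 1_000 / 443_000   # top importers as a share of all importers
top_value_share = 538 / 897            # their share of total import value

print(round(top_importer_share * 100, 2))  # 0.23 -> "less than 1 percent"
print(round(top_value_share * 100))        # 60 -> "about 60 percent"
```

That is, roughly a quarter of one percent of importers accounted for about 60 percent of import value, which is why Customs concluded that these top 1,000 importers could significantly influence trade compliance rates.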
In addition to focusing on big players and PFIs, Customs developed and implemented several key initiatives and actions as part of its informed compliance strategy, including (1) information programs, (2) compliance measurement, (3) compliance assessment, (4) account management, and (5) responses to noncompliance. The remainder of this report will discuss these five initiatives and actions, Customs’ implementation of them, and their results. Providing information to importers to inform them about trade laws, regulations, and Customs policies and procedures is not new; it has been going on for years. However, under its informed compliance strategy, developed as a result of the Mod Act, Customs enhanced its basic information program and developed a new targeted information program to provide the importing community with relevant information concerning its responsibilities and rights under Customs laws and regulations. Through these two programs, Customs provided importers with extensive information using the Internet, an electronic bulletin board, seminars, and informed compliance publications on such topics as value, classification, reasonable care, and recordkeeping requirements. Ports of entry around the country also provided informed compliance information to their local importing communities. Limited feedback that we obtained from several major importers indicated overall satisfaction with Customs’ informed compliance information efforts. According to the Commissioner of Customs’ May 1996 memorandum for trade community members on informed compliance strategy, the basic information program was intended for all parties involved in importing. 
Using the program, Customs would continue to issue rulings on the proper classification of imported merchandise; give the trade community an opportunity to comment on draft regulatory documents by posting the documents on the Customs Electronic Bulletin Board (CEBB); establish an educational outreach program to educate the trade community on Mod Act responsibilities; establish a Customs Web server for dissemination of Customs information; increase the knowledge of Customs staff through internal and external training; and consider making information, such as Customs Bulletins, notices, and directives, available via CD-ROM. In accordance with its informed compliance strategy, we found that in calendar year 1998, Customs had issued over 13,000 rulings, posted 7 draft regulatory documents to the CEBB for public comment, and developed 13 informed compliance publications. In addition, in fiscal year 1998, Customs conducted over 130 internal and external educational seminars. Customs established its Web site on the Internet in August 1996, recording 1.5 million visits to the site in its initial year. Customs chose not to pursue distribution of such information as Customs Bulletins, notices, directives, and other informed compliance materials by CD-ROM because the information became accessible once the Web site was established. The CEBB was established in January 1991 to provide importers access to current, relevant Customs operations and trade information. Enhanced for informed compliance purposes, news releases, rulings, and about 25 other subject areas can be accessed through the CEBB. Almost all information on the CEBB can also be accessed through the Customs Web site. In March 1999, we accessed CEBB files through the Web site and found that one subject area, Mod Act Information, contained 70 information files, including draft and final regulations, Customs’ informed compliance strategy, and numerous informed compliance publications. 
These files included, for example, informed compliance publications that discussed reasonable care and recordkeeping requirements. According to Customs, the CEBB will eventually be phased out and all data integrated into the Web site. The Customs Web site contains an extensive array of information, including regulations and rulings, merchandise tariff classification and entry procedures, marking requirements, and informed compliance strategy and publications. Web page selections include such topics as “About U.S. Customs” and “Importing and Exporting.” As of April 1999, over 10.6 million visits to Customs’ Web site had been recorded since it was established in August 1996. As part of its efforts to educate the importing community on its responsibilities, Customs developed an informed compliance publication series. The publication series entitled “What Every Member of the Trade Community Should Know About:***” addressed trade issues, such as merchandise classification, customs value, and reasonable care. Thirty-four trade topics have been covered in this series since its inception in 1996. Customs received positive feedback from the trade community about this series and its applicability toward understanding informed compliance responsibilities. According to the Commissioner’s May 1996 memorandum, the targeted information program was designed to provide information and assistance to the importers beyond that provided through the basic program. The targeted information program was primarily aimed at industries and certain trade segments that required special efforts to deal with compliance issues. The targeted information programs used a variety of communication methods, including development and distribution of industry- and/or commodity-specific seminars and industry association sponsored meetings, importer visits, and videotapes. 
Customs has produced a number of commodity- and industry-specific publications under its “What Every Member of the Trade Community Should Know About:***” series. In fiscal year 1998, such publications as Ribbons & Trimmings, Footwear, and Lamps, Lighting and Candle Holders were issued as guides to help with classification of these commodities. Customs also produced newsletters and other publications for specific industries. One newsletter entitled Production Equipment Trade Educator focused on classification and valuation of production equipment. Another was The Auto Book: A Practical Guide to Classification of Vehicles, Parts and Accessories under the Harmonized Tariff Schedule, geared, of course, to the auto industry. In addition, Customs officials made 69 presentations to the trade community across the country on specific topics, such as bearings, production equipment, and wood products. Presentations were made at industry association meetings, ports of entry, and trade conferences. Customs did not request formal feedback from the trade community as a means of assessing satisfaction with the information presented at its seminars. However, Customs officials told us that they had received letters from the trade community that were complimentary of the presentations and the usefulness of the information provided. Furthermore, Customs officials visited many importers to discuss new programs and initiatives and to provide instructions on how to properly classify imported merchandise. Customs did not compile information on the number of visits made to importers. Customs also issued three videos on topics considered of high interest to the trade community: Account Management, Informed Compliance, and Textile Rules of Origin. 
According to the Commissioner’s May 1996 memorandum, ports of entry were also to develop and implement informed compliance activities to ensure that the local trade community was informed of trade laws and regulations and Customs policies and procedures. We visited two ports of entry, Seattle, WA, and Los Angeles, CA, to gain an understanding of local informed compliance activities. The Seattle port of entry published a trade newsletter; held monthly meetings to discuss issues of concern to the importing community; held port-sponsored seminars and workshops several times a year; held open house events and tours to meet and greet trade representatives; and visited importers to promote informed compliance activities. The Port of Los Angeles (Los Angeles Airport and the Los Angeles/Long Beach Seaport) issued public bulletins to notify the trade community of activities; held monthly and quarterly meetings with importing trade associations; held port-sponsored seminars; and held open house events that included a tour of the airport Customs facility. Customs officials told us that although specific documentation was not compiled, ports of entry across the country have been involved in informed compliance activities. The officials stated, however, that some ports were more proactive than others and had organized numerous activities, while others had few. As part of our review, we asked for the views of nine importers regarding the basic and targeted information programs. We asked the importers a series of questions, including whether (1) they used the Customs Web site, (2) Customs seminars they attended were informative, and (3) they felt that Customs was doing a good job providing information to assist them to voluntarily comply with Customs laws and regulations. All nine importers we interviewed responded that they thought the Web site was very useful as well as a great source of information.
The importers said that they checked the Web site frequently for relevant, current information. Some importers also commented, however, that although the Web site provides importers greater access to Customs information, there is a great deal of information to sort through to find what may be relevant to a company. Many importers also stated that Customs’ presentations at various seminars were generally informative. A few of the importers suggested that Customs act more quickly in holding seminars, once new changes or new programs were introduced. Several of the importers we interviewed commented that the publications were informative and provided a good source of basic level information. Overall, importers we interviewed said Customs’ efforts to provide the trade community with adequate and timely information were generally sufficient, and its efforts to keep the trade community informed had improved since the Mod Act. In response to Mod Act requirements, Customs began in fiscal year 1995 to measure and report to Congress on the importing community’s level of compliance with trade laws and regulations. In fiscal year 1996, Customs established goals to attain overall trade compliance rates of 90 percent and PFI compliance rates of 95 percent by fiscal year 1999. Overall trade compliance rates, however, have remained static at about 81 percent from fiscal year 1995 through fiscal year 1998. PFI rates have also remained static at nearly 84 percent from fiscal year 1996 through fiscal year 1998. Customs recently extended both goals out to fiscal year 2004. Customs also established a goal to collect at least 99 percent of revenue due, which was last achieved in fiscal year 1996. Projected revenue collection rates have decreased from 99.37 percent in fiscal year 1995 to 98.35 percent in fiscal year 1998. This amounts to projected net revenue underpayments increasing from $135 million in fiscal year 1995 to $343 million in fiscal year 1998. 
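The relationship between a collection rate and a net underpayment can be illustrated with a short calculation. This is a minimal sketch under the assumption that revenue due equals collections plus the net underpayment; the report does not spell out Customs' actual projection methodology, and the dollar figures below are hypothetical, not the report's data.

```python
def collection_rate(collected, net_underpaid):
    """Percent of revenue due that was actually collected.

    Assumes revenue due = amount collected + net underpayment; Customs'
    real projection methodology is not described in the report.
    """
    due = collected + net_underpaid
    return 100.0 * collected / due

# Hypothetical figures in billions of dollars, for illustration only:
# roughly $22 billion collected against a $0.35 billion net underpayment.
print(round(collection_rate(22.0, 0.35), 2))  # 98.43
```

The sketch shows why a drop of about one percentage point in the collection rate corresponds to a net underpayment in the low hundreds of millions of dollars on a revenue base of this size.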
Customs describes compliance measurement as a process of physical inspections of merchandise and/or Customs entry summary documentation reviews to determine the compliance rate of transactions. Compliance measurement is a statistically valid method of determining compliance by means of examinations that are based on Harmonized Tariff Schedule classifications. Compliance measurement results enable Customs to assess the performance of major industries, including PFIs, major importers, and its own performance concerning revenue collection and enforcement of trade laws. According to Customs, compliance measurement also provides the basis for working with importers in improving their compliance and in developing and implementing Customs’ strategies to improve compliance. In response to Mod Act requirements, Customs established the compliance measurement program on April 7, 1994. During fiscal year 1994, Customs trained port personnel responsible for conducting cargo inspections and document reviews and measured the compliance of 15 industries in preparation for overall program implementation. In fiscal year 1995, Customs conducted the first national compliance measurement of imports across the entire spectrum of the Harmonized Tariff Schedule to establish a compliance baseline for use in comparisons with future measurement and projections. Customs began to focus compliance measurement efforts on PFIs during fiscal year 1996 to determine compliance rates for specific industries importing automobiles, bearings, and textiles, among other commodities and merchandise; and to direct informed compliance efforts, such as seminars, toward targeted industries experiencing low trade compliance. During fiscal year 1997, Customs linked compliance assessment results with compliance measurement results to improve its capability to measure and identify noncompliance.
This improvement was designed to allow Customs to perform a minimum number of inspections on compliant importers and an increased number of inspections on noncompliant importers. In its fiscal year 1998 Trade Compliance Measurement Report, Customs introduced the concept of significance into the compliance measurement process. Customs applied criteria to violations discovered during compliance measurement examinations and document reviews to differentiate between discrepancies, such as clerical errors, and more egregious or willful violations, including narcotics smuggling and intellectual property rights infringement. Measuring a violation’s significance allows Customs to focus its resources on the most significant trade violations. Since Customs started measuring and reporting compliance, overall and PFI compliance rates have remained static from fiscal year 1995 through fiscal year 1998 (see fig. 1). Customs officials attributed the static compliance rates, in part, to Customs’ increasing ability to detect noncompliance by conducting more thorough and uniform cargo examinations and document reviews and using sophisticated analytical tools. Customs officials explained that the more familiar inspectors and import specialists became with cargo inspected for compliance measurement, the more likely they were to detect discrepancies. Customs also credited the use of sophisticated analytical tools to analyze compliance measurement data, develop importer compliance profiles, and identify potential trends of noncompliance. According to Customs officials, these analytical tools greatly enhanced Customs’ ability to detect and react to trends indicating potential noncompliance that may otherwise have remained undetected. The conclusions of a Customs analysis of the auto/truck parts industry, however, may provide another explanation for static compliance rates. 
The analysis indicated that importers too small to justify the level of attention Customs affords large importers—for example, providing account managers or compliance assessments—had the lowest aggregate compliance rate and generated a disproportionate share of compliance discrepancies within the industry. The analysis concluded that unless the aggregate compliance rate for small companies improves dramatically, auto/truck parts industry compliance may never rise above 89 percent even if the compliance rate for large companies rises to 95 percent. It also concluded that Customs must pursue the challenge of raising small company compliance within the auto/truck parts industry. In addition, Customs acknowledged, in its fiscal years 1997 and 1998 Accountability Reports, that its goal of achieving 90 percent overall compliance and 95 percent for PFIs by 1999 as originally planned, and later adjusted to the year 2000, was overly optimistic. According to its Fiscal Year 2000 President’s Budget Justification Materials, Customs anticipates achieving both goals by fiscal year 2004 but acknowledged that further adjustments may be needed as more experience is gained. Customs officials stated that these goals are also dependent on budgetary resources and automation funding. Customs reported an overall compliance rate and a significance compliance rate in its 1998 Trade Compliance Measurement Report. The 89 percent significance compliance rate was higher than the 81 percent overall compliance rate. Customs stated that for compliance measurement, a discrepancy is indicated whenever any of the diverse trade laws, regulations, and agreements are violated. This is, in effect, a “letter-of-the- law” definition of discrepancy that has been used since the beginning of compliance measurement. In an attempt to increase the relevance of compliance measurement, however, Customs established a task force in 1997 to review the discrepancy definitions and apply a standard for significance. 
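The auto/truck parts conclusion follows from simple weighting arithmetic: the aggregate rate is an average of segment compliance rates weighted by each segment's share of examined transactions. The sketch below uses hypothetical shares, since the report does not give the actual segment weights within the industry.

```python
def aggregate_rate(segments):
    """Weighted average compliance rate across company segments.

    segments: list of (share_of_examinations, compliance_rate_percent)
    pairs; the shares below are invented for illustration.
    """
    total_share = sum(share for share, _ in segments)
    return sum(share * rate for share, rate in segments) / total_share

# Hypothetical split: large companies at the 95 percent goal account for
# 60% of examinations; small companies at 78 percent account for 40%.
print(round(aggregate_rate([(0.60, 95.0), (0.40, 78.0)]), 1))  # 88.2
```

Even with large companies at the 95 percent goal, a low small-company rate holds the weighted aggregate below the 90 percent target, which is the pattern the Customs analysis described.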
The task force identified criteria to distinguish major discrepancies involving illegal narcotics, intellectual property rights, and forced labor violations, among others, which Customs always considers significant, from nonmajor discrepancies such as clerical errors. Customs applied its standard for significance to the compliance measurement process to identify and address major compliance problems before considering less important or inconsequential issues. Customs officials told us that they intend to continue compiling and reporting both overall and significance compliance rates and would not limit their compliance measurement program to one or the other. The officials did, however, expect to have an internal dialogue about the significance discrepancy definition applied to compliance rates and its place and use in compliance measurement. Although compliance rates have remained static from fiscal years 1995 to 1998, projected revenue collection rates have decreased for the same period, from 99.37 percent in fiscal year 1995 to 98.35 percent in fiscal year 1998. This decrease amounted to projected net revenue underpayments increasing from $135 million in fiscal year 1995 to $343 million in fiscal year 1998 (see fig. 2). The projected revenue collection rates decreased and the projected net revenue underpayments increased while total gross revenue collections dropped from $23.1 billion to $22.1 billion during this time period. In its fiscal year 1997 Accountability Report, Customs attributed the increase in projected net revenue underpayments to refinements in accumulating and projecting revenue data. Customs officials said that they were trying to reverse the situation but did not provide information about any steps that they were taking. A compliance assessment is a review of an importing company’s Customs systems and procedures, including internal controls, to ensure that its imports are in compliance with U.S. laws and regulations.
The goal is to ensure maximum compliance. In fiscal year 1997, Customs estimated that it would take 8 to 10 years to complete compliance assessments at the top 2,100 importers based on the value of imports. However, because Customs completed only 209 compliance assessments from fiscal year 1996 through March 31, 1999, it appears unlikely that Customs will be able to achieve that goal. To expedite the lengthy compliance assessment process, Customs implemented a revised approach in July 1999, but it is too early to determine the impact of the revisions on Customs’ ability to meet its goal. Customs began conducting follow-up reviews at importers who had received compliance assessments in fiscal year 1998. The reviews were intended to determine whether importers had taken corrective action to improve their internal controls over imports and had improved compliance. However, Customs has not yet developed a methodology for evaluating the overall impact of compliance assessments on importer compliance with U.S. laws and regulations. Our analysis of 59 importers that had compliance assessments completed by the end of fiscal year 1997 raised some concerns about the impact of compliance assessments on overall compliance rates. In many cases, the compliance rates for the 59 individual importers were based on few examinations and were therefore not statistically valid, but they serve as indicators that compliance assessments may not be maximizing compliance at many importers that have received them. This analysis showed that from fiscal year 1996 to fiscal year 1998, compliance worsened for 20, improved for 27, and stayed the same for 4. Eight importers already were in full compliance, and they stayed that way. For many years, Customs has conducted regulatory audits of importer records to verify compliance with U.S. laws and regulations. In October 1995, Customs implemented a different kind of audit—compliance assessments. 
The primary focus of regulatory audits is to identify lost revenue, while the primary focus of compliance assessments is to work with importers to ensure that their imports comply with U.S. laws and regulations. The Regulatory Audit Division is responsible for performing compliance assessments with assistance from import specialists, account managers (if assigned), and other staff, as needed. Compliance assessments include evaluating an importer’s operating practices and internal controls supporting its Customs-related activities. Assessments also include statistical sampling of entry transactions from the importer’s previous fiscal year. Each assessment involves a minimum review of compliance in five trade areas (classification, value, quantity, special duty provisions, and recordkeeping). The findings of compliance assessments are to be used to determine the frequency of future compliance measurement examinations. Companies are categorized as low, moderate, or high risk on the basis of compliance assessment results. According to Customs, poor compliance would mean higher risk and therefore more examinations. When a compliance assessment indicates the need for corrective action to ensure compliance, the importer is to be asked to prepare and implement a Compliance Improvement Plan. These plans are to outline the specific deficiencies that the importer needs to correct, how the operating practices and internal controls will be changed, and the time frame for taking corrective action. According to Customs, follow-up reviews are conducted to (1) verify that corrective action was completed and compliance improved and (2) determine whether the risk category can be changed and the number of examinations reduced. Customs targeted the top 1,000 importers on the basis of the value of imports and the top 250 importers by value in each of the 8 PFIs to receive compliance assessments; about 2,100 importers altogether.
As of March 31, 1999, Customs had completed 209 compliance assessments (see table 1), and another 164 had been initiated. In fiscal year 1997, Customs estimated that it would take 8 to 10 years to complete the 2,100 compliance assessments with the existing staff and a completion rate of about 210 to 263 compliance assessments annually. However, Customs has not been able to complete nearly that number of assessments annually; 15 were completed in fiscal year 1996, 61 were completed in fiscal year 1997, and 92 were completed in fiscal year 1998. In both the fiscal year 1999 and 2000 budget submissions, Customs requested 100 additional auditors to perform compliance assessments. According to the narrative justifying these requests, 250 additional auditors over the current 400 were needed to put compliance assessments on a periodic cycle that will allow Customs to conduct assessments at targeted importers once every 5 years. Customs requested 100 new auditors because that is the optimum number that Customs believes it can train and assimilate into the program at one time. The Treasury Department approved the fiscal year 1999 budget request for 100 additional auditors, but the Office of Management and Budget did not. The Treasury Department did not approve the fiscal year 2000 budget request. Customs was planning to include 100 additional auditors to perform compliance assessments in the fiscal year 2001 budget request. The Director of the Regulatory Audit Division told us that action has been taken to expedite the compliance assessment process because these assessments have been lengthy and time consuming. For the 168 compliance assessments completed by September 30, 1998, the median number of days elapsed was 428, and the median number of staff hours expended was 1,698. The Director told us that the staff hours were understated, however, because they include only Regulatory Audit staff hours.
Total compliance assessment hours are unknown because Customs does not track hours spent by staff in other offices, such as Strategic Trade Center staff, who prepare importer profiles prior to the assessments, and import specialists. Customs had implemented three initiatives to expedite the compliance assessment process: establishing standards and guidelines for the length of compliance assessments, reducing the number of entries reviewed during an assessment, and establishing an importer-assisted assessment methodology designed to perform assessments more rapidly. According to the Regulatory Audit Division Director, the preliminary results of these initiatives suggest the potential to shorten the compliance assessment process, but further experience is needed to know just how much impact they will have. In November 1997, the Regulatory Audit Division established a 9-month (270-day) target for completing compliance assessments from the entrance conference with the importer through completion of a compliance assessment report. Fourteen of 18 compliance assessments started since the new policy was issued and completed by March 31, 1999, were completed in less than 270 days. The median number of calendar days elapsed for the 14 assessments was 220. The median number of days elapsed for the other four assessments was 291 days. The Regulatory Audit Division also developed staff hour guidelines for performing compliance assessments. The guidelines state that staff hours expended should vary depending on the scope of the compliance assessment, whether a compliance improvement plan is needed, and other factors. The Regulatory Audit Division Director told us that he uses 1,500 hours as a general rule of thumb for planning staff resource utilization.
Using 1,500 hours as the criterion for the number of staff hours expended, we found that 16 of 18 compliance assessments initiated and completed since the new policy was issued required less than 1,500 hours; and the median number of staff hours expended was 1,024 hours. The other two assessments took 1,668 and 2,883 staff hours to complete, respectively. In July 1999, the Regulatory Audit Division reduced the maximum sample size of entries to be reviewed from 220 to 100 for most trade areas. Prior to adopting the reduced sample size, Customs tested using the smaller sample size at five importers but did not perform a detailed analysis of the impact on staff hours and calendar days. Customs prepared a brief summary, however, which indicated that smaller samples provided sufficient coverage, reduced workload for both Customs and the importer, and reduced the time needed to perform compliance assessments. A process called Controlled Assessment Methodology (CAM) was developed to allow importers to voluntarily perform much of the compliance assessment with verification by Customs auditors. CAM has the same test and sampling parameters as a standard compliance assessment, except that the importer is to provide staff to assist in the assessment. Customs prepares a written work plan that includes applicable audit steps and time frames for the importer to perform. When the work is completed, Customs verifies its accuracy. The Regulatory Audit Division expects that some importers will be willing to choose this option for several reasons, including (1) a less intrusive compliance assessment process; (2) improved importer understanding of their own operations; and (3) elimination of duplicate effort, which frequently occurs when importers self-assess their efforts in advance of the Customs assessment without Customs guidance. As of April 19, 1999, compliance assessments had been completed at 13 importers that elected to participate in CAM. 
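One way to gauge what the smaller 100-entry samples give up in precision is the normal-approximation confidence interval for a sampled compliance proportion. This is an illustrative sketch only; the report does not describe the statistical design Customs used to validate the reduced sample size, and the 81 percent rate below is simply the overall compliance rate reported elsewhere, reused as an example input.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% normal-approximation confidence interval
    for a compliance proportion p estimated from a sample of n entries."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.81  # illustrative compliance rate
print(round(100 * margin_of_error(p, 220), 1))  # 5.2 percentage points
print(round(100 * margin_of_error(p, 100), 1))  # 7.7 percentage points
```

Under these assumptions, cutting the maximum sample from 220 to 100 entries widens the interval around an estimated rate by roughly two and a half percentage points, which is the precision-for-speed trade the smaller samples make.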
According to the Regulatory Audit Division Director, early experience with CAM suggests that it does expedite the completion of compliance assessments, and its impact on Customs staff resources and length of compliance assessments will need to be monitored. The objective of a follow-up review is to determine if corrective actions noted in the importer’s compliance improvement plan were implemented and whether they were effective in correcting deficiencies. The Regulatory Audit Division Director stated that follow-up reviews are the critical final step of the compliance assessment process and should demonstrate whether compliance assessments are improving importer operating practices, internal controls, and compliance rates. In fiscal year 1998 Customs developed guidance for performing follow-up reviews and performed a limited number. Customs performed seven follow-up reviews in fiscal year 1998, including reviews of three importers originally categorized as high risk and four categorized as moderate risk. The reviews resulted in six importers being recategorized to low risk and one recategorized from moderate risk to high risk. For the importer recategorized from moderate to high risk, Customs found that, among other things, the importer had not fully implemented corrective actions and did not correctly value imported merchandise. Follow-up reviews were included in the annual audit planning process for the first time for fiscal year 1999. As of July 19, 1999, Customs estimated that it would start and/or complete at least 41 follow-up reviews by the end of fiscal year 1999. Improved compliance and increased revenue collection were identified by the Regulatory Audit Division as performance measures for the compliance assessment initiative. However, the Director told us that although these performance measures are important, because of other work priorities and limited staffing, the impact of compliance assessments on improving importer compliance with U.S. 
import laws and regulations and increasing revenue collections had not been determined as of the end of our fieldwork in July 1999. In the absence of a Customs evaluation of the impact that compliance assessments have on importers’ compliance with U.S. laws and regulations, we analyzed compliance rates for all 59 importers that had compliance assessments completed by September 30, 1997, and had received compliance measurement exams in both fiscal year 1996 and fiscal year 1998. Although the number of compliance measurement examinations that these importers received (see app. II) was usually not sufficient to calculate statistically valid compliance rates, the compliance rates serve as an indicator about whether or not overall compliance has improved. Our analysis of all 59 importers showed that compliance rates worsened for 20, improved for 27, and stayed the same for 4. Eight importers already were in full compliance (100 percent compliance) in fiscal year 1996 and stayed that way. The Regulatory Audit Division Director agreed that this analysis, although not based on statistically valid compliance rates, does have some usefulness for evaluating compliance. He further indicated that the Regulatory Audit Division had been giving priority to other activities, such as revising the compliance assessment process, and that he plans to begin focusing on developing a methodology to measure the impact of compliance assessments. A compliance rate analysis similar to the one we performed would be one piece of this methodology, according to the Director. We interviewed nine importers to obtain their views regarding the advantages and disadvantages of the compliance assessment process and to determine whether they had any suggestions for improvement. Eight of the nine importers felt that their import operations benefited as a result of the compliance assessment. 
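The worsened/improved/stayed-the-same comparison described above is a straightforward categorization of each importer's two annual rates. A minimal sketch of that tally follows; the rate pairs in the example are hypothetical, not the actual data for the 59 importers.

```python
from collections import Counter

def categorize_changes(rate_pairs):
    """Tally importers by change in compliance rate between two fiscal years.

    rate_pairs: list of (earlier_rate, later_rate) percentages.
    """
    tallies = Counter()
    for earlier, later in rate_pairs:
        if earlier == 100.0 and later == 100.0:
            tallies["stayed fully compliant"] += 1
        elif later > earlier:
            tallies["improved"] += 1
        elif later < earlier:
            tallies["worsened"] += 1
        else:
            tallies["stayed the same"] += 1
    return tallies

# Four hypothetical importers, one falling into each category.
print(dict(categorize_changes(
    [(80.0, 90.0), (90.0, 70.0), (85.0, 85.0), (100.0, 100.0)])))
```

Applied to the 59 importers' rates, this kind of tally yields the counts cited in the text (27 improved, 20 worsened, 4 unchanged, 8 remained fully compliant).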
Seven importers indicated that the compliance assessment provided an independent review of import operations that identified both strengths and weaknesses in the internal controls, as well as recommendations on how to correct the weaknesses. Two importers indicated that after the compliance assessment, they had more confidence in the quality of their systems. In addition, two importers indicated that they had used their systems, after making any corrections on the basis of the compliance assessment, as the model for import operations at other company divisions or locations. Three other importers said they made organizational changes or increased staffing on the basis of the compliance assessment to better ensure future compliance. One importer felt it had not received any benefits from the compliance assessment. The importer felt that way because it was already highly compliant, as evidenced by the low-risk rating it received from the compliance assessment. Six importers interviewed commented on the length of the assessment; the resultant cost to their operations; and the amount of staff resources dedicated to preparing for, and providing information to, the auditors. Two importers felt that the compliance assessment process should be more standardized because of differences in the process identified from discussions with other importers about their compliance assessments. Three importers indicated that Customs should demonstrate more commitment to working with them, and one importer commented that Customs should be less adversarial during the compliance assessment. It should be noted, however, that assessments performed on companies we interviewed had been completed early in the program when Customs was still designing and refining the basic compliance assessment process. The assessments were also completed before Customs began to revise and expedite the compliance assessment process, as previously discussed. 
Account Management is Customs’ approach to managing its work through accounts—importers—rather than by individual merchandise transactions at the ports of entry. According to Customs, an account manager is to maintain a liaison with the account, provide information under the principle of informed compliance, help ensure uniform treatment of an account’s merchandise at all ports, and help the company identify and resolve any areas of noncompliance. In fiscal year 1997 Customs identified 7,405 major importers as candidates for the account management program. Customs hopes to eventually assign managers to all 7,405 importers depending on availability of staff resources. However, Customs had not developed a plan or time frame for assigning account managers to the importers and had not determined the level of staff resources that would be necessary to manage the accounts. Customs had assigned account managers to 604 importers from fiscal year 1995 through fiscal year 1999. On the basis of current progress and staffing, it will be several years before all candidate accounts are assigned managers. Moreover, Customs may not have enough staff resources to assign account managers to all candidate importers. Customs also had not evaluated whether its investment in the account management program has had any positive impact on improving importers’ compliance rates. Customs had identified several performance measures for the account management program, including increased compliance, uniformity, and customer satisfaction, but was just beginning to develop the methodology for collecting data as of July 1999. Account management is Customs’ approach to viewing an importer (an account) in the aggregate rather than by each merchandise entry transaction. It includes analysis of an account’s compliance nationwide, coordination of all Customs activities involving the account, and identification and resolution of compliance problems. 
Account management also provides a point of contact within Customs to assist the account. The National Account Service Center (NASC) at Customs headquarters is responsible for managing both the national and port account programs. National account managers are devoted full-time to account management and are assigned by NASC to the largest importers. The national account program was prototyped with eight accounts from February 1996 through February 1997 and implemented nationwide in May 1997. As of September 30, 1999, 25 national account managers were assigned an average of 6.2 accounts each, with a range of 2 to 9 accounts. For port account team members, account management is a collateral function. Port account teams are led by import specialists and may include additional import specialists, cargo inspectors, and other personnel. Port accounts are selected by the ports in coordination with NASC and must be approved by NASC. The port account program was prototyped at 12 ports with 12 accounts from February 1997 through August 1997. It was implemented in the prototype ports in October 1997 and in 31 other ports in February 1998. The port account program is conducted at 43 ports designated as “service ports,” which have a full range of cargo-processing functions. The size and composition of port account teams vary on the basis of account size and staff availability, according to the NASC Director. Most teams include a minimum of two import specialists. The team assigned to an importer is to be from one of the top five ports through which the importer enters merchandise on the basis of import value. The account management cycle consists of six steps: 1. selecting an importer and assigning an account manager; 2. contacting the account; 3. developing a profile of the account’s import activities and history; 4.
evaluating the account’s internal controls identified in an internal controls questionnaire completed by the importer, preparing an account action plan, and obtaining Customs and account approval of the action plan; 5. monitoring implementation of the account action plan; and 6. maintaining the account after the action plan items are completed. Maintaining an account (step 6) includes monitoring compliance rates, coordinating outreach/improvement activities, and identifying additional areas for improvement. At this step, the amount of time required by Customs to manage the account is expected to decrease; and the full benefit of account management is expected to be realized because the importer would have adequate internal controls and a high compliance rate, according to the NASC Director. Customs identified the top 378 importers by value of imports as possible candidates to be assigned national account managers. These companies represented 50 percent of the value of imports as of September 30, 1996. The next group of 7,027 companies (ranked 379 to 7,405) were identified as possible candidates to become port accounts because they each imported over $10 million annually. These companies represented the next 32 percent of the value of imports. Within these two groups, Customs prioritizes individual importers for possible assignment of an account manager or team, using a risk score that is based on import value, compliance rate, number of line items, its ranking in the top 250 companies within a PFI, and having at least 50 percent of imports in a PFI. Although NASC selects importers to be assigned national account managers, the ports select importers in coordination with NASC, and these selections must be approved by NASC. NASC has not developed a plan for assigning account managers to all 7,405 candidate accounts, according to the NASC Director. 
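The prioritization described above can be sketched as a simple scoring function. The report lists the factors Customs uses (import value, compliance rate, number of line items, top-250 ranking within a PFI, and whether at least 50 percent of imports fall in a PFI) but not its weights or thresholds, so every weight and cap below is invented for illustration.

```python
def risk_score(import_value, compliance_rate, line_items,
               top_250_in_pfi, pfi_share):
    """Hypothetical prioritization score for assigning account managers.

    Combines the factors the report names; the weights and caps are
    assumptions, not Customs' actual formula.
    """
    score = 0.0
    score += min(import_value / 100_000_000, 10.0)   # import value, capped
    score += (100.0 - compliance_rate) / 10.0        # degree of noncompliance
    score += min(line_items / 10_000, 5.0)           # entry-line volume, capped
    if top_250_in_pfi:                               # top 250 within a PFI
        score += 3.0
    if pfi_share >= 0.50:                            # >= 50% of imports in a PFI
        score += 2.0
    return score

# A large, moderately compliant importer concentrated in one PFI.
print(round(risk_score(500_000_000, 81.0, 20_000, True, 0.8), 1))  # 13.9
```

The point of the sketch is only that higher import value, lower compliance, and PFI concentration all push an importer up the assignment queue, which matches the selection criteria the report describes.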
The Director also told us that the specific level of staff resources necessary to manage all potential candidate accounts had not been determined, but with current resources Customs will not be able to assign account managers to all candidates in the pool. In lieu of an assignment plan, NASC was gradually assigning additional accounts to the national account managers and ports on the basis of their ability to take on additional accounts and on the progress of existing accounts. Customs had established an interim goal of having 600 accounts assigned by the end of fiscal year 1999—200 national and 400 port accounts. As of September 30, 1999, Customs had assigned 156 national and 448 port accounts for a total of 604 accounts (see table 3). The NASC Director cited five factors that hampered the establishment of additional national and port accounts. These factors were: the time required to manage the existing accounts, many of which had not yet reached the maintenance step; the need to revise an internal control evaluation questionnaire given to importers; difficulty persuading importers to sign an account action plan; delayed implementation of the ACE system to manage import activities; and the part-time status of port account management teams, whose members have other duties to perform. Customs’ ability to assign account managers to additional importers was limited, in part, because many of the existing accounts were not yet in maintenance and still required a substantial amount of time to manage, according to the NASC Director. The Director expects the staff resources needed to manage accounts to be less in the maintenance step than earlier in the account management cycle. As shown in table 4, as of March 31, 1999, 46 accounts had reached maintenance, including 21 national accounts and 25 port accounts. In February 1999, Customs established a working group to redesign the internal control evaluation questionnaire so it could be used for both compliance assessments and internal control evaluations of accounts.
This effort was intended to facilitate timely completion of the internal control questionnaire by accounts and to ensure that importers would not be asked to complete two slightly different questionnaires, as had been the practice in the past. At the time of our review, no target date had been established for implementing the new questionnaire. The NASC Director told us that several account managers had experienced significant difficulties and delays in persuading company officials to approve and sign the account action plan. Many importers reportedly believed that the signature made the action plan a contractual agreement, which led to delays while the importers and their attorneys reviewed the plan. Starting in February 1999, NASC made signature by an account official optional, which was intended to eliminate the importers’ concern about a contractual agreement and reduce delays. Delay in developing the ACE system to manage import activities has made preparing account profiles and monitoring accounts more difficult and time-consuming, according to the NASC Director. Under the present computer system, data on imports are captured by port and are not readily available on a nationwide basis. National data on a particular importer are not available without identifying all ports used by the importer and manually combining the data for these ports. Under ACE, nationwide data are to be available on a real-time basis on all importers for use by account managers and other Customs personnel to monitor, for example, national compliance rates for individual importers. Progress of port accounts was also hampered because account team members are part-time and have competing duties, according to the NASC Director. In responding to a survey at the end of the port account prototype, 9 of the 12 port account teams indicated that their other work suffered due to their having to manage the port accounts.
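The manual, port-by-port aggregation required under the pre-ACE system can be sketched as follows; the port names and entry counts are invented for illustration and are not actual Customs data:

```python
# Hypothetical sketch: before ACE, import data were captured per port, so a
# national view of one importer had to be assembled by hand from each port's
# records. All figures below are illustrative.
port_records = {
    "Seattle":     {"entries": 400, "compliant": 380},
    "Los Angeles": {"entries": 600, "compliant": 540},
    "Newark":      {"entries": 200, "compliant": 190},
}

# Combine the per-port data to derive a national compliance rate.
total_entries = sum(r["entries"] for r in port_records.values())
total_compliant = sum(r["compliant"] for r in port_records.values())
national_compliance_rate = 100 * total_compliant / total_entries
print(f"{national_compliance_rate:.1f}%")
```

Under ACE, this aggregation would be performed automatically and made available in real time, rather than requiring an account manager to first identify every port the importer uses.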
In November 1998, NASC identified 12 “problem ports” where it considered progress with the port account program to be slow, and it imposed a temporary freeze on establishing additional accounts at those ports. According to the NASC Director, NASC staff visited many of these ports to encourage them to devote additional staff hours to port account management, take on additional port accounts, and/or do a better job reporting on port account activities. Customs officials anticipate that as the port account management program matures, port account managers will view it as a better way of doing their jobs because it will allow them to look at their work in the aggregate, not transaction by transaction. In addition, the officials believed that port account management will also assist port account managers in focusing their efforts in the areas determined to be noncompliant. The national account program was implemented in fiscal year 1997, with 25 full-time national account managers. Customs originally hoped to increase the number of national account managers to 100 in order to manage 1,000 accounts (about 10 accounts per account manager). Because Customs was not able to obtain funding to increase the number of national account managers, it reduced the number of potential national accounts from 1,000 to 378. Customs’ first two attempts to obtain funding to increase the number of national account managers were unsuccessful. Customs requested 80 additional national account managers in its fiscal year 1999 budget submission. The request was reduced to 50 by the Treasury Department and ultimately disapproved by the Office of Management and Budget. For fiscal year 2000, Customs requested 50 additional national account managers, but the Treasury Department did not approve the increase. Customs again planned to request 50 additional national account managers in its fiscal year 2001 budget submission. 
On the basis of current staffing, it is uncertain whether Customs has enough import specialists to assign to port account teams to manage many of the 7,027 candidate port accounts. As of December 31, 1998, Customs had a total of 1,002 import specialists based at the ports in the port account program. Dividing 7,027 candidate accounts by 1,002 import specialists means that, if each team had only one import specialist, each specialist would need to serve on about 7 teams. Because a team normally has at least 2 import specialists, each import specialist would need to serve on about 14 teams, in addition to performing other duties. This is in sharp contrast to full-time national account managers, who were assigned an average of 6.2 accounts. In addition, Customs had no system for establishing accounts at the various ports. According to the NASC Director, the ports were initially allowed to request accounts without NASC guidance on how many accounts a port should be able to manage on the basis of staffing, workload, or any other criteria. Since January 1999, only ports where the number of import specialists was greater than the number of accounts were allowed to assign additional port accounts. The total number of accounts at these ports was limited to one account per import specialist. To determine whether a difference existed in the ratio of import specialists to port accounts at the various ports, and whether the difference had decreased after the new policy limiting assignment of additional port accounts took effect, we compared the average number of import specialists per account as of both December 31, 1998, and September 30, 1999. As of December 31, 1998, we found that the average ranged widely: for example, Blaine, WA, had 16 import specialists and 1 port account; Charleston, SC, had 13 import specialists and 12 port accounts. Appendix III shows the number of import specialists, the number of accounts, and the average number of import specialists per account at each port.
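The staffing arithmetic above can be restated as a short calculation; the account and specialist counts are taken from the report, and the minimum of two specialists per team is the report’s stated norm:

```python
candidate_port_accounts = 7027   # companies ranked 379 to 7,405
import_specialists = 1002        # at ports in the program, as of Dec. 31, 1998
min_specialists_per_team = 2     # a team normally has at least 2 specialists

# Each account gets one team, and each team needs at least two specialists,
# so the team memberships to fill equal accounts * specialists-per-team.
teams_per_specialist = (candidate_port_accounts * min_specialists_per_team) / import_specialists
print(round(teams_per_specialist))  # about 14 teams per specialist
```

The result, roughly 14 part-time team assignments per import specialist on top of other duties, is the basis for the contrast with full-time national account managers, who averaged only 6.2 accounts each.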
From January through September 30, 1999, 190 additional accounts were assigned to 36 ports. These assignments were consistent with the new policy in most of the ports, and the difference was reduced as shown in appendix III. NASC identified increased compliance, uniformity of entry summary reviews among import specialists and/or among ports, and customer satisfaction as account management performance measures in the August 1998 Account Management Standard Operating Procedures. However, as of July 1999, NASC was just beginning to develop the methodology for collecting data. According to the NASC Director, the delay was due to lack of staff resources and to staff turnover. To assess the impact on importer compliance with U.S. laws and regulations, NASC had planned to analyze the compliance rate of accounts within the account management program from year to year. NASC was working with the Analytical Development Division to develop a methodology for measuring account compliance, according to the NASC Director. No target date had been established for completing this methodology or for its implementation as of July 1999. NASC was in the process of developing a method to ensure the uniform treatment of merchandise imported by port accounts by sampling entry summary reviews for port accounts. Transactions from selected port accounts throughout the country would be reviewed to ensure that all ports were treating merchandise uniformly no matter through which port it entered. According to the NASC Director, the methodology was to be developed by October 1999 and implemented in January 2000. To obtain feedback on customer satisfaction, the NASC Director told us that he had begun meeting individually with importer officials. NASC had originally considered an annual customer satisfaction survey but decided to conduct interviews instead. 
We interviewed nine importers to obtain their views on the advantages and disadvantages of account management and to determine whether they had any suggestions for improvement. All nine importers indicated that they liked the account management concept, viewed it as a clear indicator of Customs’ commitment to work with the trade community, and had benefited from having an account manager. Specifically, the account manager served as a conduit of information about new Customs regulations and programs and about the results of Customs’ cargo examinations. Six importers had asked their account managers to resolve problems at a particular port or ports regarding the entry of merchandise, and they generally felt that the account managers had been fully responsive. None of the importers interviewed cited any disadvantages to being assigned an account manager, and all importers indicated that if given a choice they would opt to continue to participate in the program. Six importers had suggestions for improving the account management process. Four importers felt that they would benefit more from account management if their account managers were based closer to them. In one case the importer reported that it had requested and had been assigned an account manager based in the same city. One importer indicated that to better ensure uniform treatment by the various ports, account managers should be given authority to resolve disputes about entry classification, value, and other issues. One importer felt that it would have been more beneficial if the account manager had been assigned during or immediately after the compliance assessment to work on corrective actions, instead of 5 months after the compliance assessment was completed. Customs, according to its Trade Compliance Risk Management Process publication, may use informed or enforced compliance to ensure that importers comply with U.S. trade laws and regulations. 
We analyzed two of six Customs actions designed to address noncompliance within the informed and enforced compliance framework—the Multi-port Approach to Raise Compliance by the year 2000 (MARC 2000) and the Company Enforced Compliance Process (CECP)—and found that Customs’ efforts to raise overall compliance rates for importers in selected industries had mixed results. Customs’ trade compliance process has for years consisted of activities ranging from preimportation analysis through cargo arrival, examination, release, revenue collection, investigation, fines, penalties and forfeitures, and archival of trade data. Though these activities continue to the current day, the 1993 Mod Act led Customs to change the focus of its trade compliance process from a transaction-by-transaction based system to an account, or company/importer, based process. As part of its effort to make Mod Act-induced changes, Customs established a Risk Management Process to best allocate available resources to trade priorities. Customs concentrated on identifying industries and/or importers that represented the greatest risk of noncompliance and on taking the appropriate action to remedy the situation. According to Trade Compliance Risk Management Process, Customs’ risk management process consists of four key steps: (1) collecting data and information, (2) analyzing and assessing risk, (3) prescribing and taking action, and (4) tracking and reporting results. Customs relies on established programs, such as compliance measurement, compliance assessment, and account management, to collect data and information necessary to identify noncompliant industries and importers. After detecting and identifying the sources of noncompliance and analyzing and assessing the risk of continued trade violations, Customs decides what informed or enforced action is warranted and what resources are needed to address the problems. 
Over the last few years, Customs has developed a variety of tools, including MARC 2000 and CECP, to maximize trade compliance through an approach of both informed and enforced compliance. Customs, in fiscal year 1997, initiated the MARC 2000 project to raise compliance of targeted industries within the trade community. MARC 2000 evolved from a 9-month pilot program in fiscal year 1996, consisting of 12 ports working independently to raise the compliance of locally selected imports. After the pilot program, MARC 2000 involved multiple ports with common compliance issues that joined together to formulate and implement a national plan designed to raise compliance within four industries: bearings, gloves, production equipment, and automobiles. Customs also initiated plans to include four other industries—lighting fixtures, plastics, headgear, and express consignment facilities—in MARC 2000. The informed compliance aspect of MARC 2000 included outreach efforts, such as seminars, importer counseling, presentations at association meetings, and publication dissemination to the targeted industries. In its fiscal year 1998 MARC 2000 Annual Report, Customs reported mixed results that did not clearly indicate success or failure. Fiscal year 1998 compliance rates for bearings and certain components of production equipment increased over fiscal year 1996 baseline compliance rates. Compliance rates for gloves and automobiles, however, fell below fiscal year 1996 baseline rates. Fiscal year 1998 compliance rates for these industries were all below the prior year’s (fiscal year 1997) compliance rates (see table 5). Furthermore, fiscal year 1998 compliance rates for these industries were all below Customs’ 95 percent compliance goal for PFIs. Customs’ fiscal year 1998 MARC 2000 Annual Report indicated that it would continue the program in fiscal year 1999 with some modifications.
For example, Customs was to expand the focus in production equipment from presses and molds to welding equipment. Additionally, only those ports with an auto industry compliance rate below 90 percent were to continue conducting the automobile action. The remaining ports were to monitor auto industry compliance through continued compliance measurement. Finally, Customs was to address the possibility of requiring noncompliant bearings importers to pay duties, fees, and taxes prior to cargo release. The report stressed that enforced compliance actions were to occur when appropriate. According to Trade Compliance Risk Management Process, Customs determines whether to use informed or enforced compliance by taking into account the nature, scope, and impact of noncompliance. There are times when the informed compliance approach is not appropriate. After ongoing informed compliance efforts have failed, if voluntary compliance has not been achieved and repetitive compliance problems continue, Customs may take enforced compliance actions against violators. Examples of enforced compliance actions include initiating an investigation when criminal activity is suspected; seizing illegal cargo; making arrests when warranted; issuing penalties prescribed by regulation; requiring the payment of duties, fees, and taxes before cargo is released; and conducting additional compliance examinations. According to Customs, enforcement actions such as seizure and investigation are reserved for those instances of egregious violations; fraud; or ongoing, repetitive violations that could not be resolved through informed compliance. Customs began CECP in March 1998 to identify, target, and take action against individual importers with the most serious ongoing compliance problems. 
Under CECP, Customs monitors compliance measurement rates for major importers and develops in-depth reviews for those companies whose compliance measurement rates are below 90 percent in order to determine what should be done to address the continued noncompliance. Customs designates importers with continuously low compliance that have not made progress in existing compliance programs as “confirmed risk.” Customs begins enforced compliance action against importers designated as confirmed risk. Customs initially identified 32 companies with compliance rates below 90 percent and designated 4 of the 32 with stagnating or deteriorating compliance rates as confirmed risk on the basis of their fiscal year 1997 compliance rates. Customs provided the companies written notification indicating their confirmed risk status and subjected them to increased compliance measurement examinations for up to 7 months. Three of the importers ended fiscal year 1998 with compliance rates slightly above the fiscal year 1997 rates. The fourth importer’s fiscal year 1998 compliance rate dropped nearly 13 percent below its fiscal year 1997 rate. A preliminary review of the first two quarters of fiscal year 1999 compliance measurement data, however, indicated that the fourth importer’s compliance rate reached 100 percent. The other three importers’ compliance rates remained below 90 percent (see table 6). According to Customs, no other enforcement action had been taken against the confirmed risk importers because the companies were making progress. In September 1999, Customs recommended that the confirmed risk designation be dropped from three of the four companies. Customs will make its final decision and inform the companies of their new status in December 1999. By the end of fiscal year 1998, Customs, using CECP, identified 128 importers, including the 32 initially identified, with compliance rates below 90 percent for at least 1 fiscal year. 
Customs then determined which were the largest importers most likely to have a significant impact on industry compliance rates once they became compliant. After making its determination, Customs provided a list of 43 importers to Strategic Trade Centers, Customs Management Centers, assistant port directors, account managers, and members of the Strategic Planning Board responsible for recommending an enforced compliance action, among others, for review and feedback. Customs also generated and circulated a Trade Compliance Analytical Review (TCAR) containing compliance rates, compliance assessment results, descriptions of violations, and a recommended level of compliance measurement examinations for each of the 43 selected importers. Customs’ Strategic Planning Board, consisting of representatives from the Office of Strategic Trade, Office of Field Operations, Office of Investigations, and others, met on March 11, 1999, to determine and recommend compliance actions for the 43 importers. The Strategic Planning Board recommended a variety of actions, including increased compliance measurement examinations, referrals to ports for action, and continued monitoring through compliance examinations. The Strategic Planning Board did not recommend imposing any penalty enforcement actions, such as seizures or fines. According to Customs, the Strategic Planning Board makes subjective determinations, without specific criteria, when determining the course of action to improve importer compliance. The Strategic Planning Board relies on feedback provided by account managers, port account team leaders, and assistant port directors; analytical information contained in the TCAR reports; and discussions about importer progress towards improved compliance when deciding what enforcement actions, if any, to recommend. 
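The CECP screening described above reduces to a simple decision rule. The 90-percent threshold is from the report; the year-over-year trend test below is a hypothetical simplification of what the report describes as Customs’ subjective, criteria-free judgment:

```python
# Sketch of the CECP screen: a compliance rate below 90 percent triggers
# in-depth review; a low rate that has stagnated or deteriorated year over
# year is designated "confirmed risk". The trend test is a hypothetical
# simplification of Customs' judgment, which the report says has no
# specific criteria.
def cecp_status(rates_by_year):
    years = sorted(rates_by_year)
    latest = rates_by_year[years[-1]]
    if latest >= 90:
        return "compliant"
    if len(years) > 1 and latest <= rates_by_year[years[-2]]:
        return "confirmed risk"   # stagnating or deteriorating
    return "in-depth review"      # low but improving

print(cecp_status({1997: 85, 1998: 82}))  # deteriorating -> "confirmed risk"
print(cecp_status({1997: 85, 1998: 88}))  # low but improving -> "in-depth review"
print(cecp_status({1997: 85, 1998: 95}))  # -> "compliant"
```

This matches the pattern in the report’s account of the four confirmed-risk importers: designation followed stagnating or deteriorating rates, and improvement was the stated reason no further enforcement action was taken.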
According to Customs, the Strategic Planning Board had not recommended enforcement actions such as seizures or fines against noncompliant importers identified through CECP because their trade violations were not significant enough to warrant such responses. Significant and willful violations such as narcotics smuggling and fraud have, of course, always been and will continue to be enforced in the traditional fines, penalties, and forfeitures environment outside of CECP. Under the Results Act, executive agencies are to develop strategic plans in which they, among other things, define their missions, establish results- oriented goals, and identify strategies they plan to use to achieve those goals. In addition, agencies are to submit annual performance plans covering the program activities set out in the agencies’ budgets (a practice that began with plans for fiscal year 1999). These plans are to describe the results the agencies expect to achieve with the requested resources and indicate the progress the agency expects to make during the year in achieving its strategic goals. Earlier this year, we testified that the strategic plan developed by the Customs Service addressed the six requirements of the Results Act. The plan’s goals and objectives covered Customs’ major functions—processing cargo and passengers entering and cargo leaving the United States. The plan discussed the strategies by which Customs hopes to achieve its goals. The strategic plan discussed, in very general terms, how it related to annual performance plans. It also contained a listing of program evaluations used to prepare the plan and provided a schedule of evaluations to be conducted in each of the functional areas. In addition to the required elements, we testified that Customs’ plan discussed the management challenges it was facing in carrying out its core functions, including information and technology, finance, and management of human capital. 
We concluded that the plan did not, however, adequately recognize several issues that could affect the reliability of Customs’ performance data, such as needed improvements in financial management and internal control systems. Along these lines, Customs’ fiscal year 2000 budget justification states that Customs needs to reassess a number of the performance goals. The justification also states that Customs will continue to refine its compliance measurement program in order to improve voluntary compliance. The justification also states that although Customs did not meet 12 of its 17 performance goals, it does not plan to change its basic approach to improving compliance, concluding that the performance goals that were established were too ambitious for the resources available. The justification does not, however, contain any plans for Customs to evaluate its approach to improving compliance, including the initiatives and actions that implement the informed compliance strategy: information programs, compliance measurement, compliance assessment, account management, and responses to noncompliance by importers. Customs will not be able to set realistic goals without the results of evaluations. The Mod Act represented a significant change in how Customs relates to the importing trade community. For over 200 years, Customs and the importing trade community had an enforced compliance relationship based on transaction-by-transaction scrutiny for compliance with trade laws. With passage of the Mod Act, Customs began to focus on informed compliance by importers, rather than the enforced compliance emphasis of the past. Although Customs has implemented five key initiatives and actions that constitute its informed compliance strategy, three of them are lagging in terms of the level of activity originally expected. Compliance rates, used to measure the effectiveness of these initiatives and actions, are showing no measurable improvement. 
Although Customs has monitored and evaluated certain aspects of the initiatives and actions, it has not evaluated, nor does it have a plan to evaluate, the impact on compliance of the overall informed compliance strategy. A properly designed and implemented evaluation would enable Customs to determine whether the overall informed compliance strategy is working and determine what contributions the initiatives or actions are making. This seems especially important since Customs may not be able to reach its goals in terms of coverage for the compliance assessment and account management initiatives. Given that both initiatives may stay far smaller than originally envisioned, it is important to determine what effect they are likely to have on compliance rates with the importer coverage they can reasonably achieve. Under the Results Act, agencies are to assess their performance against their goals and determine, for goals not achieved, whether the goals were too high, resources too scarce, or agency efforts too ill-managed. Customs has adjusted its compliance goals to reflect a 4-year delay because, according to Customs, the established goals were too ambitious for the resources available. An evaluation of the informed compliance initiatives and actions could provide Customs with the information it needs to maximize the use of the resources available for this program by enhancing what works and reducing or eliminating what does not. It could also provide the information needed for Customs to establish reasonable goals for the program. We recommend that the Commissioner of Customs develop and implement an evaluation of the effectiveness of its informed compliance strategy. We requested comments on a draft of this report from the Secretary of the Treasury. In a letter dated November 11, 1999, the Customs Service’s Director of the Office of Planning provided us with comments on the draft, which we have reprinted in appendix IV. 
Customs’ primary focus concerned the report’s recommendation, which Customs felt should be clarified to focus on the five compliance programs targeted by the report, and not on the entire broad piece of legislation that is the Mod Act. If the phrase “and the specific initiatives and actions it developed to implement the Mod Act…” were omitted from the draft recommendation, Customs believed it would be able to better target its response to the issues raised in the report. We agree with Customs and omitted the phrase from the recommendation to ensure Customs’ focus on evaluating its informed compliance strategy and not other parts of the Mod Act. Customs also believed that the report should recognize that its informed compliance efforts have been continually evaluated and refined, but that our report conveys the opposite impression. Customs also stated that many monitoring and evaluation efforts are under way, and major component areas of informed compliance will continue to be analyzed and assessed. It said enhancements to programs and processes will also be implemented as appropriate. We stated that “While Customs has monitored and evaluated certain aspects of the initiatives and actions, it has not evaluated nor does it have a plan to evaluate the impact on compliance of the overall informed compliance strategy.” We agree with and support Customs’ ongoing monitoring, evaluation, and enhancement efforts of its many programs, including those related to informed compliance activities. However, we continue to believe that an evaluation, under the Results Act umbrella, of the initiatives and actions that implement the informed compliance strategy is necessary for Customs to be able to set realistic performance goals for improving importers’ compliance rates. Moreover, this evaluation could identify the contribution of each initiative and action toward achieving the overall goal of the informed compliance strategy and improving importers’ compliance rates.
In addition, Customs stated that the report gives the impression that as the compliance rates have not risen to the levels anticipated, there is something inherently wrong with the informed compliance approach. Customs also stated that it believes there is a value to informed compliance above and beyond raising compliance, as comments from several importers that we interviewed indicated. We have not concluded that there is something inherently wrong with the informed compliance strategy and did not intend to give that impression. We stated in our conclusions section on page 37 that compliance rates, used to measure the effectiveness of informed compliance initiatives and actions, are showing no measurable improvement and that a properly designed and implemented evaluation could determine whether the overall informed compliance strategy is working and what contributions the initiatives or actions are making. If, after such an evaluation, Customs determines that one or more of the initiatives are not making substantial contributions to the overall goal of raising importers’ compliance rates, then either part or all of the informed compliance strategy should be reexamined at that time. In addition, we included comments from the major importers to show that there was indeed value to the informed compliance program, notwithstanding our concerns about the lack of progress in producing the benefits expected from the program. Customs also raised concerns about the correlation we make between the compliance assessment and its impact on compliance as indicated by an analysis of 59 importers (see p. 21). Customs believes that it is premature to draw any conclusions regarding the link between compliance assessments and compliance measurement because the programs measure different areas of compliance. 
Customs also believes that our conclusion that compliance assessments may not have improved compliance based on a drop in fiscal year 1998 compliance rates is premature and not sufficiently supported. Customs does not feel that sufficient analysis has been done to lead to that conclusion and requests that the analysis of compliance rates of 59 importers, many of which are not statistically valid, be removed from the report as the support for drawing the conclusion. In addition to the written comments from Customs on the results of our analysis of 59 importers and the impact on compliance from their compliance assessments, we had several discussions with Customs officials on this issue. Specifically, as further clarification on this issue, the officials believed that (1) because most of the compliance rates in our analysis are not statistically valid, we should reconsider using them as a basis for indicating the impact of compliance assessments; (2) it is premature to draw any conclusions regarding the link between compliance assessments and compliance measurement; and (3) compliance determined under a cargo examination (compliance measurement) is not identical to compliance as a result of a compliance assessment. The officials pointed out that, for example, the compliance assessment may conclude that an importer is not compliant because of unreported value in its merchandise. This is determined through an examination of the importer’s books and records. On the other hand, the officials noted that compliance measurement examinations may determine that an importer is not compliant because of inaccurate marking of merchandise. This would be determined by physical inspection of the merchandise, which could not be determined during a compliance assessment. 
As noted on page 21 of the report, although most of the compliance rates in our analysis are not statistically valid, they continue to provide an indicator about whether or not overall compliance improved at importers that had received compliance assessments. In addition, the Regulatory Audit Division Director agreed that our analysis, although not based on statistically valid compliance rates, does have some usefulness for evaluating compliance. As we also noted on page 16 of the report, a compliance assessment is a review to ensure that a company’s imports are in compliance with U.S. laws and regulations, the goal being to ensure maximum compliance. Although a compliance assessment involves reviewing a company’s books and records, it also involves statistical sampling of entry transactions, including a minimum review of compliance in five trade areas, such as classification, value, and quantity. This procedure appears to establish the link between compliance assessment and compliance measurement, since compliance assessment findings are used to determine the frequency of future compliance measurement examinations. It also appears that compliance measurement results could and should be used to analyze the impact of compliance assessments. As our limited analysis showed on page 21, compliance measurement rates serve as an indicator of whether or not overall compliance has improved. We have also included in the final report technical comments and suggestions from Customs as appropriate. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days after its issue date. At that time, we will send copies of this report to the Honorable Sander M. Levin, Ranking Minority Member of your Subcommittee; the Honorable Raymond Kelly, Commissioner of Customs; and Mr. Robert Trotter, Customs’ Assistant Commissioner for Strategic Trade. 
The major contributors to this report are acknowledged in appendix V. If you or your staff have any questions on this report, please call Darryl Dutton at (213) 830-1000 or me at (202) 512-8777. To review the status of Customs’ implementation of the informed compliance strategy developed in response to the Mod Act and to determine the extent to which trade compliance under the new program had improved, we concentrated on five key initiatives. For overall program information, we interviewed key Customs officials from the Office of Strategic Trade and Office of Regulations and Rulings. We obtained background material on the Mod Act from these two offices and from the Office of Field Operations and Office of the Chief Counsel. We also obtained and reviewed the background and legislative history of the Mod Act. We obtained numerous documents from the key Customs offices mentioned above, including the Customs Modernization Act Guidebook; the Trade Compliance Road Map; the U.S. Customs Service Strategic Plan, fiscal years 1997-2002; U.S. Customs Service Accountability Report, fiscal years 1995-1998; Trade Compliance Measurement Report, fiscal years 1995-1998; Trade Compliance and Enforcement Plan, fiscal years 1995-1998; and Trade Compliance Risk Management Process. In addition to these background and planning documents, we obtained more specific documents and conducted additional interviews concerning each of the five initiatives as discussed below. To examine Customs’ information programs portion of its informed compliance strategy, we began by reviewing the May 20, 1996, Commissioner’s Informed Compliance Strategy. This document describes the basic and targeted information programs and their components. Using this document as a guide, we analyzed the information that Customs disseminated by various methods, including the Internet and CEBB. We also obtained lists of headquarters-sponsored seminars and other informed compliance outreach activities. 
To obtain information on informed compliance outreach efforts at the Ports of Seattle and Los Angeles/Long Beach, we interviewed key officials and obtained selected documents. The documents included Seattle Trade Talk newsletter and Port of Los Angeles Public Bulletins. We also obtained lists of seminars and other local outreach efforts. We selected Seattle for review because Customs officials told us that it had been involved in numerous pilot projects concerning implementation of the informed compliance strategy. We selected Los Angeles/Long Beach because of its proximity to the Long Beach Strategic Trade Center, where much of our fieldwork was conducted, and because it is a major port, through which a large volume of imported merchandise enters the United States. To identify the impact that the informed compliance program has had on levels of importer compliance, we obtained and analyzed the Trade Compliance Measurement Reports for fiscal years 1995 to 1998. We interviewed key Customs headquarters officials responsible for the compliance measurement program and discussed program results with them. Because compliance measurement is a process based on physical inspections of merchandise and/or entry summary documentation reviews to determine compliance rates, we assessed the reliability of the data used to make the compliance rate determinations. We interviewed officials from Customs’ Office of Information Technology, which manages ACS, Customs’ primary data collection and import processing system. The officials explained and documented how the data are entered into the system and the uses of the data. We did not verify or validate the data through any data testing, but we did discuss the reliability of the data with Office of Information Technology officials. 
The officials explained the logic and the different edit checks used to scrutinize the data from the time they are initially entered into the system by importers or brokers, to the time they enter the statistical programs that select merchandise or entry summaries for examination. We assessed these data systems as sufficiently reliable for use in this report. In order to evaluate the statistical sampling methods that Customs used to generate compliance rates, we interviewed statisticians in the Office of Strategic Trade, and we reviewed descriptions of the statistical sampling methodology provided in Customs publications and internal memoranda. Our interviews and examinations of the written materials gave us an understanding of the sampling design and variance estimation procedures used in the sampling plan. However, our review did not include an examination of Customs’ computer software to determine whether the software executed the same procedures that were described to us. We assessed Customs’ statistical sampling methodology as being reasonable and adequate for the purpose of generating compliance rates. To determine the status of Customs’ compliance assessment initiative, we interviewed headquarters officials from the Regulatory Audit Division, the organization that conducts the compliance assessments. We discussed the initiative’s goals and the timeliness of the assessment process. We reviewed pertinent policy and procedure documents, including criteria for selecting importers to receive compliance assessments. We also analyzed data concerning the amount of time it took to complete each assessment, and the number of compliance assessments completed by March 31, 1999. To measure the impact of compliance assessments on importers’ compliance rates, we analyzed data on importer compliance rates for fiscal years 1996 and 1998. 
These data were for 59 importers on which compliance assessments had been completed by the end of fiscal year 1997 and that had received compliance measurement exams in both years. We obtained and compared compliance rate data for fiscal year 1996, the first year that company-specific compliance data were available; and for fiscal year 1998, the year after all 59 compliance assessments were completed. We analyzed these data to determine whether compliance rates had gone up, gone down, or stayed the same for importers that had received compliance assessments. To determine the status of the account management initiative, we interviewed headquarters officials, including the Director of the National Account Service Center and national and port account coordinators. We inquired about the goals of the initiative, its progress, and whether any factors were hampering progress. We also interviewed a national account manager and six port account team leaders at the Los Angeles International Airport and the Los Angeles/Long Beach Seaport. We selected these facilities because of their proximity to the Long Beach Strategic Trade Center, where much of our fieldwork was conducted, and because they are major ports through which large volumes of merchandise enter the United States. We also reviewed pertinent policies and procedures, including criteria by which Customs selects importers to be assigned account managers. We collected and analyzed data on the number of national and port accounts as of September 30, 1999; the fiscal year each account was first assigned an account manager; and the progress of each selected importer through the account management process as of March 31, 1999. To examine Customs’ actions to address noncompliance, we analyzed two of six options available within informed and enforced compliance that are described in Customs’ Trade Compliance and Risk Management Process— MARC 2000 and CECP. 
We selected these two programs because they were fully implemented and the data available for analysis were more manageable than for the other options. Time constraints also influenced our selection. We reviewed the fiscal year 1998 MARC 2000 Annual Report and discussed the results with Office of Strategic Trade headquarters officials. We also analyzed MARC 2000 data provided by the Los Angeles Strategic Trade Center and the South Pacific, Mid-America, Gulf, and South Atlantic Customs Management Centers. We also reviewed Trade Compliance Analytical Reviews and Strategic Planning Board minutes, and we analyzed CECP data provided by the Office of Strategic Trade. To determine the views of importers toward Customs’ basic and targeted information, compliance assessment, and account management initiatives, we interviewed nine importers. These importers were judgmentally selected from the population of 30 importers that had (1) a compliance assessment completed by the end of fiscal year 1997 and (2) an account manager assigned by March 10, 1998. We used the cut-off dates to allow sufficient time for the importers to take corrective action, if indicated, after completion of the compliance assessment and for the importers to have at least 1 year of experience with their account managers. We contacted 15 of the 30 importers to request an interview; 9 of the 15 agreed to be interviewed under conditions of anonymity, to which we agreed. We performed our work between June 1998 and September 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of the Treasury. The Customs Service’s Director of the Office of Planning provided written comments that are discussed at the end of the letter and are reprinted in appendix IV. 
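The importer-level analysis described in this methodology section is essentially a paired before-and-after classification of compliance rates: for each importer with rates in both fiscal years, determine whether its rate went up, went down, or stayed the same. The following is a minimal sketch of that kind of tally; the importer names and rate values are illustrative assumptions, not actual Customs data:

```python
# Classify each importer's compliance-rate change between two fiscal years.
# Input maps an importer to a (before, after) pair of compliance rates.
# The importers and rates below are illustrative, not Customs records.

def classify_changes(rates_by_importer):
    """Count how many importers' rates went up, went down, or stayed the same."""
    summary = {"up": 0, "down": 0, "same": 0}
    for importer, (before, after) in rates_by_importer.items():
        if after > before:
            summary["up"] += 1
        elif after < before:
            summary["down"] += 1
        else:
            summary["same"] += 1
    return summary

rates = {
    "Importer A": (0.81, 0.88),  # rate improved
    "Importer B": (0.90, 0.85),  # rate declined
    "Importer C": (0.77, 0.77),  # rate unchanged
}
print(classify_changes(rates))  # {'up': 1, 'down': 1, 'same': 1}
```

A real analysis would also need a rule for importers missing a rate in either year, since only importers examined in both fiscal years can be compared.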
In addition to the persons named above, James Bancroft, Gretchen Bornhop, Carla Brown, Michael Kassack, Sidney Schwartz, Barry Seltser, Michele Tong, and Bonita Vines made key contributions to this report. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the Customs Service's modernization efforts, focusing on: (1) the status of Customs' implementation of the informed compliance strategy; and (2) whether trade compliance under the new program had improved. GAO noted that: (1) compliance data suggest the key initiatives and actions that make up Customs' informed compliance strategy have not yet produced the benefits that were expected; (2) among the reasons for these results may be that Customs has not implemented three of the key initiatives and actions to the extent or at the pace that it had expected; (3) two of the five are fully operational; (4) three have been implemented but have not yet reached many of the intended importers; (5) in responding to noncompliant importers, Customs has had limited success in increasing compliance; (6) its efforts to raise compliance rates in selected industries led to an initial increase in the rates, followed by a decrease, and ended with the fiscal year 1998 compliance rates falling below Customs' goal; (7) Customs cited the lack of sufficient staff resources as a major reason for shortfalls in implementing the compliance assessment and account management programs to the extent or at the pace intended; (8) Customs also noted that as it implemented the compliance measurement system and introduced new analytical tools, staff have become more astute at finding noncompliance; (9) although Customs has monitored and evaluated certain aspects of the key initiatives and actions, it has not evaluated, nor does it have a plan to evaluate, the impact on compliance of the overall informed compliance strategy; (10) however, such an evaluation seems appropriate to address the concerns raised by GAO's analysis of the impact of the compliance assessment initiative on the compliance rates for 59 importers; (11) the overall improvement in these importers' compliance rates after compliance assessment was less than Customs expected; 
and (12) the limited extent or pace of implementation of some aspects of the strategy and GAO's findings concerning compliance rates for the 59 importers raise fundamental questions about the informed compliance strategy.
Approximately 65,000 Medicaid beneficiaries in the five selected states visited six or more doctors to acquire prescriptions for the same type of controlled substances during fiscal years 2006 and 2007. These individuals incurred approximately $63 million in Medicaid costs for these drugs, which act as painkillers, sedatives, and stimulants. In some cases, beneficiaries may have a justifiable reason for receiving prescriptions from multiple medical practitioners, such as visiting specialists or several doctors in the same medical group. However, our analysis of Medicaid claims found that at least 400 of them visited 21 to 112 medical practitioners and up to 46 different pharmacies for the same controlled substance. In these situations, Medicaid beneficiaries were likely seeing several medical practitioners to support and disguise their addictions or to sell their drugs fraudulently. Our analysis understates the number of instances and dollar amounts involved in the potential abuse related to multiple medical practitioners. First, the total we found does not include related costs associated with obtaining prescriptions, such as visits to the doctor’s office and emergency room. Second, the selected states did not identify the prescriber for many Medicaid claims submitted to CMS. Without such identification, we could not always identify, and thus include, the number of unique doctors for each beneficiary that received a prescription. Third, our analysis did not focus on all controlled substances, but instead targeted 10 types of the most frequently abused controlled substances. Table 1 shows how many beneficiaries received controlled substances and the number of medical practitioners that prescribed them the same type of drug. 
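The doctor-shopping indicator described above amounts to counting the distinct prescribers per beneficiary for the same type of controlled substance and flagging those at or above a threshold. Below is a minimal sketch of that kind of count; the claim records, field layout, and drug-class labels are illustrative assumptions, not actual Medicaid claims data:

```python
# Flag beneficiaries whose claims show a threshold number (e.g., six) of
# distinct prescribers for the same type of controlled substance.
# The claim tuples below are illustrative; real state Medicaid claim
# layouts differ and often have blank prescriber fields.
from collections import defaultdict

def flag_doctor_shopping(claims, threshold=6):
    """claims: iterable of (beneficiary_id, drug_class, prescriber_id)."""
    prescribers = defaultdict(set)
    for beneficiary, drug_class, prescriber in claims:
        if prescriber:  # skip claims where the prescriber field is blank
            prescribers[(beneficiary, drug_class)].add(prescriber)
    # Keep only (beneficiary, drug class) pairs at or above the threshold.
    return {key: len(docs) for key, docs in prescribers.items()
            if len(docs) >= threshold}

claims = [("B1", "opioid", f"D{i}") for i in range(7)] + \
         [("B2", "opioid", "D1"), ("B2", "opioid", "D2")]
print(flag_doctor_shopping(claims))  # {('B1', 'opioid'): 7}
```

As the report notes, blank prescriber fields in state claims data would cause a count like this to understate the true number of prescribers per beneficiary.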
We found that 65 medical practitioners and pharmacies in the selected states had been barred or excluded from federal health care programs, including Medicaid, when they wrote or filled Medicaid prescriptions for controlled substances during fiscal years 2006 and 2007. Nevertheless, Medicaid approved the claims at a cost of approximately $2.3 million. The offenses that led to their exclusion from federal health programs included Medicaid fraud and illegal diversion of controlled substances. Our analysis understates the total number of excluded providers because the selected states either did not identify the prescribing medical practitioner for many Medicaid claims (i.e., the field was blank) or did not provide the taxpayer identification number for the practitioner, which was necessary to determine whether a provider was banned. Our analysis matching Medicaid claims in the selected states with SSA’s DMF found that controlled substance prescription claims for over 1,800 beneficiaries were filled after the beneficiaries died. Even though the selected state programs stated that beneficiaries were promptly removed from Medicaid following their deaths based on either SSA DMF matches or third-party information, these same state programs paid over $200,000 during fiscal years 2006 and 2007 for postdeath controlled substance prescription claims. In addition, our analysis also found that Medicaid paid about $500,000 in claims based on controlled substance prescriptions “written” by over 1,200 doctors after they died. The extent to which these claims were paid due to fraud is not known. For example, in the course of our work, we found that certain nursing homes use long-term care pharmacies to fill prescriptions for drugs. One long-term care pharmacy dispensed controlled substances to over 50 beneficiaries after the date of their death because the nursing homes did not notify the pharmacy of their deaths prior to delivery of the drugs. 
The nursing homes that received the controlled substances, which included morphine, Demerol, and Fentanyl, were not allowed to return them because, according to DEA officials, the Controlled Substances Act of 1970 (CSA) does not permit the return of these drugs. Officials at two selected states said that unused controlled substances at nursing homes represent a waste of Medicaid funds and also pose a risk of diversion by nursing home staff. In fact, officials from one state said that certain nursing homes dispose of these controlled substances by flushing them “down the toilet,” which also poses environmental risks to our water supply. In addition to performing the aggregate-level analysis discussed above, we also performed in-depth investigations of 25 cases of fraudulent or abusive actions related to the prescribing and dispensing of controlled substances through the Medicaid program in the selected states. We have referred certain cases to DEA and the selected states for further investigation. The following provides illustrative detailed information on four cases we investigated: Case 1: The beneficiary used the identity of an individual who was killed in 1980 to receive Medicaid benefits. According to a state Medicaid official, he originally applied for Medicaid assistance in a California county in January 2004. During the application process, the man provided a Social Security card to a county official. When the county verified the Social Security Number (SSN) with SSA, SSA responded that the SSN was not valid. The county enrolled the beneficiary into Medicaid provisionally for 90 days under the condition that the beneficiary resolve the SSN discrepancy with SSA within that time frame. Although the beneficiary never resolved the issue, he remained in the Medicaid program until April 2007. 
Between 2004 and 2007, the Medicaid program paid over $200,000 in medical services for this beneficiary, including at least $2,870 for controlled substances that he received from the pharmacies. We attempted to locate the beneficiary but could not find him. Case 2: The physician prescribed controlled substances to the beneficiary after she died in February 2006. The physician stated that the beneficiary had been dying of a terminal disease and became unable to come into the office to be examined. The physician stated that in instances where a patient is compliant and needs pain medication, physicians will sometimes prescribe it without requiring an examination. A pharmacy eventually informed the physician that the patient had died and the patient’s spouse had continued to pick up her prescriptions for Methadone, Klonopin, and Xanax after her death. According to the pharmacy staff, the only reason they became aware of the situation was because an acquaintance of the spouse noticed him picking up prescriptions for a wife who had died months ago. The acquaintance informed the pharmacy staff of the situation. They subsequently contacted the prescribing physician. Since this incident, the pharmacy informed us that it has not filled another prescription for the deceased beneficiary. Case 3: A mother with a criminal history and Ritalin addiction used her child as a means to doctor shop for Ritalin and other similar controlled stimulants used to treat attention-deficit/hyperactivity disorder (ADHD). Although the child received overlapping prescriptions of methylphenidate and amphetamine medications during a 2-year period and was banned (along with his mother) from at least three medical practices, the Illinois Medicaid program never placed the beneficiary on a restricted recipient program. Such a move would have restricted the child to a single primary care physician or pharmacy, thus preventing him (and his mother) from doctor shopping. 
Over the course of 21 months, the Illinois Medicaid program paid for 83 prescriptions of ADHD controlled stimulants for the beneficiary, which totaled approximately 90,000 mg and cost $6,600. Case 4: Claims indicated that a deceased physician “wrote” controlled substance prescriptions for several patients in the Houston area. Upon further analysis, we discovered that the actual prescriptions were signed by a physician assistant who once worked under the supervision of the deceased physician. The pharmacy neglected to update its records and continued filling prescriptions under the name of the deceased prescriber. The physician assistant has never been a DEA registrant. The physician assistant told us that the supervising physicians always signed prescriptions for controlled substances. After informing her that we had copies of several Medicaid prescriptions that the physician assistant had signed for Vicodin and lorazepam, the physician assistant ended the interview. Although states are primarily responsible for the fight against Medicaid fraud and abuse, CMS is responsible for overseeing state fraud and abuse control activities. CMS has provided limited guidance to the states on how to improve the state’s control measures to prevent fraud and abuse of controlled substances in the Medicaid program. Thus, for the five state programs we reviewed, we found different levels of fraud prevention controls. For example, the Omnibus Budget Reconciliation Act (OBRA) of 1990 encourages states to establish a drug utilization review (DUR) program. The main emphasis of the program is to promote patient safety through an increased review and awareness of prescribed drugs. States receive increased federal funding if they design and install a point-of-sale electronic prescription claims management system to interact with their Medicaid Management Information Systems (MMIS), each state’s Medicaid computer system. 
Each state was given considerable flexibility on how to identify prescription problems, such as therapeutic duplication and overprescribing by providers, and how to use the MMIS system to prevent such problems. The level of screening, if any, states perform varies because CMS does not set minimum requirements for the types of reviews or edits that are to be conducted on controlled substances. Thus, one state required prior approval when ADHD treatments like Ritalin and Adderall were prescribed outside age limitations, while another state had no such controlled substance requirement at the time of our review. Under the Deficit Reduction Act (DRA) of 2005, CMS is required to initiate a Medicaid Integrity Program (MIP) to combat Medicaid fraud, waste, and abuse. DRA requires CMS to enter into contracts with Medicaid Integrity Contractors (MIC) to review provider actions, audit provider claims and identify overpayments, and conduct provider education. To date, CMS has awarded umbrella contracts to several contractors to perform the functions outlined above. According to CMS, these contractors cover 40 states, 5 territories, and the District of Columbia. CMS officials stated that CMS will award task orders to cover the rest of the country by the end of fiscal year 2009. CMS officials stated that MIC audits are currently under way in 19 states. CMS officials stated that most of the MIP reviews will focus on Medicaid providers and that the state Medicaid programs handle beneficiary fraud. Because the Medicaid program covers a full range of health care services and the prescription costs associated with controlled substances are relatively small, the extent to which MICs will focus on controlled substances is likely to be relatively minimal. The selected states did not have a comprehensive fraud prevention framework to prevent fraud and abuse of controlled substances paid for by Medicaid. 
The establishment of effective fraud prevention controls by the selected states is critical because the very nature of a beneficiary’s medical need—to quickly obtain controlled substances to alleviate pain or treat a serious medical condition—makes the Medicaid program vulnerable to those attempting to obtain money or drugs they are not entitled to receive. Instead of being used for legitimate purposes, these drugs may be used to support controlled substance addictions or sold on the street. As shown in figure 1 below, a well-designed fraud prevention system (which can also be used to prevent waste and abuse) should consist of three crucial elements: (1) preventive controls, (2) detection and monitoring, and (3) investigations and prosecutions. In addition, as shown in figure 1, the organization should also use “lessons learned” from its detection and monitoring controls and investigations and prosecutions to design more effective preventive controls. Preventive Controls: Fraud prevention is the most efficient and effective means to minimize fraud, waste, and abuse. Thus, controls that prevent fraudulent health care providers and individuals from entering the Medicaid program or submitting claims are the most important element of an effective fraud prevention program. Effective fraud prevention controls require that, where appropriate, organizations enter into data-sharing arrangements with other organizations to perform validation. System edit checks (i.e., built-in electronic controls) are also crucial in identifying and rejecting fraudulent enrollment applications or claims before payments are disbursed. Some of the preventive controls and their limitations that we observed at the selected states include the following. 
Federal Debarment and Exclusion: Federal regulation requires states to ensure that no payments are made for any items or services furnished, ordered, or prescribed by an individual or entity that has been debarred from federal contracts or excluded from Medicare and Medicaid programs. Officials from all five selected states said that they do not screen prescribing providers or pharmacies against the federal debarment list, also known as the Excluded Parties List System (EPLS). Further, officials from four states said when a pharmacy claim is received, they do not check to see if the prescribing provider was excluded by HHS OIG from participating in the Medicaid program. Drug Utilization Review: As mentioned earlier, states perform drug utilization reviews (DUR) and other controls during the prescription claims process to promote patient safety, reduce costs, and prevent fraud and abuse. The drug utilization reviews include prospective screening and edits for potentially inappropriate drug therapies, such as over-utilization, drug-drug interaction, or therapeutic duplication. In addition, selected states also require health care providers to submit prior authorization forms for certain drug prescriptions because those medications have public health concerns or are considered high risk for fraud and abuse. Each state has developed its DUR differently and some of the differences that we saw from the selected states include the following. Officials from certain states stated that they use the prospective screening (e.g., over-utilization or overlapping controlled substance prescriptions) as an automatic denial of the prescription, while other states generally use the prospective screening as more of an advisory tool for pharmacies. The types of drugs that require prior authorization vary greatly between the selected states. 
In states where it is used, health care providers may be required to obtain prior authorization if a specific brand name is prescribed (e.g., OxyContin) or if a dosage exceeds a predetermined amount for a therapeutic class of controlled substances (e.g., hypnotics, narcotics). Detection and Monitoring: Even with effective preventive controls, there is risk that fraud and abuse will occur in Medicaid regarding controlled substances. States must continue their efforts to monitor the execution of the prescription program, including periodically matching their beneficiary files to third-party databases to determine continued eligibility, monitor controlled substance prescriptions to identify abuse, and make necessary corrective actions, including the following: Checking Death Files: After enrolling beneficiaries, Medicaid offices in the selected states generally did not periodically compare their information against death records. Increasing the Use of the Restricted Recipient Program: In the course of drug utilization reviews or audits, the State Medicaid offices may identify beneficiaries who have abused or defrauded the Medicaid prescription drug program and restrict them to one health care provider or one pharmacy to receive the prescriptions. This program only applies to those beneficiaries in a fee-for-service arrangement. Thus, a significant portion of the Medicaid recipients (those in managed care programs) for some of the selected states are not subject to this program. Fully Utilizing the Prescription Drug Monitoring Program: Beginning in fiscal year 2002, Congress appropriated funding to the U.S. Department of Justice to support Prescription Drug Monitoring Programs (PDMP). These programs help prevent and detect the diversion and abuse of pharmaceutical controlled substances, particularly at the retail level where no other automated information collection system exists. 
If used properly, PDMPs are an effective way to identify and prevent diversion of drugs by health care providers, pharmacies, and patients. Some of the limitations of PDMPs at the selected states include the following: Officials from the five selected states said that physician participation in PDMP is not widespread and not required. In fact, one state did not have a Web-based PDMP; the health care provider has to put in a manual request to the agency to have a controlled substance report generated. No nationwide PDMP exists, and only 33 states had operational prescription drug monitoring programs as of June 2009. According to a selected state official, people would sometimes cross state borders to obtain prescription drugs in a state without a program. Investigations and prosecutions: Another element of a fraud prevention program is the aggressive investigation and prosecution of individuals who defraud the federal government. Prosecuting perpetrators sends the message that the government will not tolerate individuals stealing money and serves as a preventive measure. Schemes identified through investigations and prosecutions also can be used to improve the fraud prevention program. The Medicaid Fraud Control Unit (MFCU) serves as the single identifiable entity within state government that investigates and prosecutes health care providers that defraud the Medicaid program. In the course of our investigation, however, we found several factors that may limit its effectiveness. Federal regulations generally limit MFCUs from pursuing beneficiary fraud. According to MFCU officials at one selected state, this limitation impedes investigations because agents cannot use the threat of prosecution as leverage to persuade beneficiaries to cooperate in criminal probes of Medicaid providers. 
In addition, the MFCU officials in this selected state said that this limitation restricts the agency's ability to investigate organized crime related to controlled substances when the fraud is perpetrated by beneficiaries. Federal regulations do not permit federal funding for MFCUs to be used for routine computer screening activities, which are the usual monitoring function of the Medicaid agency. According to MFCU officials in one selected state, this issue has caused a strained working relationship with the state's Medicaid OIG, on whom they rely for claims information. The MFCU officials stated that, on the basis of fraud trends in other states, they wanted the Medicaid OIG to provide claims information on providers showing similar trends in their state. The Medicaid OIG cited the prohibition on routine computer screening activities when refusing to provide these data. In addition, these MFCU officials stated that the state Medicaid office and its OIG did not promptly incorporate improvements that the MFCU suggested pertaining to the abuse of controlled substances. DEA officials stated that although purchases of certain schedule II and III controlled substances by pharmacies are reported to and monitored by DEA, DEA does not routinely receive information on written or dispensed controlled substance prescriptions. In states with a PDMP, data on dispensed controlled substance prescriptions are collected and maintained by a state agency. In the course of an investigation of the diversion or abuse of controlled substances, DEA may request information from a PDMP. In states without a PDMP, DEA may obtain controlled substance prescription information from an individual pharmacy's records during the course of an inspection or investigation. To address the concerns that I have just summarized, we made four recommendations to the Administrator of CMS for establishing an effective fraud prevention system for the Medicaid program. 
Specifically, we recommended that the Administrator evaluate our findings and consider issuing guidance to the state programs to provide assurance on the following: (1) effective claims processing systems prevent the processing of claims from prescribing providers and dispensing pharmacies debarred from federal contracts (i.e., listed in EPLS) or excluded from the Medicare and Medicaid programs (listed in the LEIE); (2) DUR and restricted recipient program requirements adequately identify and prevent doctor shopping and other abuses of controlled substances; (3) effective claims processing systems are in place to periodically identify both duplicate enrollments and deaths of Medicaid beneficiaries and prevent the approval of claims when appropriate; and (4) effective claims processing systems are in place to periodically identify deaths of Medicaid providers and prevent the approval of claims when appropriate. CMS stated that it generally agrees with the four recommendations and that it will continue to evaluate its programs and will work to develop methods to address the issues identified in the accompanying report. Mr. Chairman, this concludes my prepared statement. Thank you for the opportunity to testify before the Subcommittee on some of the issues addressed in our report on continuing indications of fraud and abuse related to controlled substances paid for by Medicaid. I would be happy to answer any questions from you or other members of the Subcommittee. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses (1) continuing indications of fraud and abuse related to controlled substances paid for by Medicaid; (2) specific case study examples of fraudulent, improper, or abusive controlled substance activity; and (3) the effectiveness of internal controls that the federal government and selected states have in place to prevent and detect fraud and abuse related to controlled substances. To identify continuing indications of fraud and abuse related to controlled substances paid for by Medicaid, we obtained from the Centers for Medicare & Medicaid Services (CMS) and analyzed Medicaid prescription claims paid in fiscal years 2006 and 2007 in five states: California, Illinois, New York, North Carolina, and Texas. To identify other potential fraud and improper payments, we compared the beneficiaries and prescribers shown on the Medicaid claims to the Death Master File (DMF) from the Social Security Administration (SSA) to identify deceased beneficiaries and prescribers. To identify claims that were improperly processed and paid by the Medicaid program because the federal government had banned the prescribers or pharmacies from prescribing or dispensing to Medicaid beneficiaries, we compared the Medicaid prescription claims to the exclusion and debarment files from the Department of Health and Human Services Office of Inspector General (HHS OIG) and the General Services Administration (GSA). To develop specific case study examples in selected states, we identified 25 cases that illustrate the types of fraudulent, improper, or abusive controlled substance activity we found in the Medicaid program. 
To develop these cases, we interviewed pharmacies, prescribers, law enforcement officials, and beneficiaries, as appropriate, and also obtained and reviewed registration and enforcement action reports from the Drug Enforcement Administration (DEA) and HHS. To assess the effectiveness of internal controls that the federal government and selected states have in place to prevent and detect fraud and abuse related to controlled substances, we interviewed Medicaid officials from the selected state offices and CMS. More details on our scope and methodology can be found in the report that we issued today. We conducted this forensic audit from July 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with standards prescribed by the Council of the Inspectors General on Integrity and Efficiency (CIGIE). We found that 65 medical practitioners and pharmacies in the selected states had been barred or excluded from federal health care programs, including Medicaid, when they wrote or filled Medicaid prescriptions for controlled substances during fiscal years 2006 and 2007. Nevertheless, Medicaid approved the claims at a cost of approximately $2.3 million. The offenses that led to their exclusion from federal health programs included Medicaid fraud and illegal diversion of controlled substances. Our matching of Medicaid claims in the selected states against SSA's DMF found that controlled substance prescriptions for over 1,800 beneficiaries were filled after the beneficiaries had died. 
Even though the selected state programs stated that beneficiaries were promptly removed from Medicaid following their deaths on the basis of either SSA DMF matches or third-party information, these same state programs paid over $200,000 during fiscal years 2006 and 2007 for controlled substance prescription claims filled after beneficiaries had died. In addition, our analysis found that Medicaid paid about $500,000 in claims based on controlled substance prescriptions "written" by over 1,200 doctors after they had died. In addition to performing the aggregate-level analysis discussed above, we performed in-depth investigations of 25 cases of fraudulent or abusive actions related to the prescribing and dispensing of controlled substances through the Medicaid program in the selected states. We have referred certain cases to DEA and the selected states for further investigation. The selected states did not have a comprehensive fraud prevention framework to prevent fraud and abuse of controlled substances paid for by Medicaid. The establishment of effective fraud prevention controls by the selected states is critical because the very nature of a beneficiary's medical need--to quickly obtain controlled substances to alleviate pain or treat a serious medical condition--makes the Medicaid program vulnerable to those attempting to obtain money or drugs they are not entitled to receive. Fraud prevention is the most efficient and effective means of minimizing fraud, waste, and abuse. Thus, controls that prevent fraudulent health care providers and individuals from entering the Medicaid program or submitting claims are the most important element of an effective fraud prevention program. Effective fraud prevention also requires that, where appropriate, organizations enter into data-sharing arrangements with other organizations to validate enrollment and claims information. 
System edit checks (i.e., built-in electronic controls) are also crucial in identifying and rejecting fraudulent enrollment applications or claims before payments are disbursed.
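As an illustration of the prepayment edit checks and data matching described above, a claims processing system might screen each prescription claim against a death master file and a provider exclusion list before approving payment. The following sketch is purely illustrative; the record layouts, field names, and matching keys are assumptions, not actual CMS, SSA, or OIG formats.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Claim:
    beneficiary_id: str  # hypothetical identifiers, not real CMS fields
    prescriber_id: str
    fill_date: date

def screen_claim(claim, death_dates, excluded_providers):
    """Return the reasons to deny a claim; an empty list means it passes.

    death_dates maps beneficiary IDs to dates of death (a stand-in for an
    SSA death master file match); excluded_providers is a set of provider
    IDs (a stand-in for the OIG exclusion and GSA debarment lists).
    """
    reasons = []
    # Reject claims filled after the beneficiary's recorded date of death.
    died = death_dates.get(claim.beneficiary_id)
    if died is not None and claim.fill_date > died:
        reasons.append("beneficiary deceased before fill date")
    # Reject claims written by an excluded or debarred prescriber.
    if claim.prescriber_id in excluded_providers:
        reasons.append("prescriber excluded or debarred")
    return reasons

# A post-death claim and a claim written by an excluded prescriber.
deaths = {"B1": date(2006, 3, 1)}
excluded = {"P9"}
print(screen_claim(Claim("B1", "P2", date(2006, 5, 1)), deaths, excluded))
print(screen_claim(Claim("B2", "P9", date(2006, 5, 1)), deaths, excluded))
```

A real implementation would match on stronger identifiers, handle data-quality problems such as typographical errors and duplicate records, and log rejected claims for follow-up investigation; this sketch shows only the basic pre-payment screening logic.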
Over a period of decades, federal laws and regulations have established a process for the Environmental Protection Agency (EPA) and states to regulate “point sources” of pollution. Point sources are generally municipal and industrial facilities that discharge pollutants via a point, such as a pipe or other conveyance, directly to a body of water. EPA and the states issue permits to these entities to put limits on the types and amounts of pollutants such facilities can discharge. These laws and regulations have helped clean up major water quality problems and reduce the amount of pollutants directly discharged into surface waters. However, many of the nation’s waters are still not meeting water quality standards. For example, toxic algae such as Pfiesteria piscicida, which are associated with excessive amounts of nutrients (chemical elements such as nitrogen and phosphorus), have killed millions of fish in waters in Maryland, North Carolina, and Virginia and have caused adverse human health effects. Various pollutants also resulted in over 2,000 fish consumption advisories and more than 2,500 beach closings and advisories being issued in 1996 alone. Overall, EPA reports that over one-third of the nation’s waters that were assessed by states are still impaired. Nonpoint sources of water pollution, or diffuse sources, have been identified as the primary reason for these continued problems. Nonpoint sources of water pollution include a wide array of land-based activities such as timber harvesting, grazing, urban development, and agriculture. Figure 1.1 shows many such nonpoint sources in a watershed setting. Pollution comes from these disparate sources via the process of rainwater, snowmelt, or irrigation water moving over or through land surfaces. This results in pollutants, either dissolved or solid, being transported and eventually deposited into rivers, lakes, and coastal waters or introduced into groundwater. 
Airborne pollutants, sometimes transported long distances and then deposited in bodies of water, are also considered a source of nonpoint pollution, as is polluted groundwater that discharges into surface water. The types of pollutants vary with the activity involved and include sediment, nutrients, pesticides, pathogens (such as bacteria and viruses), salts, oil, grease, toxic chemicals, and heavy metals. Sediment is a common pollutant from many nonpoint-generating activities and can impair water quality by contaminating drinking water sources or silting in spawning grounds for certain aquatic species. Another common group of nonpoint pollutants, nutrients, can cause excessive plant growth; the subsequent decay of this organic matter depletes oxygen levels in the water, stressing or killing other aquatic life. Pesticides, pathogens, and other toxic substances associated with runoff from agriculture and other sources can also be hazardous to human health and aquatic life. The severity of any nonpoint impact depends on the amount of pollutants actually reaching a body of water and the ability of the receiving waters to assimilate or transport those pollutants. Nonpoint source pollution is much more difficult to track than point source pollution. Because the sources are diffuse, it is very difficult to pinpoint the exact amount of pollutants coming from individual sources, including pollutants from natural sources, particularly for pollutants such as sediment that may result from a wide variety of activities and sources. In addition, control practices vary in their effectiveness depending on many site-specific characteristics, such as soil type, topography, and climate. As a result, there is much uncertainty in quantifying nonpoint source pollution stemming from specific sources and in tracking improvements resulting from control practices. The nature and extent of nonpoint source pollution is essentially a function of the way individuals use the land. 
Therefore, regulating these activities has been a sensitive issue, since land use decisions are largely made at the local level and influenced by state policies. As a result, the Congress has left the actual control and regulation of nonpoint source pollution up to the states while addressing the importance of dealing with the problem in amendments to the Clean Water Act in 1987. Specifically, section 319 of the Clean Water Act, added in 1987, provides a limited federal role in addressing nonpoint pollution. Under this section, EPA provides federal funds and management and technical assistance to states to implement nonpoint source management programs. In their nonpoint source assessments completed in 1989, states identified waters that, without additional controls over nonpoint sources, will not meet water quality standards. The states also developed management programs to deal with the problems. In addition, section 6217 of the Coastal Zone Act Reauthorization Amendments of 1990, administered jointly by EPA and the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA), outlines a more rigorous process for states to deal with nonpoint sources affecting coastal waters. Section 6217 requires states to address significant sources of nonpoint pollution from agriculture, forestry, urban areas, marinas, and hydromodification. This program differs markedly from section 319 in that states are required to include in their programs enforceable policies and mechanisms to ensure that management measures addressing these sources are implemented. In addition to the federal role explicitly authorized by section 319, other federal agencies are authorized to encourage more environmentally sensitive land use practices. For example, some federal programs use a voluntary cost-share approach with private landowners to encourage improved land use actions, particularly with regard to controlling soil erosion and improving agricultural practices. 
The Clean Water Act acknowledges that federal agencies are also potential sources of nonpoint pollution, via their own facilities and activities or via activities for which they issue permits or licenses, such as grazing and timber harvesting. Therefore, the act includes provisions whereby federal agencies are to ensure that their activities are “consistent” with state nonpoint source pollution management programs. States can review selected federal projects and activities to determine whether they conflict with the states’ nonpoint management programs. In accordance with procedures outlined in an executive order regarding intergovernmental review of federal programs, federal agencies are required to consult with the states and make efforts to accommodate their concerns or explain their decisions not to do so. In February 1998, the administration proposed a new plan to address the nation’s remaining water quality problems. Among the “Clean Water Action Plan’s” primary goals are providing new resources to communities to control nonpoint source pollution, strengthening public health protection, and encouraging community-based watershed protection in high-priority areas. The Action Plan also recognizes the role that federal land management agencies must play in protecting the water resources on their lands, as well as federal agencies’ roles in providing technical and financial assistance to states and private entities to better deal with nonpoint source pollution. 
The Chairman, Subcommittee on Water Resources and Environment, House Committee on Transportation and Infrastructure, asked us to (1) provide background information and funding levels for federal programs that primarily address nonpoint source pollution (i.e., those programs that either focus primarily on nonpoint source pollution or devote at least $10 million annually to the problem); (2) examine the way EPA assesses the overall potential costs of reducing nonpoint source pollution nationwide and alternative methods for doing so; and (3) describe nonpoint source pollution from federal facilities, lands, and activities that federal agencies manage or authorize, or for which they issue permits or licenses. To address the first objective, we surveyed agencies to obtain information on program purpose, key goals and objectives, program funding and staffing levels, matching requirements, and opinions on the potential impact of the Clean Water Action Plan. For relevant Clean Water Act sections, we also included additional questions about how EPA allocates funds across projects, regions, and states. We pretested our survey with officials in the U.S. Department of Agriculture (USDA), EPA, the Fish and Wildlife Service, and the Army Corps of Engineers. In order to identify the most important nonpoint source pollution programs, we asked agencies to respond to our survey for programs meeting at least one of the following two criteria: (1) program expenditures addressing nonpoint source pollution exceeded $10 million for at least 1 year during fiscal years 1994 through 1998 or (2) the program primarily addressed nonpoint source pollution regardless of program expenditures. We sent survey instruments to over 100 programs that we identified through our prior reports, agency background information, and discussions with agency officials at EPA; NOAA; and the Departments of Agriculture, Defense, Energy, Interior, and Transportation. 
The response rate for our survey was 100 percent. For the second objective, we reviewed EPA’s nonpoint source pollution component of the Needs Survey, examining the analytical structure of the models, the reasonableness of key assumptions, and the completeness of data using standard economic and statistical principles. We also interviewed EPA officials and contractor staff responsible for developing and using the models and requested model documentation. We interviewed EPA staff involved with the 1996 report as well as staff working on the report to be issued in 2000. We consulted with experts in water quality modeling from EPA, USDA’s Natural Resources Conservation Service and the Economic Research Service, and Interior’s U.S. Geological Survey. We also reviewed pertinent scientific literature to help identify alternative methodologies for a conceptual framework for estimating nationwide control costs. For the third objective, we identified the primary federal agencies that manage or authorize, or issue permits or licenses for, activities or facilities that result in nonpoint source pollution by interviewing officials at EPA; the Army Corps of Engineers; the Federal Energy Regulatory Commission; and the Departments of Agriculture, Defense, Energy, Interior, and Transportation. We limited our investigation into nonpoint source pollution-generating activities to those that are not regulated under EPA’s point source or stormwater permit requirements. For example, we excluded sources such as construction sites larger than 5 acres or certain industrial activities that must comply with stormwater runoff requirements to address nonpoint source pollution. Because quantitative data on federal agencies’ nonpoint source pollution contribution generally do not exist, we developed an array of other indicators to help characterize agencies’ possible contributions. 
The primary factors were the extent of agency involvement in nonpoint source-generating activities, the types of impacts that result from the activities, circumstances that may influence the impacts, and management practices that can minimize the impacts. We developed these factors based on a review of scientific research and discussions with federal and state officials. To collect information on the factors, we interviewed a wide array of agency officials, including headquarters program managers, research scientists, and field staff, to understand the range of activities, resulting water quality impacts, and management practices used. We also reviewed scientific literature that described types and ranges of impacts and results of management practices applied for specific nonpoint source pollution-generating activities. We interviewed water quality officials from five states with large portions of federal land—Arizona, California, Colorado, Oregon, and Utah—to understand how federal activities factored into state water quality issues. We judgmentally selected these states from states with at least 25 percent federal land in order to obtain information on the types of nonpoint source pollution associated with a diverse array of federal agencies. In addition, we obtained geographic data from the U.S. Geological Survey describing the percentage of land area owned by the federal government in watersheds across the country. We did not verify the reliability of these data. We conducted our work from February 1998 through January 1999 in accordance with generally accepted government auditing standards. We provided copies of a draft of this report to EPA; the Federal Energy Regulatory Commission (FERC); and the Departments of Agriculture, Commerce, Defense, Interior, and Transportation, for review and comment. Agriculture, Interior, FERC, and NOAA provided written comments. Their comments and our responses are included in appendixes III through VI. 
EPA provided oral comments and other information, which we discuss at the end of chapters 2 and 3. Defense and Transportation had no comments. We also provided relevant sections of the draft report to representatives of each of the five states included in our review to verify statements attributed to them and other information they provided. We made revisions as appropriate to incorporate their comments. As the nation’s lead environmental organization, EPA implements a number of significant programs to deal with nonpoint source pollution. Other federal agencies, however, have also made considerable investments in addressing the problem. USDA funding in particular has eclipsed EPA’s financial commitment by a significant margin. Overall, the seven agencies we surveyed reported obligating about $14 billion for fiscal years 1994 through 1998 on 35 programs addressing nonpoint pollution. Total obligations during this period have been relatively stable—at about $3 billion each year—but obligations at EPA, in particular, increased significantly during this period. In February 1998, the administration proposed a plan designed to more effectively address the nation’s remaining water quality problems. The Clean Water Action Plan proposed $568 million in additional funding for fiscal year 1999 and a total increase of $2.3 billion over the 5 years from fiscal year 1999 through fiscal year 2003. According to the Action Plan, many of its activities will augment programs at EPA and a number of other agencies to deal with nonpoint source pollution. Recognizing the interdisciplinary nature of the problem, the plan also calls for closer cooperation and coordination among these agencies. The 35 federal programs identified by the agencies represent a broad array of activities, reflecting diversity in both the nature of nonpoint source pollution and the remedies needed to address it. Some programs are intended to deal directly with the problem. 
EPA’s National Nonpoint Source Program, for example, provides financial and technical assistance to help states develop their own nonpoint source management programs and to fund specific projects. Other programs are primarily focused on other objectives but indirectly serve to address specific nonpoint source pollution problems. For example, Interior’s Abandoned Mine Land Program is intended primarily to reclaim abandoned mines for health and safety reasons (e.g., to address dangers such as open mine shafts), but in doing so significantly addresses potentially contaminated stormwater runoff from these facilities. A further distinction among these programs is that some provide financial and technical resources to nonfederal entities to address nonpoint source pollution such as providing resources to farmers to implement certain land management practices, while other programs are focused directly on addressing such pollution on federal land. As figure 2.1 illustrates, USDA dominates federal nonpoint source pollution obligations, with significant financial commitments also made by EPA and Interior. The primary EPA programs that fund nonpoint source pollution control activities include the National Nonpoint Source Program and the Clean Water State Revolving Fund Program (CWSRF). Overall, about $987.2 million was obligated for these programs to address nonpoint source pollution for fiscal years 1994 through 1998. The Drinking Water State Revolving Fund and the Chesapeake Bay programs also address nonpoint source pollution although their portions of funding to do so are significantly smaller than the National Nonpoint Source and CWSRF programs. As requested, we also identified other programs authorized by the Clean Water Act that address nonpoint source pollution in some manner. The four other programs that we identified are focused primarily on objectives other than nonpoint pollution, and consequently, just a small amount of program funding went to nonpoint pollution. 
Background and funding data on these programs are in appendix I. Figure 2.2 shows the percentage breakdown of total obligations for fiscal years 1994 through 1998 for EPA’s programs. Section 319 of the Clean Water Act established a national nonpoint source program under which states (1) assessed the extent to which nonpoint sources cause water quality problems and (2) developed management programs to address these problems. EPA was charged with reviewing and approving these programs and is authorized to provide grants to states for implementing their activities and programs. Grants have been used for a wide variety of activities, including technical assistance, financial assistance, education, training, technology transfer, and demonstration projects. The funds also support monitoring efforts to assess the success of specific nonpoint source implementation projects. EPA estimated that for fiscal years 1994 through 1998, the agency obligated about $544 million to address nonpoint source pollution, with obligations of $119 million in fiscal year 1998. According to EPA, all states have approved nonpoint source control programs that are helping to reduce nonpoint source loadings, increase public awareness, and improve water quality. While the program’s funding was relatively stable during the 5-year period, its annual funding is significantly higher than it was in prior years. In fiscal year 1990, for example, $38 million was appropriated for the program. EPA uses a formula to allocate the states’ share of the total federal funding appropriated each year for these grants. The formula considers each state’s population, cropland acreage, pasture and rangeland acreage, forest harvest acreage, wellhead protection allotment (the acreage around a groundwater drinking source designated for protection), critical aquatic habitat acreage, mining acreage, and amounts of pesticides applied. The formula also includes a set-aside for Indian tribes. 
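An allocation formula of this general shape, in which each state's grant depends on its share of several weighted national factors, can be sketched as follows. The weights, the tribal set-aside fraction, and the state figures below are invented for illustration and are not the values EPA actually uses; the sketch also assumes the weights sum to 1.

```python
# Hypothetical sketch of a weighted-factor grant allocation like the
# section 319 formula described above. All numbers here are invented.

def allocate(total, state_factors, weights, tribal_share=0.01):
    """Split `total` among states by weighted shares of national factor totals.

    Each factor is normalized to a national share before weighting, so the
    raw units (people, acres, pounds of pesticide) do not matter. Assumes
    the weights sum to 1, so the state shares also sum to 1.
    """
    pool = total * (1 - tribal_share)  # the tribal set-aside comes off the top
    national = {f: sum(s[f] for s in state_factors.values()) for f in weights}
    return {
        state: pool * sum(w * vals[f] / national[f] for f, w in weights.items())
        for state, vals in state_factors.items()
    }

# Two-state example using just two of the formula's factors.
grants = allocate(
    total=100.0,
    state_factors={"A": {"population": 2, "cropland": 1},
                   "B": {"population": 2, "cropland": 3}},
    weights={"population": 0.5, "cropland": 0.5},
)
print(grants)
```

With equal weights on the two factors, state A holds half the national population and a quarter of the cropland, so it receives 37.5 percent of the pool that remains after the 1 percent set-aside.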
Data used in the formula are obtained from the national census, USDA and EPA data bases, and background reports developed on related topics. EPA’s Clean Water State Revolving Fund Program was established under title VI of the Clean Water Act in 1987 to create, maintain, and coordinate financial programs and partnerships to meet priority community water resource infrastructure needs, primarily those associated with wastewater treatment plants. Under the program, EPA provides grants to capitalize states’ funds. The states, in turn, identify investment priorities allowed by the statute and manage the loan program. As a condition of receiving federal funds, states provide a matching amount equal to 20 percent of the total grant and agree to use the money first to ensure that wastewater treatment facilities are in compliance with deadlines, goals, and requirements of the Clean Water Act (also known as the “first use” requirement). In addition to federal and state matching funds, the revolving fund is also funded by the issuance of bonds, interest earnings, and loan repayments. According to EPA, federal funding currently accounts for about one-half of total program funding. As loans are repaid, the fund is replenished and loans are made for other eligible projects. All states have met their priority needs and, therefore, may use CWSRF funds to support programs to deal with nonpoint source pollution and protect their estuaries. We reported in 1991 that only two states were using their CWSRF funds to support nonpoint source pollution projects. Since then, however, states’ reliance on the CWSRF to fund nonpoint pollution-related activities has grown considerably. According to EPA, 18 states currently use their CWSRFs for this purpose. EPA is encouraging states to use CWSRF funds for nonpoint source control and has set a goal of having 30 states doing so by the end of the decade. 
Other EPA goals for increasing CWSRF emphasis on nonpoint pollution include ensuring that CWSRF funding decisions are made in a manner that enables states to direct funds based on environmental priorities—whether they be point or nonpoint in nature. Such a strategy could be expected to place increasing emphasis on addressing nonpoint pollution because most remaining water quality problems are attributed to nonpoint sources. EPA has set a goal for 15 states to be doing so by 1999. In addition, over the next 3 years, EPA plans to increase the number and dollar amount of CWSRF loans annually for polluted runoff control to 10 percent of all CWSRF funds loaned. Figures provided by EPA show that federal CWSRF funds devoted to nonpoint source pollution have increased significantly in recent years. For example, figure 2.3 shows that funding for nonpoint source pollution increased about 380 percent from fiscal year 1994 to fiscal year 1995. EPA estimates that about $442.8 million of the $7.1 billion appropriated to the program was devoted to addressing nonpoint pollution for the 5 fiscal years included in our study. Federal CWSRF funds to address nonpoint source pollution in fiscal year 1998 were estimated at $96.3 million. According to EPA, it uses percentages provided by the Congress to allocate funds to states after setting aside 1/2 percent of appropriated funds for Indian tribes for wastewater treatment purposes. The basis for the state percentages includes population and documented wastewater treatment needs. In addition, 1 percent or $100,000 (whichever is greater) is deducted from each state’s allotment for planning purposes—as required by section 604(b) of the Clean Water Act. The Drinking Water State Revolving Fund Program (DWSRF) was established by Congress under the Safe Drinking Water Act Amendments of 1996 to help public water systems make infrastructure improvements in order to comply with national primary drinking water standards and to protect public health. 
Funds are distributed among states in accordance with an allotment formula, with the condition that each state receive a minimum of 1 percent of the funds available for allotment. The allotment formula used for fiscal year 1998 reflects the needs identified in the most recent Drinking Water Infrastructure Needs Survey, the first of which was released in January 1997. States are required to describe the use of funds awarded to them in a plan that is distributed to the public for review and comment. Fiscal year 1997 was the first year for DWSRF appropriations, and the program received $1.275 billion; $725 million was appropriated in fiscal year 1998. Under the DWSRF Program, states can use federal capitalization grant money awarded to them to set up an infrastructure funding account from which loans are made available to public water systems. In addition to authorizing the infrastructure fund, the Congress placed a strong new emphasis on preventing contamination problems through source water protection and enhanced water systems management. States have the flexibility to set aside up to 31 percent of their capitalization grant to develop and implement programs that encourage better drinking water systems operation to ensure a safer supply of water for the public. The four broad set-aside categories for which a state can choose to reserve funds are (1) administrative and technical assistance (up to 4 percent), (2) state program management (up to 10 percent, which must be matched dollar for dollar), (3) small systems technical assistance (up to 2 percent), and (4) local assistance and other state programs (up to 15 percent, consisting primarily of activities devoted to protecting drinking water sources from contamination). According to EPA, states reserved approximately 21 percent of the fiscal year 1997 appropriation to fund set-aside activities. The local assistance and other state set-asides contain several nonpoint source-related activities.
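The four set-aside ceilings just listed can be expressed as a short sketch. The percentage caps come from the report; the capitalization grant amount is hypothetical.

```python
# Sketch of the DWSRF set-aside ceilings described above.
# Percentages are from the report; the grant amount is hypothetical.

SET_ASIDE_CAPS = {
    "administrative and technical assistance": 0.04,
    "state program management": 0.10,   # must be matched dollar for dollar
    "small systems technical assistance": 0.02,
    "local assistance and other state programs": 0.15,
}

def max_set_asides(grant):
    """Maximum dollars a state could reserve in each set-aside category."""
    return {name: grant * cap for name, cap in SET_ASIDE_CAPS.items()}

grant = 10_000_000.0                    # hypothetical capitalization grant
caps = max_set_asides(grant)
total_cap = sum(caps.values())
print(round(total_cap / grant, 2))      # 0.31 -- up to 31 percent may be reserved
```

Note that the 31-percent figure is a ceiling, not a requirement; as the report observes, states actually reserved about 21 percent of the fiscal year 1997 appropriation.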
For example, source water protection activities, such as purchasing land as easements to reduce the likelihood of ground water contamination, can help reduce the generation of nonpoint source pollutants. In addition, in fiscal year 1997, states could use this set-aside to conduct source water delineations and assessments. These activities identify the areas around ground water drinking water sources that must be protected to avoid contamination and the possible sources of contamination. EPA reported that 100 percent of the funds obligated for these activities, $111.8 million, should be considered as addressing nonpoint source pollution. In addition to providing funding to delineate and assess source water protection areas, the set-asides made available by the DWSRF Program provide states with funds to implement protection measures. These protection measures can address all sources of contamination, which may include nonpoint sources. EPA reports that the state program management and local assistance and other state programs set-asides are the ones most likely to be used for nonpoint source-related activities and can fund activities such as education, loans to public water systems for the purchase of land easements, and community tree planting. The Chesapeake Bay Program, authorized by section 117 of the Clean Water Act, is a unique regional partnership involving many different constituencies, including federal, state and local agencies; environmental groups; a citizens advisory group; and academia. The program has been directing and conducting the restoration of the Chesapeake Bay since 1983 and is focusing heavily on reducing levels of nitrogen and phosphorus, which are key pollutants responsible for degrading aquatic habitat and the Bay’s productivity. EPA estimates that about $52 million was obligated to address nonpoint source pollution out of $101.4 million total program appropriations for fiscal years 1994 through 1998.
EPA uses a formula to allocate about one-half of appropriated funds to the key states in the Chesapeake Bay watershed—Virginia (30 percent), Maryland (30 percent), Pennsylvania (30 percent), and the District of Columbia (10 percent). States must match federal funds dollar for dollar. Funds may be used for various activities such as (1) educating selected audiences on the importance of reducing nonpoint source pollution, (2) preventing excessive livestock contact with streams to reduce streambank erosion and direct nutrient loadings, and (3) monitoring and tracking reduction of point source nutrient loads. A competitive process is used to allocate remaining program funds to specific projects. A number of other EPA programs authorized by the Clean Water Act address nonpoint source pollution although not necessarily as a direct program objective. These include the National Wetlands Program (section 104(b)(3)); the Water Pollution Control, State and Interstate Program Support Program (section 106); the Clean Lakes Program (section 314); and the National Estuary Program (section 320). These programs accounted for $3.9 million in nonpoint-related obligations for fiscal years 1994 through 1998 and are discussed in appendix I. In the late 1980s and early 1990s, USDA dramatically shifted its emphasis toward water quality issues because of the adverse impacts of agricultural production on water quality. In prior years, USDA’s water quality activities were limited in scope. In 1992, for example, we reported that a small percentage of USDA funds were going to water quality activities—about $62.5 million in fiscal year 1991 of the $1.7 billion appropriated for 10 cost-share programs. In contrast, as shown in figure 2.4, USDA reported that the Conservation Reserve and the Environmental Quality Incentives Programs devoted almost $2 billion to nonpoint source pollution-related activities in fiscal year 1998.
By far, USDA’s largest source of funding for nonpoint pollution activities is the Conservation Reserve Program, which accounted for about 65 percent of all the federal funds identified in this report obligated to address nonpoint source pollution for fiscal years 1994 through 1998. The program was established in 1985 and has several objectives: reduce water and wind erosion, protect the nation’s long-term capability to produce food and fiber, reduce sedimentation, improve water quality, create and enhance wildlife habitat, and encourage more permanent conservation practices. The program encourages private land owners, such as farmers, to remove highly erodible cropland or other environmentally sensitive acreage from production and apply conservation measures to reduce and control erosion and water quality impacts. USDA provides farmers with an annual rental payment for the term of a multiyear contract for taking the land out of production and cost-sharing benefits to apply the necessary conservation measures. Land may be enrolled in the Conservation Reserve Program by three means: (1) a general signup, which competitively selects the most environmentally sensitive land (most land is enrolled into the program by this method); (2) a continuous noncompetitive signup of highly desirable environmental practices such as filter strips (areas of grass or other vegetation that filter runoff by trapping sediment, pesticides, and other pollutants) and riparian buffers (areas of trees and/or shrubs next to ponds, lakes, and streams that filter pollutants from runoff as well as provide shade, food sources, and shelter for fish and other wildlife); and (3) the Conservation Reserve Enhancement Program, which combines the resources of the federal and state governments to address targeted environmental concerns—such as the Chesapeake Bay. As of October 1998, there were about 30 million acres enrolled in the Conservation Reserve Program.
According to USDA’s response to our survey, while the Conservation Reserve Program has no specific nonpoint source objectives, “multiple, indistinguishable benefits for water quality, wildlife habitat, air quality, and erosion control are achieved from all acreage enrolled in CRP.” For this reason, USDA officials explained that 100 percent of the Conservation Reserve Program funds should be considered as addressing nonpoint source pollution because all activities carried out under the program involve land use practices that help reduce nonpoint pollution. This amounted to approximately $9.2 billion for fiscal years 1994 through 1998. Program funding in fiscal year 1998 was estimated at $1.7 billion. USDA’s Environmental Quality Incentives Program (EQIP) was created by the Federal Agriculture Improvement and Reform Act of 1996 and combined several existing conservation programs—the Agricultural Conservation Program (which includes Water Quality Incentives Projects), the Colorado River Salinity Control Program, and the Great Plains Conservation Program—into a single program. The program provides flexible technical, financial, and educational assistance to private land owners, such as farmers and ranchers, who face serious threats to soil, water, and related natural resources on their land, including grazing land, wetland, forest land, and wildlife habitat. This program provides cost-share assistance for up to 75 percent of the cost of certain conservation practices such as filter strips, manure management facilities, and wildlife habitat improvement. The primary difference between this program and the Conservation Reserve Program is that farmers do not retire land from production under EQIP. 
Instead, farmers implement practices that minimize water quality impacts while allowing them to continue using the land. In addition, unlike the Conservation Reserve Program, EQIP provides cost-share assistance and incentive payments that can be made for up to 3 years to encourage producers to perform land management practices such as nutrient, manure, and integrated pest management. The Conservation Reserve Program, on the other hand, provides annual rental payments for the land taken out of production and focuses on cropland and marginal pasture land, while EQIP focuses on a broader range of land uses. According to USDA, the agency obligated approximately $642 million under this program for fiscal years 1996 through 1998. The agency said that all of the funds addressed nonpoint source pollution, noting that EQIP is intended solely to address nonpoint source pollution from farms and ranches. Program funding to address nonpoint source pollution in fiscal year 1998 was estimated at $232 million. USDA identified 12 additional programs that address nonpoint source pollution. The environmental objectives of the programs vary, ranging from improving scientific understanding of the nature of the problem to direct efforts to reduce nonpoint pollution. The National Research Initiative Competitive Grants Program, for example, provides grants to increase the amount and the quality of science applied to the needs of agriculture and forestry. From fiscal years 1994 through 1998, USDA estimated that about $28.8 million of the $456.3 million total appropriated program funding (plus full-time equivalents) was obligated to address nonpoint source pollution, with about $5.2 million obligated in fiscal year 1998. The Watershed Protection and Flood Prevention Program works with state and local entities in planning and implementing watershed improvement projects, such as promoting soil conservation or improving flood prevention. USDA reported that almost 1,000 watershed projects receive funding.
In the past 5 fiscal years, this program has obligated about $433 million to address nonpoint source pollution. Other USDA programs address such diverse objectives as measuring the impact of farming systems on water quality, providing educational and technical assistance programs for voluntary adoption of improved management practices to enhance or protect water quality, and enhancing wildlife habitat. Overall, these 12 additional USDA programs accounted for $1.7 billion of the estimated $11.5 billion USDA obligated to address nonpoint source pollution during the 5-year period. These programs are discussed in appendix II. In addition, the Forest Service noted that a portion of its budget supports controlling nonpoint source pollution, but the agency does not track it in a way that can be reported. Within the Department of the Interior, programs related to nonpoint source pollution include those administered by the Bureau of Land Management, the Bureau of Reclamation, the U.S. Geological Survey, the U.S. Fish and Wildlife Service, and the Office of Surface Mining Reclamation and Enforcement. These agencies are involved in water quality efforts because of their primary responsibilities, which include ensuring adequate supplies of water for drinking and agricultural purposes within arid locations of the United States, protecting endangered and other trust species and wildlife habitat, and reclaiming resources impaired by mining activities.

Abandoned Mine Land Program

Among Interior’s programs, the Office of Surface Mining Reclamation and Enforcement’s Abandoned Mine Land (AML) Program provides the greatest financial contribution toward addressing nonpoint source pollution, accounting for nearly 45 percent of Interior’s obligations in the past 5 fiscal years.
Created by the Surface Mining Control and Reclamation Act of 1977, this program—mostly run by states with approved programs—restores and reclaims coal mine sites that were abandoned or left inadequately reclaimed before August 3, 1977. Surface mining causes land disturbances that may result in erosion and exposes minerals that can leach toxic chemicals, if left inadequately reclaimed. While the act was set up specifically to deal with coal mine reclamation, states can use funds to clean up abandoned noncoal sites if reclamation of all their abandoned coal sites has been completed. Interior collects fees from all active coal mining operations on a per-ton-of-coal-mined basis, which are deposited into an interest-bearing Abandoned Mine Reclamation Fund. Expenditures from the fund are authorized through the regular congressional budgetary and appropriations process, and are used to pay the costs of AML reclamation projects. Realizing that coal fees would not generate the revenue needed to address every potential eligible site, the Congress provided the states and Indian tribes with the flexibility to decide which projects to fund. The act specifies that 50 percent of the reclamation fees collected in each state and Indian tribe with an approved reclamation program be allocated to that state or tribe for use in its reclamation program. Interior uses the remaining 50 percent for purposes such as funding emergency and high-priority projects in states and Indian tribes without approved AML programs, funding a federal abandoned mine program in USDA, and providing financial assistance to small coal operators (who produce less than 300,000 tons of coal annually). According to agency officials in the Division of Reclamation Support, about 90 percent of total program funds addressed nonpoint source pollution problems. For fiscal years 1994 through 1998, this amounted to approximately $626.3 million, or about $125 million each year.
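The fee allocation just described can be sketched as follows. The 50/50 split and the roughly 90-percent nonpoint share are taken from the report; the dollar figures in the example are hypothetical.

```python
# Sketch of the AML fee allocation described above. The fee receipts
# figure is hypothetical; the 50/50 split and the agency's ~90-percent
# nonpoint share are taken from the report.

def allocate_aml_fees(fees_collected):
    """Split fee receipts: half allocated to the collecting state or tribe
    with an approved reclamation program, half retained for federal uses
    (emergency projects, the USDA program, small-operator assistance)."""
    state_share = 0.50 * fees_collected
    federal_share = fees_collected - state_share
    return state_share, federal_share

def nonpoint_share(program_funds, fraction=0.90):
    """Portion of AML program funds estimated to address nonpoint pollution."""
    return program_funds * fraction

state, federal = allocate_aml_fees(200.0)   # hypothetical $200 million in fees
print(state, federal)                       # 100.0 100.0
print(nonpoint_share(125.0))                # 112.5 -- of ~$125M/year obligated
```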
Interior identified 13 other programs that address nonpoint source pollution. Environmental objectives for these programs vary from efforts to directly control nonpoint pollution to efforts that indirectly control the problem. For example, the Fish and Wildlife Service’s Clean Vessel Act Pumpout Grant Program directly addresses nonpoint source pollution by significantly reducing the amount of sewage discharged from boats. According to the Service, for fiscal years 1994 through 1998, $40 million was awarded in grants to states to fund the installation of pumpout and dump stations for land-based disposal of vessel sewage. On the other hand, the Fish and Wildlife Service’s Partners for Fish and Wildlife Program indirectly addresses nonpoint source pollution by restoring habitat such as providing native, diverse riparian habitat (areas alongside rivers, lakes, and ponds) for certain migratory birds and aquatic species. These efforts help reduce nonpoint pollution by providing vegetation along bodies of water, which helps slow stormwater runoff and trap pollutants such as sediments and nutrients. In addition, several Bureau of Land Management programs obligate funds that address nonpoint source pollution on federal lands through a variety of objectives, such as enhancing riparian habitat and managing rangelands to protect water quality. Other program objectives include controlling salinity in the Colorado River and recording long-term spatial and temporal trends in atmospheric deposition. The remaining 13 programs accounted for about $810.7 million of Interior’s total estimated $1.4 billion obligated to address nonpoint source pollution over the past 5 fiscal years. These programs are discussed in appendix II. In addition to the EPA, USDA, and Interior programs, a few other programs were identified at the Departments of Commerce and Defense that target nonpoint source pollution problems either directly or indirectly. 
These programs accounted for a very small portion, less than 1 percent, of overall federal obligations on nonpoint source pollution for fiscal years 1994 through 1998. In addition, some agencies, such as those at the Departments of Defense and Transportation, spend significant funds to control certain classes of runoff regulated under EPA’s stormwater permit program, and these efforts also address other nonpoint sources in the process. However, these expenditures were not captured in our review. One program, administered by NOAA, is the Coastal Zone Management Program created under the Coastal Zone Management Act of 1972. The program is a voluntary partnership between the federal government and U.S. coastal states and territories that is intended to preserve, protect, develop, and where possible, restore and enhance the nation’s coastal resources. The statute also encourages the preparation of special area management plans that specify how significant natural resources are to be protected and promote reasonable coastal economic growth, improved protection of life and property in hazardous areas, and improved predictability in government decision making. NOAA estimated that of the $229 million total appropriated funding, it obligated approximately $23.8 million (including full-time equivalents) for fiscal years 1994 through 1998 to address nonpoint source-related problems. A second program, co-administered by NOAA and EPA, is the Coastal Nonpoint Pollution Control Program, authorized by section 6217 of the Coastal Zone Act Reauthorization Amendments of 1990. The amendments require states and territories to develop and implement coastal nonpoint pollution control programs. Once approved, these programs are to be implemented through changes to the state nonpoint source program approved by EPA under section 319 of the Clean Water Act and through changes to the state coastal zone management program.
To help states develop their programs, EPA published management measures for several categories of nonpoint pollution sources, such as agriculture, urban areas, forestry, marinas, and hydromodification, that lay out possible controls for reducing pollution from these sources. NOAA estimated that it obligated 100 percent of appropriated funds (plus full-time equivalents)—$12 million for fiscal years 1994 through 1998—to address nonpoint source pollution. The Department of the Army reported that its Integrated Training Area Management Program integrates Army training and other mission requirements for land use with natural resource management practices at Army installations used for training programs. The practices are directed at repairing existing damage to land and preventing future environmental compliance problems. The program provides a process for surveying and monitoring natural resource conditions, integrating training requirements with land condition status, and rehabilitating and repairing damaged areas. The program also provides environmental awareness training. For fiscal years 1996 through 1998, Army officials estimated that $50.4 million of the $95.1 million in total appropriated funding was obligated to address nonpoint source pollution. Defense officials noted that the Department spends the necessary resources addressing stormwater runoff from its facilities. While many of these activities respond to specific industrial stormwater permit requirements such as controlling runoff from an aircraft maintenance facility, the officials told us that they often also address other nonpoint sources as well. For example, Defense officials told us that in dealing with a stormwater permit requirement (which may include preventing pollutants from entering into a waterway or municipal stormwater system), they will often incorporate runoff from nearby areas that would have otherwise remained as an uncontrolled nonpoint source.
This consolidates stormwater runoff and helps reduce the volume of uncontrolled runoff from these facilities. Defense did not report obligations for projects such as this, however, since funds to address nonpoint pollution were combined with stormwater permit requirements and could not be separated easily. Similarly, a significant amount of the Department of Transportation’s funding is devoted to minimizing the impacts from highway construction and operation through the Surface Transportation Fund. For example, Transportation reported that about $288 million of these funds were obligated in fiscal year 1998 to address stormwater runoff. However, the majority of these funds were identified as primarily addressing runoff from road and highway construction projects that must meet stormwater permit requirements and thus are not discussed in this report. Some funds are eligible for specific nonpoint control projects such as retrofitting roads with detention ponds or vegetated buffers to better deal with runoff and minimize water quality impacts. A Transportation official reported that expenditures for these types of projects probably did not exceed our $10 million threshold and, like the Department of Defense’s, would be difficult to separate from other program obligations. In October 1997, the Vice President directed EPA and USDA to work with other federal agencies and the public to develop a Clean Water Action Plan. The plan, issued in February 1998, acknowledged the progress that had been made in past decades by focusing largely on point sources of pollution, but maintained that additional steps—and a more holistic approach—were needed to improve progress toward achieving the nation’s water quality goals. Specifically, the plan emphasizes the need to identify and address the major pollution sources affecting entire watersheds, whether they be from point sources, nonpoint sources, or a combination of the two.
The plan proposes an increase in federal water quality spending of over $2.3 billion during the next 5 fiscal years. The plan also proposes to focus federal dollars on priority problems by increasing coordination among the many federal agencies involved in this issue. The plan recognizes the increased importance of nonpoint source pollution in explaining the problems affecting many watersheds, noting that “polluted runoff is the greatest source of water quality problems in the nation today.” Accordingly, much of the plan, and a significant portion of funding under the plan, focuses on this problem. The Congress appropriated full funding of EPA’s proposed increases under the Action Plan. Of particular note, the plan nearly doubles the size of the state grants provided under EPA’s National Nonpoint Source Program from its fiscal year 1998 funding of $105 million to $200 million in fiscal year 1999. However, not all agencies received funding increases. For example, the plan proposed increasing the funding for USDA’s Environmental Quality Incentives Program by 50 percent, from $200 million in fiscal year 1998 to $300 million in fiscal year 1999. Instead, the fiscal year 1999 budget decreased the funding by $26 million, to $174 million in fiscal year 1999. Also, the plan proposed an increase of $36 million for the Army Corps of Engineers, but none of these additional funds were appropriated. The Department of Agriculture’s Natural Resources Conservation Service (NRCS) and Agricultural Research Service (ARS) each noted the omission of certain programs in this chapter. Specifically, NRCS cited the Wetlands Reserve Program and the Forestry Incentives Program, and ARS cited certain research activities as programs that should be added. 
We included programs in this chapter and appendix II based on information we received from agency officials who were asked to identify programs that addressed nonpoint source pollution meeting our criteria (e.g., programs that primarily focused on nonpoint source pollution or programs that spent at least $10 million a year addressing nonpoint source pollution regardless of program focus). We added information provided by USDA on the Wetlands Reserve Program and ARS’ Water Quality/Research, Development, and Information Program in appendix II. We did not include information on the Forestry Incentives Program because program and funding data were not provided. Interior’s Office of Surface Mining also commented on this chapter. The office said that while it did not disagree with the data presented, it could not verify the estimate of the percentage of resources going to nonpoint source pollution for the AML Program. The data we reported were obtained from the agency’s response to our survey on the program and subsequent information provided by the Division of Reclamation Support. We clarified this point by providing specific attribution to the information in the report. EPA indicated that the information in this chapter was generally accurate, but officials with the agency’s CWSRF Program questioned the nonpoint source pollution funding totals attributed to that program. The officials cited, in particular, the complexity of isolating the federal portion of the funds included in the program because these funds are commingled with state matching funds and funds from other sources. Supplemental information provided by these officials led to a revised estimate, which we incorporated in the report. The Clean Water Act requires EPA to report periodically to the Congress an estimate of the costs of carrying out the provisions of the act.
In addressing this requirement, EPA reported in 1997 that the nationwide cost of controlling selected sources of nonpoint source pollution would be $9.4 billion (in 1996 dollars). The estimate represents the capital costs that farmers and others might incur in applying best management practices and other measures to control runoff from agriculture, silviculture, and certain animal feeding operations. Although EPA’s study represents one of the few attempts to estimate control costs nationwide, EPA officials acknowledge that their methodology has several limitations. Specifically, the methodology (1) does not include some potentially significant nonpoint sources of pollution and (2) includes capital costs associated with best management practices to address nonpoint source pollution but does not include the potentially significant costs of operating and maintaining these practices in subsequent years. EPA officials told us they are considering an additional approach to estimate nonpoint source control needs. Of particular note, the officials said that they are considering whether to develop a “watershed-based approach” that could better take into account the unique characteristics of individual watersheds. Such an approach would likely provide a more realistic estimate of the nation’s nonpoint source pollution control needs. The officials noted, however, that resource shortages were constraining the effort. Under the Clean Water Act, EPA is required to report to the Congress every 2 years on the estimated cost of carrying out the provisions of the act. Historically, EPA’s report, known as the Clean Water Needs Survey, has focused on estimating the costs of construction, or capital costs, of all needed publicly owned treatment works (e.g., wastewater treatment plants), which are funded under the CWSRF. However, as reported in chapter 2, with increased emphasis on nonpoint source pollution, states are able to use CWSRF funds for nonpoint source control projects.
As a result, EPA also began estimating the capital costs associated with controlling several types of nonpoint sources of pollution. According to EPA, the report, in addition to informing the Congress on water project needs, can help the states and EPA plan how they will attain and maintain Clean Water Act goals by giving them a comprehensive picture of the projects and other activities necessary to meet water quality standards. To estimate wastewater treatment needs, EPA has relied on the states to document their capital needs. Because few states had systematically documented their nonpoint source control needs, however, EPA had to develop a methodology for estimating the capital costs to control nonpoint source pollution nationwide. The methodology estimates (1) the number of possible nonpoint sources for three categories of sources—agriculture, silviculture, and animal feeding operations—and (2) the cost of applying best management practices to those sources. EPA estimated just the capital costs associated with these sources. The annual costs that might be required to operate and maintain the practices are not included. To estimate the cost of controlling soil erosion associated with agricultural activities, EPA used data from USDA’s 1992 National Resources Inventory database to identify agricultural lands within each state requiring erosion control. The database, which is compiled by USDA every 5 years, includes information on farming activity, soil erosion, and current soil conservation practices for a sample of acres within each state. On those agricultural lands requiring erosion control, EPA assumed best management practices would be applied to reduce erosion, with the least costly measure selected first. In addition to the best management practices, EPA assumed that farmers would develop water quality management plans to help them manage the application of fertilizers and pesticides that can also run off and cause water quality problems.
The capital costs associated with applying both the conservation measures and developing the water quality management plans were aggregated by state, and a nationwide cost estimate was calculated. Nationwide costs for controlling agricultural nonpoint pollution were estimated to be $3.8 billion in 1996. Similarly, to model the needs for silviculture, EPA estimated the capital costs associated with applying best management practices on harvested sites on privately owned forest lands in the United States using data from USDA’s 1992 Forestry Resources of the United States. Federal lands were not considered because these lands are not eligible for funding under CWSRF. EPA used information from its 1992 economic analysis of the Coastal Zone Act Reauthorization Amendments of 1990 (CZARA) to identify best management practices that could be applied to forest lands. These practices included controlling erosion from timber access roads, stabilizing streambanks near harvest sites, and ensuring re-vegetation of harvested sites. The capital costs associated with implementing the best management practices were aggregated by state, and a nationwide estimate was derived by adding the state values. Overall, EPA estimated that the capital costs associated with controlling runoff from silvicultural activities on private forest lands nationwide would be about $3.5 billion in 1996. To model the needs associated with controlling animal waste runoff from animal feeding operations, EPA estimated the number of operations in each state using data from USDA’s 1992 Census of Agriculture. EPA assumed that each feeding operation would require a nonpoint source management plan for reducing contaminated runoff, and that none of the existing feedlots had any best management control practices already in place. 
The estimated cost of developing the nonpoint source management plan and the cost of implementing best management practices to reduce runoff represent the cost of controlling nonpoint source pollution at these sites. Overall, EPA estimated that the cost of controlling runoff from these feeding operations nationwide was about $2.1 billion in 1996. As depicted in table 3.1, EPA’s estimate of $9.4 billion for controlling nonpoint source pollution represents the sum of the costs for the three categories of nonpoint sources. The 1996 estimate represents a slight decrease from the 1992 estimate of $10 billion, primarily reflecting, according to EPA, a decline in the number of animal feeding operations. EPA officials acknowledge that their methodology has several limitations, including the omission of (1) the cost of controlling runoff associated with other potentially significant sources of nonpoint source pollution such as abandoned mines and (2) the cost of operating and maintaining the best management practices implemented to control pollution. In addition, the methodology does not assess and disclose a range of uncertainty associated with its single-point control cost estimate, and does not include sufficient documentation of its cost-estimation methodology so that reviewers could compare its underlying assumptions and data with published sources (and thereby more easily assess the reasonableness of its results). As EPA acknowledges in its 1996 Clean Water Needs Survey report, the methodology considers only selected sources of nonpoint source pollution—agriculture, silviculture, and animal feeding operations. Many other sources of nonpoint pollution contribute to water pollution and therefore may require some controls in order to meet Clean Water Act goals. These sources include abandoned mines, atmospheric deposition, hydromodification, and marinas and urban areas not required to have a stormwater permit. 
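As a quick arithmetic check, the $9.4 billion figure in table 3.1 is simply the sum of the three category estimates; a minimal sketch using the rounded figures from the report:

```python
# EPA's 1996 capital-cost estimates by nonpoint source category,
# in billions of dollars, as reported in the Needs Survey.
estimates = {
    "agriculture": 3.8,
    "silviculture": 3.5,
    "animal feeding operations": 2.1,
}

total = sum(estimates.values())
print(f"National estimate: ${total:.1f} billion")  # matches the $9.4 billion reported
```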
In addition, federally authorized activities on federal lands such as silvicultural operations are not included since they are not eligible for CWSRF funds. As a result, only a portion of the total costs that would be associated with controlling nonpoint source pollution nationwide are included. Other studies indicate that runoff from other sources can be significant. For example, in its 1994 analysis of President Clinton’s Clean Water Initiative, EPA estimated that there were 15,000 to 50,000 abandoned mine sites on federal lands causing water quality problems. The estimated cost to remediate these sites ranged from $330 million to $1.1 billion per year, in 1993 dollars ($354 million to $1.2 billion in inflation-adjusted 1996 dollars). Furthermore, data aggregated by the Office of Surface Mining from state estimates show that abandoned mines on private lands would cost a total of an additional $2.6 billion to reclaim. EPA officials stated that other categories of nonpoint sources were not included because of a lack of nationwide information. EPA also acknowledged that its methodology does not account for the annual operating and maintenance (O&M) costs that farmers and others might incur in implementing best management practices and other management measures to control erosion. As a result, only a portion of the total cost that might be associated with implementing best management practices is accounted for. In developing cost estimates for controlling runoff from croplands, for example, EPA assumed that farmers would develop water quality management plans to help them manage the application of fertilizers on their fields. The capital costs farmers would incur to develop these plans are included in EPA’s cost estimate. However, farmers might also incur annual costs such as those associated with testing the soil to determine whether they are meeting the goals of the management plan. 
EPA has omitted operating and maintenance costs because the Needs Survey has historically been focused on projects that can be funded under CWSRF, and O&M costs are not eligible for these funds. However, EPA officials acknowledge that they are not limited to including just capital costs in their report, and that accounting for O&M would (1) provide a more complete picture of the nation’s needs for controlling nonpoint source pollution and (2) make the Needs Survey a more useful tool for EPA and the states in planning how they will attain and maintain Clean Water Act goals. EPA officials told us that they will allow states to report nonpoint source control O&M costs, but that the Needs Survey will continue to report only the capital costs eligible for CWSRF funding. In developing the cost estimates, EPA did not fully assess the uncertainty that is associated with the underlying assumptions and data used in the analysis. Accordingly, EPA’s 1996 Clean Water Needs Survey report presents the control costs for each source category as single point estimates. Such a presentation, however, implies a level of precision that may not be warranted given the limited information behind the data and assumptions. EPA officials acknowledge that the $9.4 billion cost estimate is subject to a range of uncertainty although they did not calculate it. In other studies, EPA has assessed uncertainty and presented its estimates as a range of values. For example, in its 1992 economic assessment of management measures developed in accordance with the CZARA, EPA estimated that the cost of controlling nonpoint source pollution in coastal areas throughout the United States would range from about $390 million to $591 million per year, in 1992 dollars (about $449 million to $681 million in 1996 inflation-adjusted dollars). 
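The dollar-year conversions in these comparisons follow a standard deflator calculation. The sketch below back-solves an illustrative index from value pairs stated in the report; these ratios are assumptions for illustration, not an official price series:

```python
# Illustrative cumulative deflators to 1996 dollars, back-solved from
# value pairs stated in the report (e.g., $330 million in 1993 dollars
# is given as $354 million in 1996 dollars). These are assumptions for
# illustration, not an official price index.
TO_1996 = {1992: 449 / 390, 1993: 354 / 330}

def to_1996_dollars(amount_millions: float, base_year: int) -> float:
    """Scale an amount (in millions) from base_year dollars to 1996 dollars."""
    return amount_millions * TO_1996[base_year]

low = to_1996_dollars(390, 1992)
high = to_1996_dollars(591, 1992)
print(f"CZARA range: ${low:.0f}M to ${high:.0f}M in 1996 dollars")
```

With these back-solved ratios, the upper bound comes out at about $680 million, within rounding of the $681 million reported.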
In addition, in its 1994 economic assessment of President Clinton’s 1994 Clean Water Initiative, EPA estimated that the costs associated with implementing nonpoint management programs on agricultural lands across the United States would range from about $595 million to $985 million per year, in 1993 dollars (from about $638 million to $1.1 billion in 1996 inflation-adjusted dollars). We found it difficult to thoroughly evaluate EPA’s methodology because it did not fully document the key assumptions and data used in its analysis. Consequently, we were unable to compare these assumptions and data with published sources to assess their reasonableness. For example, to estimate the cost of erosion control on cropland acres, EPA used estimates of the cost of applying various soil conservation practices. According to EPA officials, the cost data were obtained from USDA’s Fiscal Year Statistical Summaries (1989-1995). Without documentation, however, we could not verify that the data were obtained from the publications cited, or whether they are reasonable in comparison to other published sources. Addressing the limitations mentioned previously can improve EPA’s cost estimation methodology and resulting cost estimate, but the agency is also considering an additional approach that would take into account the unique characteristics of individual watersheds. Agency officials indicated, however, that the added cost of this “watershed-based approach” could constrain such an effort. A USDA official involved in similar work suggests that improved coordination between EPA and this agency could help advance EPA’s effort. EPA’s current methodology relies primarily on data collected on a countywide or statewide basis—data that were collected along political boundaries rather than watershed boundaries. 
The practical effect of this limitation is that the effects of the unique characteristics of individual watersheds are not taken into account in estimating either pollution levels or the costs of controlling them. For example, to estimate nonpoint source runoff from croplands, EPA used information on soil erosion and productivity to estimate soil runoff from croplands within each state. However, this may not accurately represent the soil that actually enters a waterbody because it measures soil runoff only to the edge of the farm field, and not whether a water quality problem exists. The extent to which soil runoff actually enters a body of water and impairs water quality can vary across watersheds, depending on factors like the proximity of land use activities to a waterbody, soil type, slope, the duration and intensity of rainfall, vegetative cover, and the environmental sensitivity of the water resource. EPA’s methodology does not take these factors into account and essentially results in estimating costs to apply best management practices to agricultural activities that result in soil runoff, rather than to activities that explicitly affect water quality. In contrast, a watershed-based approach allows the consideration of unique characteristics of watersheds that influence the extent to which runoff from a field or other source enters a waterbody or underlying aquifer and impairs water quality. According to EPA, such an approach can also develop information that can help states plan more cost-effective water pollution control strategies. In its 1996 Clean Water Needs Survey report to the Congress, EPA stated that reporting needs on a watershed basis would enable states “to assess both the point and nonpoint pollution sources in the watershed, and to address them in the most cost-effective way.” EPA officials told us that a significant barrier impeding the use of a watershed-based approach is the additional resources the approach would require.
The officials said that developing a watershed-based model to estimate nonpoint source pollution costs could cost about $750,000, compared with the $25,000 it costs to update and run the existing model. Research activities underway at other agencies, however, could facilitate EPA’s effort. Researchers at USDA’s Natural Resources Conservation Service have developed a nationwide, watershed-based methodology to assist decisionmakers in identifying priority watersheds for water quality protection from agricultural nonpoint source pollution. Using primarily the National Resources Inventory database and factors such as precipitation and agricultural chemical use, the researchers assessed the potential for these contaminants to leach into an underlying aquifer or run off into a body of water. Those watersheds having a high potential for a combination of pollution sources (e.g., chemical and soil loss) were identified as candidates for conservation programs to reduce nonpoint source runoff. Although the methodology does not assess whether the runoff enters a body of water and impairs water quality, it goes further than EPA’s current methodology toward linking sources of nonpoint source runoff and water quality impairments by identifying those watersheds that are most vulnerable to water pollution. In addition, the research suggests that a more cost-effective reduction in nonpoint source pollution could be achieved by targeting public investments on conservation measures in specific high-priority watersheds. Researchers at the U.S. Geological Survey (USGS) developed a different watershed-based approach. Their methodology statistically correlates water quality conditions to possible sources—point sources, applied fertilizers, livestock waste, runoff from nonagricultural land, and atmospheric deposition of nitrogen—and watershed attributes that affect contaminant transport (such as soil permeability and precipitation). 
This approach allows for prediction of contaminant concentrations at specific locations, as well as characterization of regional water quality. USGS has used its approach to model nitrogen and phosphorus transport, and is finalizing results of an application that assessed the most cost-effective approach to applying controls to point and nonpoint sources to reduce nitrogen and phosphorus loadings in coastal areas. The USGS model could be useful for EPA’s purposes in that it would allow for the development of nonpoint source control cost estimates that focus on sources that are linked to water quality problems. Our contacts with researchers at USDA and USGS suggest that a watershed-based methodology would likely yield a more realistic estimate of nonpoint source control costs than one based on EPA’s current methodology. An official at USDA asserted that EPA’s efforts could benefit from watershed-based modeling research at USDA and other agencies. EPA officials indicated that they were not aware of the efforts at USDA and USGS but, in discussions with us, agreed that it would be useful to learn more about these efforts. As noted in this chapter, a number of improvements can and should be made to EPA’s methodology for estimating the cost of controlling nonpoint source pollution in order to increase its comprehensiveness and to ensure that its process and results can be reviewed and understood. In addition, EPA’s consideration of another cost-estimation strategy that relies on a “watershed-based approach” has the potential to provide a more realistic cost estimate. Such an approach also has the potential to serve as a tool for identifying and prioritizing watersheds most likely to have water quality problems and where resources could be applied most cost-effectively to reduce nonpoint source pollution. It is unclear whether EPA will pursue this approach in its next Needs Survey report, given the resources that would be required to do so.
However, working with USDA and USGS could provide lessons learned, data sources, and modeling approaches that would help shift EPA’s nonpoint source pollution control cost-estimation methodology in this constructive direction. To improve EPA’s approach toward estimating the cost of controlling nonpoint source pollution, we recommend that the Administrator of EPA direct the Office of Water to address key limitations in its approach and presentation of the methodology and its results by (1) including the costs of operating and maintaining best management practices, (2) assessing and disclosing the range of uncertainty associated with its control cost estimate, (3) more fully documenting its cost-estimation methodology, and (4) working with researchers at USDA and USGS to obtain lessons learned, data sources, and modeling approaches to help advance EPA’s own efforts to develop a watershed-based cost-estimation approach. EPA acknowledged that our assessment of the cost-estimation methodology is factually accurate, but disagreed with the recommendation in our draft that operation and maintenance costs for nonpoint source pollution be included in the next Needs Survey report to be issued in 2000. Specifically, the agency said that including this information would represent a major change in the scope of the report as required by section 516(b)(1)(B) of the Clean Water Act, which requires EPA to report on the costs of construction of all publicly owned treatment works in each of the states. For this reason, EPA officials said that reporting operating and maintenance information might be more appropriate in another report. Our concern was with the information being developed, rather than with the specific vehicle in which it would be reported. Therefore, we have modified the recommendation to emphasize that this information be developed, regardless of its reporting mechanism.
EPA did not respond directly to the other recommendations that the agency assess and disclose the range of uncertainty associated with its control cost estimate, more fully document its cost-estimation methodology, and work with researchers at USDA and USGS to advance its efforts to develop a watershed-based cost-estimation approach. On the last of these recommendations, EPA asked us to clarify that it was not considering the watershed-based approach as a replacement for existing cost-estimation activities that it believes must continue for a number of reasons, but rather as a supplement to these activities. We added language to clarify EPA’s position on this matter. USDA’s Agricultural Research Service shares the concern expressed in our draft report that EPA’s estimated cost of controlling nonpoint sources of pollution does not include the operational costs associated with the use of best management practices. The Service is also supportive of the recommendation to use a watershed-based approach in estimating the cost of controlling nonpoint source pollution, noting that agency research has established that the protection provided by natural barriers, such as riparian zones, is watershed specific. In addition, the Service pointed out that the effectiveness of using certain practices to control the movement of potential contaminants can be markedly affected by site-specific conditions within watersheds. USGS’ comments elaborated on our findings regarding the issue of uncertainty in nonpoint source control cost estimates, providing specific examples of possible uncertainty. USGS said that uncertainty exists for many contaminants because they have not yet been tested for controls and, therefore, control strategies for addressing them have not been developed.
In addition, USGS pointed out that some best management practices might be effective at controlling only certain contaminants and, therefore, some areas will require multiple controls to address nonpoint source pollution. Last, USGS noted that the implementation of some controls may cause new pollution problems that will also have to be addressed. USGS also said that it would be pleased to work with EPA and USDA to provide insights regarding watershed-based modeling of nonpoint source contamination and estimating costs for mitigating contamination. Federal agencies manage, authorize, or issue permits or licenses for a variety of activities that provide public benefit but may also contribute to nonpoint source pollution. Federal and state officials that we contacted identified five of these activities as those with the most potential to contribute significantly to nonpoint source pollution: silviculture (specifically timber harvesting and associated roads), grazing, drainage from abandoned mines, recreation, and hydromodification. Several other activities managed or authorized by federal agencies, such as farming and irrigation, were identified by state and federal officials as contributing to nonpoint source pollution in some watersheds but were not highlighted as significant concerns. The federal government owns about 20 percent of the land area in the lower 48 states, and this land is concentrated in the west. As a result, many western watersheds are dominated by federally owned land and the associated federally managed or authorized activities that may cause nonpoint source pollution. According to the nonpoint source program managers that we interviewed in five Western States, many water quality problems in their states result from one or more of these federal activities.
In pursuit of widely varying missions and legislative requirements, federal agencies manage, authorize, or issue permits or licenses for a variety of activities that provide public benefit such as recreation, timber harvesting, and livestock grazing. For example, the Forest Service (USFS) and the Bureau of Land Management (BLM) provide for timber harvesting and livestock grazing on their lands as well as for recreational opportunities. Figure 4.1 identifies which federal agencies included in our review manage or authorize the activities identified by state and federal officials as being the nonpoint sources of most concern. Silviculture includes the management and care of forests, such as timber harvesting, road construction, replanting, and chemical treatments. As figure 4.2 shows, the Forest Service owns most of the federal timberland suitable for timber harvesting. According to the federal and state officials we interviewed, the majority of nonpoint source pollution from silvicultural activity results from roads constructed for timber removal, although timber harvesting and the transportation of logs from a harvest area can also contribute significantly to water pollution. Other silvicultural practices such as site preparation, prescribed burning, and chemical applications were not cited by state or federal officials as significant sources of nonpoint pollution overall. Timber harvesting can be a significant source of nonpoint pollution. However, USFS officials emphasized that the timber harvest itself is typically a less significant cause of nonpoint source pollution than associated activities required to transport logs from the harvest site, such as hauling logs along trails known as skid trails. The movement of logs from the harvest site typically involves the use of heavy equipment, such as tractors, to haul logs along skid trails to landings where they can be loaded onto trucks.
The use of heavy equipment and skidding of logs compacts the soil and can severely disturb land surfaces. Rain falling on these areas tends to run off the surface, allowing sediment to flow more easily into streams. USFS is the dominant federal agency involved in timber harvesting. However, timber harvesting on USFS lands has declined significantly over the past decade, from 12.7 billion board feet in fiscal year 1987 to 3.3 billion board feet in fiscal year 1998, a decline of over 70 percent. Accordingly, associated activities such as the use of skid trails have also declined. BLM is the only other agency with a significant level of timber harvesting, with 239 million board feet in fiscal year 1997. The amount of nonpoint source pollution generated by timber operations varies considerably depending on (1) site-specific conditions, such as the stability of the soil and the slope of the land where the harvest occurs, and (2) management decisions, such as the choice of log transport method, which is a key determinant of the amount of ground disturbance that will be caused by the operation. Forest Service research shows that nonpoint pollution generally results from a timber harvest when there is a large amount of surface disturbance on steep slopes or when riparian vegetation is removed or modified. For example, clear-cutting on steep slopes in the Pacific Northwest has led to significant increases in the number of landslides that deposit large amounts of sediment. In addition, the manager of the nonpoint source unit in Oregon told us that past timber harvesting operations in the state have resulted in removal of riparian vegetation and consequent reduction of streamside shade, which causes elevated stream temperatures that are considered harmful to some fish species.
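The percentage declines the report quotes for timber harvesting, and for clear-cut acreage elsewhere in this section, follow directly from the underlying figures; a short arithmetic check:

```python
def pct_decline(old: float, new: float) -> float:
    """Percent decline from an earlier value to a later one."""
    return 100 * (old - new) / old

# Timber harvested on Forest Service lands (billions of board feet),
# fiscal year 1987 vs. fiscal year 1998 -- quoted as "over 70 percent."
harvest = pct_decline(12.7, 3.3)

# Clear-cut acreage on Forest Service lands, fiscal year 1993 vs.
# fiscal year 1997 -- quoted as "about 65 percent."
clearcut = pct_decline(132_674, 45_854)

print(f"harvest decline: {harvest:.0f}%; clear-cutting decline: {clearcut:.0f}%")
```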
Recognizing the need to reduce soil erosion and other nonpoint source impacts resulting from silvicultural activities, the Forest Service and BLM have moved away from the use of clear-cutting as a harvest method. For example, clear-cutting on Forest Service lands has declined significantly in the past 5 years, from 132,674 acres in fiscal year 1993 to 45,854 acres in fiscal year 1997, a decline of about 65 percent. In addition, Forest Service and BLM timber contracts are to include requirements to implement best management practices, appropriate to the conditions of the site being harvested, to reduce water quality impacts. For example, a contract may require that skid trails and landings be designed to minimize erosion or that the lifting of logs from the harvest area occur via helicopter when slopes are steep. Forest Service officials were confident that existing requirements regarding management practices would, if followed, reduce nonpoint source pollution. However, the Forest Service does not systematically aggregate data regarding the implementation of the requirements. Harvesting timber often requires the construction of numerous miles of forest roads to move heavy equipment into the harvest areas and up and down hillsides. The Forest Service has inventoried about 373,000 miles of roads on Forest Service lands. BLM has inventoried almost 75,000 miles of roads on its lands, though the majority of BLM roads were constructed for commercial use other than forest products such as for oil and gas, mineral, and grazing activities. About 14,000 miles of BLM roads have been constructed in Oregon and Washington where 85 percent of BLM-authorized timber harvesting occurs. Forest Service and BLM officials noted that few new roads have been constructed in recent years, and little new construction is planned. 
The officials also pointed out that there are many other uses for which forest roads stay open after a harvest is completed, and the majority of traffic on forest roads is from these other uses. Officials from both the Forest Service and BLM told us that, overall, roads are among the two most serious threats to water quality on lands they manage. According to Forest Service officials and scientific literature, roads are considered to be the major source of erosion from forested lands, contributing up to 90 percent of the total sediment production from forestry operations. Historically, forest road construction standards were not focused on reducing the potential for erosion and associated water quality impacts. Poorly designed and sited roads can change natural stream flowpaths, which leads to incision, or cutting away, of previously unchanneled portions of the landscape and increased erosion. Roads also concentrate stormwater runoff on road surfaces of exposed and often-compacted soil, and may channel flow into adjacent ditches, where eroded sediment from hillsides and roadbeds can be more easily transported to streams. We observed such channel incision and erosion on Forest Service land in Arizona. (See fig. 4.3.) Sediment from roads can contribute to water quality problems. For example, we recently reported that forest roads were one of several sources of sediment that led to turbidity exceedances in drinking water and the shutdown of several drinking water systems during an unusually heavy storm in western Oregon. Scientific literature shows that aquatic habitat and fish populations can also be adversely affected. Mass erosion resulting from roads can lead to the filling of stream pools, which causes them to support fewer fish and may increase fish mortality. In addition, fine sediment can fill crevices in stream gravel that would otherwise serve to protect juvenile fish and provide spawning grounds.
Forest Service and BLM officials told us that they have attempted to begin minimizing impacts from roads—within current budget constraints and priorities. For example, the Forest Service and BLM have formal management guidance specifying several engineering practices that may reduce the impacts of roads on water quality. These practices include halting timber operations in wet weather; constructing drainage ditches, culverts, and other structures for controlling erosion; inspecting and maintaining roads during and after winter storms; and creating stream-side buffers to minimize water quality impacts. Figure 4.4 shows a Forest Service road improvement project installed to change the way the road diverted stormwater runoff in order to reduce stream velocities and subsequent erosion. In addition, the Forest Service recently began developing a new roads policy. The three key objectives of this policy are to: (1) provide Forest Service managers with new scientific and analytical tools with which to make better decisions about when, where, and if new roads should be constructed; (2) decommission unnecessary and unused roads, as well as unplanned or unauthorized roads; and (3) improve forest roads where appropriate to respond to changing demands, local communities’ access needs, and the growing recreational use of Forest Service lands. One state official we interviewed expressed concern that the Forest Service will face significant challenges in closing roads, since signage and gates used to close them can be ignored by people wanting to use the roads for recreational purposes. The Forest Service already has significant problems with unauthorized vehicle use of forests. Repeated use has created over 60,000 miles of unauthorized roads throughout the National Forest System, in addition to the 373,000 miles of roads previously mentioned. Figure 4.5 shows examples of unauthorized roads, which can also accelerate erosion and can contribute sediment to nearby waterbodies. 
As figure 4.6 shows, BLM and USFS own most of the federal land available for grazing. Officials from both BLM and the Forest Service said that livestock grazing is among the two most significant contributors of nonpoint source pollution on lands they manage. The state officials we talked with also expressed concerns regarding nonpoint pollution resulting from grazing on public lands. In Oregon, for example, the manager of the nonpoint source unit told us that federally authorized grazing contributes to the degradation of about 30 percent of all impaired waters in the state. Grazing can result in nonpoint pollution in several ways. Continuous grazing can lead to a reduction of vegetation that would otherwise serve to protect soil surfaces from the erosive impact of rain. Livestock may also strip vegetation from bushes and shrubs, destabilizing root structures and loosening soils, making the soils more vulnerable to runoff during a major storm event. Grazing in riparian areas, which are located in and alongside streams, can lead to a loss of vegetation that would otherwise serve to filter sediment in the streamflow, stabilize streambanks, and provide shade that moderates stream temperatures to levels tolerable for aquatic species. Continuous grazing also leads to trampling of surfaces, causing soil compaction. This reduces rainfall infiltration and in turn leads to increased runoff. Trampling can also cause streambanks to slump and erode, resulting in direct deposit of streamside soil into waterbodies. In addition, direct deposits of manure can occur when animals graze near waterbodies and can lead to fecal coliform and pathogen contamination. Figure 4.7 shows a streambank that is beginning to erode due to loss of vegetation through grazing, and a healthy riparian area where grazing has been excluded. Livestock grazing is not the only source of grazing impacts, however.
Wildlife, such as elk and deer, graze federal lands and can cause significant impacts such as loss of vegetation and fecal coliform contamination in some places. According to Arizona officials, uncontrolled populations of wildlife are among the state’s most serious threats to water quality. BLM officials acknowledge that grazing causes damage to the riparian stream environment. They note that almost three-quarters of the agency’s nearly 40,000 miles of riparian stream environment in the lower 48 states have been assessed to determine ecological condition. Of these assessed stream miles, BLM reported that 14 percent, or almost 4,000 miles, are “non-functional” or do not provide adequate vegetation to slow streamflows that would otherwise cause significant erosion. Another 45 percent of the stream miles are classified as “functional—at risk” and most are declining or have no apparent condition trend. BLM officials added, however, that the precise impact of grazing on the riparian environment is difficult to isolate from that of other sources. State and federal officials told us that while impacts from current grazing are significant in some areas, the impacts vary considerably depending on several factors, including soil and vegetation type in forage areas, the duration and intensity of grazing, and management practices implemented to mitigate nonpoint source impacts. Proper management of grazing lands can often reduce or minimize nonpoint pollution from grazing. However, the officials we talked with said that federal efforts to actively manage grazing are often limited by insufficient staff and resources. In addition to the effects of present-day grazing, many watersheds throughout the west have not fully recovered from the heavy grazing that occurred on public lands around the turn of the century. 
Officials from California, Colorado, and Oregon said that past heavy grazing, such as occurred in the late 1800s in each of these states, has had long-term, dramatic effects in many watersheds.

Abandoned mines are those that have been deserted or left inadequately restored. Federal agencies have identified almost 100,000 abandoned mine sites on federal land across the country, though federal inventories do not use consistent definitions of "site." Because of these varying definitions, a site may range in size from a small exploratory hole or single shaft to a large area encompassing numerous shafts and large open pits. (See fig. 4.8.) Abandoned mines on federal land are primarily hardrock mines and occur almost exclusively on lands managed by BLM and the Forest Service. To date, 70,000 abandoned mines have been inventoried on BLM lands, 39,000 on Forest Service lands, 2,500 on National Park Service lands, and 240 on National Wildlife Refuges.

Mining disturbs rock surfaces and generates piles of waste rock and mine tailings, exposing minerals in the rock to air and water and accelerating natural rates of oxidation. The oxidation of sulfide minerals, such as pyrite (iron sulfide), generates strong acids, which can drain or run off with stormwater into streams. Acidic conditions in streams can have severe consequences for aquatic life by interfering with biological processes such as reproduction. For example, a Park Service study found that many aquatic species that once existed in major portions of the Cumberland River in Kentucky now exist only as isolated remnant populations, possibly because of acid drainage from abandoned coal mines. Acids from mine drainage can also dissolve metals, such as copper, zinc, manganese, and aluminum, that can be carried into surface waters in toxic concentrations. High concentrations of metals in surface waters can threaten ecological health.
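The report does not spell out the chemistry, but the acid-generating step it describes is the standard overall pyrite oxidation reaction (a textbook sketch, not a figure from the report):

```latex
% Overall oxidation of pyrite (iron sulfide) by oxygen in water:
% the hydrogen ions produced acidify runoff, and the dissolved
% iron can be carried into streams along with other metals.
\[
  2\,\mathrm{FeS_2} + 7\,\mathrm{O_2} + 2\,\mathrm{H_2O}
  \;\longrightarrow\;
  2\,\mathrm{Fe^{2+}} + 4\,\mathrm{SO_4^{2-}} + 4\,\mathrm{H^{+}}
\]
```

The hydrogen ions account for the strong acidity of mine drainage, and the same acidic conditions dissolve the metals discussed above.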
According to a Forest Service official, a few livestock have died after ingesting selenium while grazing in areas contaminated by drainage from abandoned mines on National Forest lands in Idaho. In addition, plant growth has been severely disrupted by acid mine drainage from the abandoned McLaren and Glengary gold and copper mines on the Custer and Gallatin National Forests in Montana. This loss of natural vegetation leaves soils vulnerable to the erosive impact of rain, which can increase the amount of sediment running off into waterbodies. Officials we interviewed from each of the five states identified abandoned mines as significant contributors to nonpoint source pollution. In Colorado, for example, the manager of the nonpoint source unit estimated that almost 50 percent of water impairments in the state are adversely affected by acid drainage from abandoned mines. Many of these mines occur on federal lands. Several federal agencies have programs to reclaim abandoned mine sites and thereby reduce nonpoint source pollution from acid mine drainage. For example, in 1997, the Forest Service obligated about $10 million for hazardous waste projects that were targeted mostly to abandoned mine land reclamation. In 1998, BLM obligated about $3 million toward abandoned mine reclamation in Colorado, Montana, and Utah.

Officials from four of the states that we contacted, as well as from the Forest Service, the Park Service, and the Fish and Wildlife Service, expressed concerns regarding nonpoint source pollution from recreation. Recreational use of public lands and waters is widespread and increasing steadily. For example, in the past 10 years, recreational use of the National Forests has increased 40 percent. Figure 4.9 shows recreational use of federal lands, by agency, in fiscal year 1997. Many recreational activities can result in direct deposits of pollutants, such as human and pet waste, into waterbodies.
This waste may contain disease-producing bacteria and viruses and poses a potential health risk for people exposed to the water. Arizona and Oregon state officials noted that river recreation, such as tubing, kayaking, and swimming, and unauthorized dumping of sewage from boats and motor homes can cause high levels of fecal coliform in surface water. Oil and gas spills from motor boats and other recreational vehicles are also possible sources of nonpoint pollution.

Use of vehicles on public lands and roads can also cause significant erosion. As noted previously, forest roads are often left open after harvesting for other purposes, such as recreational use. Forest Service research has shown that increased vehicle use increases erosion from forest roads. An estimated 1.7 million vehicles associated with recreational activities travel forest roads each day, more than 10 times the number in 1950. In addition, land disturbances caused by the use of off-road vehicles can also lead to increased erosion. One BLM official told us that in extreme cases, off-road vehicle use through stream environments can create roadbeds that divert channel flows from streams onto the road surface.

State officials told us that recreational activities tend to cause water quality impairments when the activity is highly concentrated in a given area. For example, during the summer of 1998, 25,000 people assembled in a small area of the Apache-Sitgreaves National Forest in Arizona, causing severe land disturbances and increased erosion, as well as unusually high fecal coliform levels in otherwise pristine forest streams. In addition, state officials said that concentrations of campers along streambanks can destroy vegetation in riparian areas, in turn causing sediment and temperature impacts to waterbodies. With few exceptions, federal agencies do not have specific guidance or policies for dealing with recreation and its associated water quality impacts.
The Park Service has a policy dealing with recreational boating and marinas and their associated nonpoint sources. Some agencies perform assessments and develop solutions on a case-by-case basis once problems are identified. For example, the Park Service has recently closed some parks to off-road vehicle and jet ski use to reduce water quality problems. Likewise, BLM has designated specific off-road vehicle use areas in an attempt to confine the damaging activity to small areas. However, a Forest Service research scientist told us that little federal research is available on the water quality impacts of recreation to help guide such decisions or to develop strategies for dealing with recreational impacts.

EPA's National Water Quality Inventory: 1996 Report to Congress identifies hydromodification activities, such as channelization and the construction and operation of dams, as contributing to the degradation of 14 percent of the nation's impaired river and stream miles. Three of the five states we contacted identified hydromodification as a significant concern, and each of the federal agencies that manage and authorize these activities (the Bureau of Reclamation, the Army Corps of Engineers, and the Federal Energy Regulatory Commission, or FERC) acknowledged that hydromodification may contribute to nonpoint source pollution in some areas. Hydromodification projects often provide important public benefits, such as water supply for arid regions, electric power generation, and flood protection. For example, in 1992, the Bureau estimated that its projects provided cumulative flood control benefits of $8.4 billion in prevented damages during the period 1950 through 1992. However, state officials we interviewed noted that existing dams and channelization projects also contribute significantly to water quality impairments and can limit the extent to which streams recover from water quality degradation.
EPA defines channelization as river and stream channel engineering undertaken for flood control, navigation, drainage improvement, or the clearing away of debris. It also includes measures that reduce a channel's migration potential, such as straightening, widening, deepening, or relocating existing channels. Levees, another form of channelization, are embankments or shaped mounds built for flood control or hurricane protection; they protect floodplain property without modifying the channel itself. The Corps manages about 8,500 miles of levees nationwide but does not maintain an inventory of the total number of channelization projects.

Federal channelization projects, managed predominantly by the Corps, can contribute to nonpoint source pollution in several ways. For example, channel clearing operations remove vegetation that would otherwise act as a natural barrier, slowing water velocities and filtering sediment and other pollutants. As a result, these operations can cause increased downstream erosion and faster rates of pollutant transport. Channel enlargement projects include activities such as increasing channel depths while retaining the original bank slopes, which may cause streambanks to slump and erode, resulting in increased loadings of sediment. Levees located close to streambanks can prevent the movement of instream waters into adjacent wetlands and riparian areas; this can increase in-stream pollutant loadings because the natural filtration that would normally occur is prevented.

Channelization projects have caused significant declines in the quality of some watersheds. For example, state officials in Oregon reported that nonpoint source pollution caused by channelization projects conducted for flood control from the 1920s through the 1950s has contributed significantly to the decline of watershed functioning in the state.
The Corps and the Bureau of Reclamation operate over 900 dams and reservoirs for multiple purposes, such as municipal and industrial water supply, flood control, recreation, and irrigation, and operate 133 hydroelectric facilities for power generation. The Bureau and the Corps are the two largest suppliers of hydroelectric power in the nation, providing about 42 billion and 75 billion kilowatt hours, respectively, and together account for almost 40 percent of total hydroelectric kilowatt hours produced. In addition, the Federal Energy Regulatory Commission regulates about 1,750 nonfederal hydropower facilities, which generate about 154.5 billion kilowatt hours annually.

Dam and reservoir projects vary in size, type, and operating purposes and affect water quality in many different ways. Some impacts are specific to a particular type or purpose of project, while others may occur regardless of type or purpose. For example, in some cases deep reservoirs stratify by temperature, producing a cold, deep layer with low dissolved oxygen and high concentrations of some dissolved elements, such as iron, manganese, sulfur, and nitrogen. Releases from deep reservoirs can have significant temperature impacts on receiving waters; federal officials said that aquatic species can be adversely affected if dam releases draw water primarily from this lower layer. In addition, dams and reservoirs cause significant habitat modification problems for migrating aquatic species. For example, dams have contributed to declining salmon populations, some of which in the Northwest are now on the verge of extinction. Because reservoirs trap and accumulate sediment, waters released from reservoirs are often low in sediment, leaving them capable of carrying more sediment (i.e., increasing erosion) from the banks and beds of the stream immediately downstream from the reservoir.
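The generation shares cited at the start of this discussion can be checked with simple arithmetic; the nationwide total below is inferred from the "almost 40 percent" share and is not a figure reported in this chapter:

```python
# Rough check of the hydropower generation figures cited above.
# All inputs are the report's numbers; implied_total is inferred
# from the "almost 40 percent" federal share, not reported.
bureau_kwh = 42e9                     # Bureau of Reclamation, ~42 billion kWh
corps_kwh = 75e9                      # Army Corps of Engineers, ~75 billion kWh
federal_kwh = bureau_kwh + corps_kwh  # 117 billion kWh combined
implied_total = federal_kwh / 0.40    # implies ~292.5 billion kWh nationwide
ferc_nonfederal_kwh = 154.5e9         # FERC-licensed nonfederal output
print(federal_kwh / 1e9, round(implied_total / 1e9, 1))
# prints: 117.0 292.5
```

On these figures, the two federal agencies and FERC-licensed facilities together account for well over 90 percent of the implied national total.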
Peaking operations of dams, with their rapid increases in flow rates, may accelerate downstream erosion. In other instances, however, dam releases may contain high levels of sediment, which can accumulate downstream as it settles out. Bureau officials told us that downstream movement of suspended sediment during extreme reservoir drawdown periods has been documented at several reservoirs, including Island Park, American Falls, and Black Canyon in Idaho, and Thief Valley in Oregon. The impact of individual dam and reservoir projects varies significantly, depending on the type and purpose of the project, the streamflow and sediment characteristics of the parent streams, and the management practices applied at a given site. Bureau and Corps officials told us that best management practices can be used to minimize the avoidable effects of dams on water quality. For example, older dams can be retrofitted with systems that mix water from different depths before release to minimize the thermal and dissolved oxygen impacts of stratified, deep reservoirs.

FERC also plays a role in federal nonpoint pollution by issuing licenses to nonfederal entities to construct and/or operate hydropower projects. As required by the National Environmental Policy Act, FERC must (1) prepare an environmental assessment or an environmental impact statement for any license or relicensing application and (2) describe the effects of the project on several environmental factors, including water quality. In reviewing licensing or relicensing applications, FERC must weigh environmental impacts equally with the other purposes of the project. FERC can include provisions in licenses to mitigate impacts, such as requirements to conduct regular water quality monitoring, to construct fish ladders to facilitate migration, or to prepare a plan to control erosion.
Several other activities managed or authorized by federal agencies were identified by state and federal officials as contributing to nonpoint source pollution in some watersheds but were not cited as significant sources of overall concern. These activities include silvicultural practices other than timber harvesting and forest roads; farming; irrigation; federal-aid highways and roads; and military training.

Silvicultural practices other than timber harvesting and forest roads primarily include site preparation, prescribed burning, and the application of chemicals such as herbicides. While none of the state officials we interviewed identified these practices as concerns or cited them as causes of impaired waters in their states, Forest Service officials told us that they can contribute to problems in some cases. Site preparation includes activities to help tree stands regenerate; stands are either left to regenerate on their own or are planted. Planting can involve mechanical site preparation techniques that use heavy equipment, such as tractors, to rake the soil, which can severely disturb land surfaces and cause erosion. However, according to Forest Service officials, the use of mechanical site preparation methods is declining as the Service increasingly relies on natural regeneration. Prescribed burning and chemical applications, which are used to maintain forest health, can also contribute to nonpoint pollution if not properly managed. For example, when a prescribed burn gets out of control, the resulting intense fire may completely burn the forest floor, exposing mineral soil and accelerating erosion in steep terrain. Chemicals such as herbicides may pose a risk to water quality if applied without adequate buffers or if they drift during aerial application. However, each of these activities is rare on federal lands.
The Forest Service dedicated about 1.2 million acres to prescribed burn management (less than 2 percent of total timberland) and chemically treated about 300,000 acres in fiscal year 1997.

While farming-related activity is cited as the source of a large portion of the nation's nonpoint source pollution, it is a minor contributor on federal lands. The Fish and Wildlife Service, the Park Service, and the Department of Defense reported authorizing farming on small portions of the lands they manage. For example, the Fish and Wildlife Service permits farming on 166,000 acres within the National Wildlife Refuge System, less than 1 percent of the system's total acreage. Several state officials expressed some concern regarding nonpoint source pollution resulting from federally authorized farming; however, they told us that the impacts are not a major concern because the activity is relatively rare, especially in comparison with private farming.

The Bureau and the Corps both provide water resources for private farming, primarily through the construction and operation of canals, laterals, and drains. The Bureau operates about 15,900 miles of canals, 37,000 miles of laterals, and 17,000 miles of drains to convey water for irrigation and flood control. In 1992, the Bureau provided irrigation water to private farms covering more than 9.2 million acres of western land. According to Bureau officials, return flows and runoff from irrigated lands may transport nonpoint source pollutants such as sediment, nutrients, metals, and pathogens into waterbodies. Irrigation projects also contribute to salinity problems in western waters. Corps officials told us that the agency does not maintain a centralized inventory of irrigation activity because it is a small part of the Corps' mission, but they noted that nonpoint pollution impacts resulting from the Corps' irrigation activity are likely to be minor.
Bureau officials told us that some Bureau-managed agricultural drains are significant sources of pollution to water-quality-limited waters throughout the West, including the Snake, Boise, Payette, and Yakima Rivers. Officials from the Fish and Wildlife Service told us that nonpoint pollution from selenium in irrigation return flows is among the most serious and pervasive irrigation impacts occurring on lands within the National Wildlife Refuge System. In some areas, contaminated drainwater has been linked to waterfowl deaths, birth defects, and reproductive failures. Interior has had an irrigation water quality program since 1985, which has largely focused on identifying and correcting contamination problems.

Roads, highways, and bridges funded with federal dollars may also result in nonpoint source pollution. Federal aid is provided to state and local governments to construct and maintain roads and highways; almost 1 million miles of highways and roads in the United States have been constructed or maintained with the aid of federal funds. While road construction can be a significant source of water pollution, most construction projects are regulated by EPA's stormwater permit requirements for construction sites and are therefore not discussed in this report. Once constructed, however, highways generate nonpoint pollution through stormwater runoff, which carries with it any pollutants that have accumulated on road surfaces, such as oil, grease, and de-icing compounds. The Department of Transportation has compiled research that provides guidance to state and local governments for mitigating the water quality impacts of roads, highways, and bridges. Best management practices to control this type of runoff include structures such as filters, trenches, and ponds designed to trap nonpoint source pollutants, minimizing the amount that actually reaches waterways.
However, because road and highway projects are decentralized and carried out mainly by state and local governments, the Department does not have nationwide data on the implementation of these management practices (although implementing such practices is typically a requirement for receiving federal aid).

The major sources of nonpoint pollution identified by Defense officials are associated with maneuver bases and training areas, especially the use of heavy vehicles and machinery, such as tanks, artillery pieces, and amphibious assault vehicles, as well as large-caliber firing ranges. These activities can result in significant land disturbances and subsequent erosion following large storms. Officials of the military services we talked with said that impacts do occur and that, in some cases, water quality standards have been violated. For example, Marine Corps staff have observed severely eroded roads and vehicle crossings over streams at Camp Lejeune in North Carolina and Quantico in Virginia. In addition, Army officials told us that erosion is a serious problem for many Army maneuver bases located on abandoned or degraded agricultural land where soils are highly erodible, especially eastern bases such as Fort Bragg, North Carolina.

Service officials said that minimizing nonpoint source impacts is in their best interest, both to avoid violations of state water quality standards and to enable them to continue their critical training missions. For example, while all of the military services expressed some concern about metals leaching from ammunition used on firing ranges, lead in stormwater runoff has rarely been documented. In response to one contaminated runoff incident, the Marine Corps built traps to collect bullets and prevent further leaching, even though water quality had not been impaired. The collected bullets can be recycled, which helps recover the cost of the traps.
In addition, as discussed in chapter 2, some nonpoint sources are addressed through Defense's stormwater permit activities by diverting nonpoint runoff and treating it as a point source.

The predominance of federal land ownership in many western watersheds suggests a potentially significant federal contribution to nonpoint source pollution in those areas. Overall, federal lands account for about 20 percent of the total land surface area in the lower 48 states. Most of this land is in 11 Western States: Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, and Wyoming. As indicated in figure 4.10, tracts of federal land can encompass large portions of many watersheds (shaded areas represent watersheds in which the federal government owns more than 50 percent of the land). Specifically, federal agencies own at least one-half of the land area in about 60 percent of the watersheds in these 11 states and in 22 percent of watersheds nationwide.

The nonpoint source program managers we contacted in five of the Western States reported many water quality problems resulting from one or more of the federal activities discussed in this chapter. In Oregon, for example, the manager of the nonpoint source program told us that nonpoint source pollution from federal activities is the primary source of impairment for 50 to 60 percent of the waterbodies the state reported as impaired. In Arizona, the nonpoint program manager said that federal activities are the primary source of impairment for almost 50 percent of all impaired waters in the state. Several state officials pointed out, however, that not all water quality impacts are due to current federal activities, citing past timber and grazing practices in particular as sources of continuing nonpoint pollution in their states.
Even in watersheds without significant federal land ownership or a significant federal contribution to nonpoint source pollution, federal agencies' control of nonpoint source pollution can demonstrate strong federal stewardship of lands held in the public trust and encourage similar stewardship by private landholders. EPA officials in the interagency Chesapeake Bay Program told us that even though federal agencies own just a small percentage of the land in the Bay watershed, the program has enjoyed broad federal involvement in restoration activities, which has helped to promote federal stewardship of public lands and set an example for private landholders. In November 1998, EPA and its federal partners announced a new commitment to this stewardship, recognizing the important role the agencies can play in the Bay watershed.

State environmental efforts can benefit from such stewardship, as the manager of the nonpoint source program in Oregon pointed out to us. He said that a weak federal commitment to addressing nonpoint pollution discourages private stewardship; on the other hand, strong federal stewardship of public lands can encourage private stewardship by demonstrating commitment and accomplishments. In addition, officials in each of the five states we contacted noted that they had good working relationships with several of the federal agencies discussed in this report and, in these instances, were working with their federal counterparts to address water quality impacts.

The Clean Water Action Plan acknowledges the importance of the federal contribution to nonpoint source pollution, outlining several key action items federal agencies are to implement in order to better protect water resources on federal land. Specifically, USDA and Interior are to lead the development of a unified federal policy to enhance watershed management on federal lands and provide for the protection of water quality and the health of aquatic systems.
In addition, federal agencies are to ensure that environmental safeguards and appropriate water quality provisions are included in permits, licenses, and other agreements used to allow activities to occur on their lands.

The Department of the Interior said that the draft report appeared to equate the magnitude of nonpoint source pollution with the amount of federally managed land involved. The Forest Service expressed a similar concern, noting that simply because a significant portion of the land base in many Western States is federally managed, it does not necessarily follow that these lands contribute a significant proportion of the nonpoint source pollution in these states. The Service suggested characterizing the federal contributions as "potential" rather than "actual." As discussed in chapter 4, information obtained from the states we contacted does in fact show that a significant proportion of water quality problems can be attributed, at least in part, to activities occurring on federal land. However, we acknowledge the variability in this relationship, noting that the degree of pollution in specific areas may depend on site-specific characteristics such as geographic and hydrologic conditions, the type and intensity of the activities occurring, and the management practices applied to minimize impacts. Accordingly, as suggested by the Forest Service, we modified language in chapter 4 where appropriate to characterize the association between a large portion of federally owned land and a significant contribution of nonpoint pollution as potential rather than actual.

On a related issue, USDA's Natural Resources Conservation Service said that chapter 4 leaves the impression that all grazing and timber activities cause nonpoint source pollution and suggested that the activities in this chapter be characterized as contributing to nonpoint source pollution only if not properly managed.
We agree that water quality impacts can be reduced, but not necessarily eliminated, by the use of appropriate management practices, and we discuss some of these practices in each of the activity sections. However, such practices may not always be in place. Moreover, as pointed out by federal and state officials, as well as by Forest Service research, and as included in our report, water quality impacts continue to result from past management practices, such as the heavy grazing that occurred in the late 1800s and certain timber harvesting practices.

FERC acknowledged that nonpoint source pollution-related impacts can result from FERC-licensed hydropower projects but cautioned that, in characterizing these impacts, the report should (1) carefully distinguish between the effects of hydropower and those of other forms of hydromodification; (2) distinguish between FERC-licensed projects and federally managed projects; and (3) recognize that hydropower is not an original source of some of the impacts identified but rather a factor that can amplify the effects of other sources that contribute nonpoint pollution. Regarding the first two points, while our draft did in fact recognize the distinctions identified by FERC, we made additional changes to add further clarification. Regarding the third point, we agree that in some instances hydropower is not technically the source of the pollution, although, as FERC points out, it may still be a contributor. In other instances, however (such as situations where changes in temperature or dissolved oxygen levels or increased downstream erosion result directly from a project's operations), we continue to believe that it is more appropriate to characterize the project as an original source of the pollution.
In addition to the Environmental Protection Agency (EPA) programs discussed in this report that primarily address nonpoint source pollution, a few other programs authorized by the Clean Water Act address nonpoint source pollution, but to a lesser extent. This appendix provides an overall description, funding levels, and allocation methods for these remaining programs.

Section 104(b)(3): National Wetlands Program ($620,000 obligated for nonpoint activities out of $70 million appropriated to the program for fiscal years 1994 through 1998.)

Overall Objective: The program's overall objective is to protect, manage, and restore the nation's wetland resources consistent with EPA's Clean Water Act responsibilities and to assist state, local, and tribal governments in developing effective wetland programs. According to EPA, a program objective is also to encourage and enable others to act effectively in protecting and restoring the nation's wetlands and associated ecosystems, including shallow open waters and free-flowing streams. EPA's activities predominantly involve establishing national standards and assisting others in meeting those standards.

Allocation Method: EPA uses a competitive process to allocate program funds to state, local, and tribal governments and to interstate and intertribal entities. EPA headquarters releases yearly guidance that describes the grant program and establishes program direction and priorities. EPA's regional offices review all proposals and select the projects that best help develop or refine wetland protection, management, or restoration programs.

Section 106: Water Pollution Control, State and Interstate Program Support ($2.3 million obligated for nonpoint activities out of $418.3 million appropriated to the program for fiscal years 1994 through 1998.)
Overall Objective: This program was created to assist states, territories, interstate agencies, and qualified Indian tribes in establishing and maintaining adequate measures for preventing and controlling surface and ground water pollution. Grant funds provide broad support for the prevention and abatement of surface and ground water pollution from point and nonpoint sources through activities such as water quality planning, standard setting, permitting, monitoring, assessment, and enforcement.

Allocation Method: EPA uses a formula to allocate program funds to states, interstate agencies, and tribes. Developed in 1974, the formula is based primarily on state population and four categories of point source pollution (municipal dischargers, industrial dischargers, feedlots of 1,000 head or greater, and power plants). EPA has proposed revising the formula to better reflect current water quality impairment.

Section 314: Clean Lakes Program ($950,000 obligated for nonpoint activities out of $5.06 million appropriated to the program for fiscal years 1994 through 1998.)

Overall Objective: The overall objective of this program is to provide financial and technical assistance to states to restore and protect publicly owned lakes and reservoirs. The program has evolved considerably over time. Its early focus was on research, the development of lake restoration techniques, and the evaluation of lake conditions. In the 1980s, attention shifted to identifying sources of pollution and developing plans to deal with water quality problems. EPA has not requested funds for this program in recent years because, in its May 1996 National Nonpoint Source Program guidance, the agency encouraged states to use section 319 moneys to fund eligible activities that might have been funded in previous years under section 314. About $16.6 million of section 319 funds have been used to perform lake and reservoir work.
Allocation Method: Under this program, EPA uses a formula, a competitive process, and other processes to allocate funds to states. EPA used a formula to allocate a portion of the appropriated section 314 funds to each of its regions, taking into account factors such as the number of states per region, the number of lakes and reservoirs, land use, and nonpoint pollution problems. Each region then awarded its portion of the funds on a competitive basis. In addition, the Congress may include funding for a specific lake project as a separate line item in the budget.

Section 320: National Estuary Program (EPA did not report nonpoint source-related obligations for this section, noting that the program does not specifically focus on nonpoint pollution and therefore does not track obligations in that way; total appropriated funding was $60.3 million for fiscal years 1994 through 1998.)

Overall Objective: The National Estuary Program’s overall objective is the attainment or maintenance of water quality in the nation’s estuaries to ensure protection of public water supplies and the protection and propagation of a balanced, indigenous population of shellfish, fish, and wildlife. The program is designed to encourage local communities to take responsibility for managing their estuaries by encouraging stakeholders, including federal, state, and local government agencies, citizens, business leaders, educators, and researchers, to (1) work together to identify problems in the estuary, (2) develop specific actions to address those problems, and (3) create and implement formal management plans.

Allocation Method: EPA recently revised its formula for allocating program funds to state and local governments, nonprofit organizations, and regional planning organizations. Initially, EPA created size distinctions and provided higher levels of funding for large estuary projects.
This size distinction was phased out in fiscal year 1998 because experience with older programs revealed that small estuaries can be just as complex as large estuaries, depending on such things as priority problems, the current state of knowledge of the estuary, and cultural diversity. In addition, EPA created a staged funding approach: programs developing a Comprehensive Conservation and Management Plan for the estuary received more funding than programs in plan implementation. Every year, EPA develops specific funding guidance that explains how funds will be allocated.

FY 1994-1998 obligations for nonpoint activities (total appropriated), shown as $obligated ($appropriated), followed by each program’s objective:

$232 ($530): To provide flexible technical, educational, and financial assistance to producers that face the most serious threats to soil, water, and related natural resources.

$80.83 ($585.41): To cooperate with state and local agencies in planning and carrying out work to improve soil conservation and for other purposes, such as flood prevention and the conservation, development, and utilization of water.

$21.68 ($94): To provide statistically valid information for agricultural and environmental program and policy development, implementation, and evaluation.

$3.89 ($40.7): To maintain soil and water resources in the 10 Great Plains States by installing corrective practices. Consolidated into the Environmental Quality Incentives Program (EQIP) in 1996.

$5.52 ($20.96): To reduce the amount of salt loading to the Colorado River from surface runoff and subsurface percolation of irrigation water that carries the salt in solution to the river. Consolidated into EQIP in 1996.

$218.6 ($549.8): To protect, restore, and enhance the functions and values of wetland ecosystems.

To remove certain incentives for persons to produce agricultural commodities on highly erodible land or converted wetland.

$1,710.89 ($8,700): To cost-effectively reduce water and wind erosion, protect the nation’s long-term capability to produce food and fiber, reduce sedimentation, improve water quality, create and enhance wildlife habitat, and encourage more permanent conservation practices and tree planting.

$12.29 ($369.65): To help prevent soil erosion and water pollution, protect and improve productive farm and ranch land, conserve water used in agriculture, preserve and develop wildlife habitat, and encourage energy conservation measures. Consolidated into EQIP in 1996.

$35.68 ($207.0): To rehabilitate farm land damaged by natural disaster and to carry out emergency water conservation measures during periods of severe drought.

$5.19 ($456.3): To increase the quantity and quality of science applied to the needs of agriculture and forestry.

$5.7 ($26.9): To provide educational and technical assistance programs for voluntary farmer adoption of improved management practices to enhance or protect water quality.

$2.46 ($20.38): To measure the impact of farming systems on water quality, identify processes that control fate and transport of chemicals and other contaminants, and determine social and economic impacts of alternative management systems.

$0.006 ($0): To address agricultural nonpoint source pollution problems in watersheds.

$11.30 ($69.46): To conduct long-term studies of the effects of natural events and land management activities on water quality, quantity, and timing to provide a scientific basis for land managers’ efforts to protect and restore watershed and riparian ecosystems.

$59.2 ($273.8): To measure the impact of farming/ranching practices and systems on water quality; identify processes that control fate and transport of chemical and other contaminants; develop cost-effective, alternative farming/ranching practices and systems for all nonpoint source contaminants, including salts, toxic trace elements, nutrients, pesticides, pathogens, and other waterborne diseases; and deliver technologies, models, decision support systems, and management information to enhance or protect water quality.

$24.36 ($97.87): To restore habitat for federal trust species through voluntary agreements with private landowners.

$0.86 ($5.58): To protect and enhance the quality of the habitat and environment on which fish and wildlife trust resources depend, and provide recommendations and support to state and other federal agencies in implementing management actions to resolve contaminant problems.

$1.4 ($9.5): To protect and enhance the quality of the habitat and environment on which fish and wildlife trust resources depend, and provide recommendations and support to refuge managers in implementing management actions to resolve contaminant problems.

$0 ($40): To install pumpout stations for the removal of sewage from boats with holding tanks and portable toilets and to educate boaters on the need for using pumpout and dump stations and where these facilities are located.

$0.30 ($0): To minimize injuries to Fish and Wildlife-managed resources.

$13.41 ($91.50): To provide for the protection of watershed values (such as soil stability) and air quality on the public lands; reduce salinity and runoff from the public lands to protect water quality; provide for the legal availability of water on public lands; provide information for public lands, watersheds, and air resources; and support BLM’s “Riparian Wetlands Initiative.”

$32.61 ($248): To manage public rangelands to ensure their long-term health, natural diversity, and productivity.

$9.88 ($73.58): To enhance riparian/aquatic habitat to improve water quality and to complete the proper functioning assessments of natural indicators and characteristics of riparian areas in the lower 48 states by implementing the “Clean Water and Watershed Restoration Initiative.”

$17.64 ($143.44): To manage the following types of resources (excludes forest management): recreation; wildlife habitat and fisheries; soil, water, and air; and rangeland. This program is a portion of a larger activity to manage resources on Oregon and California grant lands in western Oregon.

$54.58 ($300.81): To identify the status and trends in water quality conditions for major water resource areas (surface and groundwater) and the human and natural conditions that cause existing water quality conditions, and communicate findings to resource managers and policy makers.

$2.99 ($8.75): To provide a nationwide, long-term record of spatial and temporal trends in atmospheric deposition.

$128.09 ($695.85): To restore lands mined and abandoned or left inadequately reclaimed prior to Aug. 3, 1977, thereby protecting society and the environment from the adverse effects of surface coal mining operations.

$2.52 ($6.52): To clean streams and rivers polluted by acid and toxic drainage from abandoned coal mines.

$15.52 ($85.53): To prevent any further degradation of the Colorado River and limit damages.

$2.24 ($10.0): To protect and restore coastal waters and help states establish enforceable programs for comprehensively addressing the most significant sources of nonpoint pollution.

$5.15 ($229.1): To encourage states to manage their coastal land and water resources.

$20.34 ($95.12): To maintain and sustain training lands. These actions indirectly contribute towards preventing nonpoint source pollution.

Table notes: The Environmental Quality Incentives Program combines several of USDA’s conservation programs—the Agricultural Conservation Program (including Water Quality Incentives Projects), the Colorado River Basin Salinity Control Program, and the Great Plains Conservation Program. These programs received partial appropriated funding in fiscal year 1996 before being consolidated. In addition, some of these programs had outlays in later years in order to service prior-year contracts. The Environmental Quality Incentives Program and the Conservation Reserve Program do not receive appropriations; these programs are funded through the Commodity Credit Corporation. The Wetland Reserve Program began receiving funds through the Commodity Credit Corporation for fiscal year 1997. USDA did not provide dollar amounts for this program. Instead, USDA identified 4,720 full-time equivalents out of a total of 11,800 that could be considered as helping to reduce nonpoint source pollution. No funds were appropriated to this program during this period; funds used to address nonpoint pollution were entirely from full-time staff equivalents. DOD reported obligations for this program only for fiscal years 1996 through 1998. According to the Department, prior to this, the program was managed by a different office, and expenditures were not tracked in a way that allowed for separating funding obligated for nonpoint source-related activities.

The following are GAO’s comments on the Department of Agriculture’s (USDA) letter dated January 29, 1999.
Several of USDA’s services provided clarifications and technical points that were incorporated into the report as appropriate. The letter contains 21 points, on which we provide the following comments.

1. The Natural Resources Conservation Service (NRCS) said that the information in the executive summary indicating that USDA programs represent almost 80 percent of the funding identified for nonpoint source pollution is misleading because, as the draft points out later, its largest program, the Conservation Reserve Program, has no specific nonpoint source objectives. NRCS suggested that certain information in the body of the report be reflected in the executive summary to clarify that while activities under the program do in fact address nonpoint source pollution, nonpoint source pollution control is not a stated objective of the program. We have made these changes as suggested.

2. NRCS commented that an example in the draft report, in which Arizona officials reported that activities on federal lands contribute to 50 percent of the water quality problems in the state, provides no indication of the relative size of the federal contribution to these waters. This information was provided by state officials, who are required by the Clean Water Act to routinely assess their waters for water quality problems and identify contributing sources. While they do not quantify the contribution of individual sources to impaired waters, Arizona officials did indicate that federal activities were the “primary” source of 50 percent of the water quality problems in the state. We have added this distinction to the report.

3. NRCS requested that we revise the language in the draft to clarify that water quality is not the sole purpose of funding for EQIP and the Conservation Reserve Program, noting that environmental benefits can include water quality but may not include this benefit in some locations. We have clarified the report where appropriate.
However, we asked agencies to report on programs that, in their opinion, helped address nonpoint source pollution. By including programs in this report, we are not suggesting that all the programs focused exclusively on nonpoint source pollution. We recognize that some programs simply help reduce nonpoint source pollution through the implementation of other program objectives.

4. NRCS suggested that we add an item to our graphic depicting possible sources of nonpoint source pollution in a watershed showing “all vehicle traffic” as an additional possible source. We agree that vehicle traffic is another possible source of nonpoint pollution; however, our graphic was not intended to include every pollution source.

5. See comment 1.

6. NRCS commented that to say that all funds for the EQIP program went to nonpoint source may be “stretching it, since some areas do not have enough rainfall to have runoff or be a source.” We reported that 100 percent of EQIP funding addressed nonpoint source pollution based on information from the agency. The rationale provided by the agency in response to our questionnaire noted that “EQIP is intended to solely address nonpoint source pollution from farms and ranches.” In addition, we discussed the issue of the percentage of program funds targeted to addressing nonpoint source pollution several times with agency officials to be sure that the 100-percent figure was appropriate. Moreover, one conservation official addressed the issue of lack of rainfall by pointing out that such areas will either (1) not be capable of producing crops and, therefore, not be eligible for funding or (2) be irrigated, making runoff a possibility.

7. NRCS commented that EQIP should not be characterized as a nonpoint source pollution-reduction program. As discussed in comment 6, we reported information on the program based on information the agency provided in response to our questionnaire.
To avoid any confusion, we have revised the text in the report to reflect language in the final rule, as suggested by the Service.

8. The draft did not include the two programs cited in this comment, the Wetlands Reserve Program and the Forestry Incentives Program, because agency officials initially indicated that neither program met our criteria for inclusion. We included information on the Wetlands Reserve Program provided later by USDA in appendix II; however, no program and funding data were provided for the other program.

9. NRCS commented that the section heading, “Federal Activities That Contribute Significantly to Nonpoint Source Pollution,” leaves the impression that all activities cause nonpoint source pollution. NRCS suggested that the heading be reworded to reflect that activities contribute when not properly managed, and remove the word “significant.” We agree that water quality impacts can be minimized by the use of appropriate management practices and discuss some of these practices in each of the activity sections. However, such practices may not always be in place. We have revised the heading to acknowledge that all the activities do not necessarily contribute to nonpoint source pollution, but rather “have the most potential” to contribute. We have left the reference to “significant” contributions because this section discusses the activities that federal and state officials identified as those with the potential to be the most significant contributors.

10. NRCS questioned the example that “30 percent of all impaired waters in the state of Oregon are due to grazing.” We reported that “federally authorized grazing contributes to the degradation of about 30 percent of all impaired waters in the state.” This information was obtained from the state nonpoint source pollution program manager based on the state’s list of impaired waters.
As discussed in comment 2, states routinely assess their waters for water quality problems and identify the sources contributing to the problems, as required by the Clean Water Act, but do not quantify the contribution of individual sources.

11. NRCS commented that two of the programs included in the draft did not address nonpoint source pollution, nor was it a collateral benefit of the programs. As discussed in comment 8, we included information provided by the respective agency program officials. Regarding the National Resource Inventory, the agency said that the program addressed nonpoint source pollution because it collects data on agriculturally related natural resource elements that can be used to provide some measure of nonpoint source pollution rates. For the Watershed Protection and Flood Prevention Program, the agency said that, among other objectives, the program is intended to improve or enhance water quality and quantity and that “about 975 watershed projects have a significant impact on nonpoint source pollution.”

12. The Agricultural Research Service (ARS) commented that we did not address the adequacy of scientific understanding of nonpoint source pollution. Such an analysis was outside the scope of this review.

13. ARS also commented that there was inconsistency in the types of programs addressing nonpoint source pollution identified in our report. See comments 8 and 11 for information regarding how we identified programs for inclusion in the report.

14. We have added information on ARS’ Water Quality/Research, Development, Information Program, as requested.

15. The Forest Service suggested that the relationship between the magnitude of federal lands and the proportion of nonpoint source pollution should be conditioned in terms of potential rather than actual effects, noting that management practices intended to minimize nonpoint source pollution are prescribed for all Forest Service projects.
As discussed in chapter 4, information obtained from the states we contacted does in fact show that a significant amount of water quality problems can be attributed, at least in part, to activities occurring on federal land. However, we acknowledge the variability in this relationship, noting that the degree of pollution in specific areas may depend on site-specific characteristics such as geographic and hydrologic conditions, the type of activities occurring and intensity of use, and the management practices applied to minimize impacts. Accordingly, as suggested by the Forest Service, we modified language in this chapter where appropriate to characterize the association between a large portion of federally owned land and a significant amount of nonpoint pollution as potential rather than actual.

16. As an additional point, the Forest Service provided data to show that silvicultural activity is occurring on just a small part of national forest lands. We did include information regarding the decline of silvicultural activities in the report; however, Forest Service research has shown that pollution from harvest sites may continue for decades after a harvest has been completed. In addition, silviculture is just one of many activities occurring on Forest Service land that may lead to nonpoint source pollution. While federal agencies are implementing practices to minimize water quality impacts from current activities, agencies must also deal with impacts resulting from past activities and practices. In several sections of chapter 4, we acknowledge that past practices contribute to water quality impacts.

17. The Forest Service commented that it devotes more resources to addressing nonpoint source pollution than is reflected in the one program included in our report, the Watershed Research Program. The Service said that the control of nonpoint source pollution is the responsibility of each resource program manager.
While the Service did not provide cost estimates for these activities, we have noted this comment in the report.

18. The Cooperative State Research, Education, and Extension Service commented that we did not discuss the research needs associated with nonpoint source pollution. Assessing the adequacy of funding for nonpoint source pollution research was outside the scope of this review.

19. The Extension Service encouraged coordination among EPA and USDA agencies with regard to watershed-based modeling research, but noted that NRCS was the only agency we discussed in the report. We agree that all relevant agencies in USDA should coordinate research on nonpoint source pollution modeling to avoid duplication and help move scientific understanding of the problem forward as efficiently as possible. We included NRCS in our report because it was one of the few federal agencies that had developed a nationwide model relevant to our evaluation of EPA’s nonpoint source control modeling approach.

20. The Extension Service suggested that we examine biases in the states’ evaluation of surface water quality problems. Such an analysis was outside the scope of this review.

21. The Extension Service also made some observations on, and criticisms of, the Clean Water Action Plan and how it can be used as a means to further address nonpoint source pollution issues. We provided factual information about the Clean Water Action Plan since several of its components address nonpoint source pollution, in particular funding increases for several of the programs included in our report. However, an analytical evaluation of the Action Plan (including the assumptions made regarding the current understanding of water quality problems and associated research and monitoring needs) was beyond the scope of this review.

The following are GAO’s comments on the Federal Energy Regulatory Commission’s (FERC) comments on our draft report.
The Commission agreed with the report’s major conclusions but raised three concerns regarding how hydropower is characterized in the report. The Commission also made several clarifications and technical points that were incorporated into the report as appropriate. Our comments on the Commission’s three major concerns follow.

1. FERC expressed concern that a lay reader would misconstrue the word “hydromodification” or think that the term is interchangeable with “hydropower.” We believe we have properly defined hydromodification to make it clear that hydropower is just one example of hydromodification activities. In each instance where we introduce the term hydromodification, we refer to the major categories of hydromodification: channelization, and dams and reservoirs. In addition, we provide explanations of the types of projects included in each of the categories. For example, in the Results in Brief, we describe hydromodification as “building and operating dams, or modifying rivers for flood control and other purposes.” Similarly, in the first paragraph of the hydromodification section, we describe hydromodification activities as “channelization and the construction and operation of dams.” Later, in the subsection on dams and reservoirs, we describe such structures as being “multipurpose, such as providing municipal and industrial water supply, flood control, recreation, irrigation, and power generation.”

2.
FERC believes that we have misrepresented hydropower as a nonpoint source of pollution, stating that “hydropower is not a nonpoint source of pollutants, but rather an activity that can positively or negatively affect the impacts of pollutants introduced by nonpoint sources.” However, as described in an EPA technical document regarding management measures for sources of nonpoint pollution, dams (which can be constructed for many purposes, including flood control, power generation, irrigation, and municipal water supply) “can generate a variety of types of nonpoint source pollution in surface waters.” Examples of such pollution discussed in our report include increased downstream erosion and changes in water temperature and dissolved oxygen levels that may harm aquatic life. FERC acknowledges in its comments that hydropower projects do have these negative effects. Therefore, in these instances, we believe it is appropriate to portray hydropower as an original source of nonpoint pollution. However, we acknowledge that most of our examples regarding the impacts of hydromodification are hydropower examples and may have overemphasized the negative impacts of hydropower in this section. We have revised the text to recognize that the impacts discussed may result from any of the types of hydromodification, not just hydropower projects.

3. The Commission commented that the draft does not distinguish between federally operated projects and Commission-licensed projects, which are generally smaller and, therefore, should not be represented as having the same environmental impacts. The draft did, in fact, distinguish between Commission-licensed projects and federally operated projects, noting the number of projects of each type and, in particular, the environmental requirements to which the nonfederal projects are subject.
Moreover, while we acknowledge FERC’s point about the relatively smaller size of FERC-licensed projects (.09 billion kilowatt-hours per year versus .9 billion kilowatt-hours per year for federally operated projects), we would point out that there are considerably more of these smaller projects nationwide: 1,750 FERC-regulated projects versus 133 federally operated projects. Beyond this distinction, however, we would add that in many respects the types of impacts described apply generically to dam and reservoir operations, regardless of whether a project is FERC-licensed or federally operated, or whether its primary purpose is a use other than hydropower. In addition, as with the other sources of nonpoint pollution, the extent of the potential impact varies significantly with site-specific characteristics and the management practices employed at the project.

The following are GAO’s comments on the Department of the Interior’s letter dated January 26, 1999. Additional specific comments were provided by the individual services and bureaus within Interior and have been addressed as appropriate. Many of these specific issues are also discussed at the end of chapters 2, 3, and 4. Our comments on the Department’s two major concerns follow.

1. Interior expressed concern that the draft report appeared to equate the magnitude of nonpoint source pollution to the amount of federally managed land involved. As discussed in chapter 4, information obtained from the states that we contacted does in fact show that a significant proportion of water quality problems can be attributed, at least in part, to activities occurring on federal land. However, we acknowledge the variability in this relationship, noting that the degree of pollution in specific areas may depend on site-specific characteristics such as geographic and hydrologic conditions, the type of activities occurring and intensity of use, and the management practices applied to minimize impacts.
Accordingly, where appropriate, we modified language in this chapter to characterize the contribution to nonpoint source pollution from federal lands as potential rather than actual.

2. Interior also pointed out that federal land managers are working diligently to develop and implement new land management practices that will conserve our natural resources and reduce the impacts of the activities they conduct or permit on water resources. We agree that water quality impacts can be minimized by the use of appropriate management practices and discuss some of these practices in each of the activity sections. However, such practices may not always be in place. Moreover, as pointed out by federal and state officials, as well as by Forest Service research, water quality impacts continue to result from past management practices, such as the type of heavy grazing that occurred in the late 1800s and past timber harvesting methods.

The following are GAO’s comments on the Department of Commerce’s letter dated February 2, 1999. The Department provided a few technical clarifications, which were incorporated into the report as appropriate. Our comments on the Department’s two concerns follow.

1. Report modified as suggested.

2. The Department commented that in appendix II we did not have complete data for the Coastal Nonpoint Pollution Control Program. Commerce clarified that additional program funding, $1 million, was provided by EPA for fiscal year 1998. We have added the additional funding data and its source.

Jennifer Clayborne
Michael Daulton
Steve Elstein
Tim Guinane
Karen Keegan
Patricia Macauley McClure

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted.
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the impacts of nonpoint source water pollution and the potential costs of dealing with the problem, focusing on: (1) funding levels for federal programs that primarily address nonpoint source pollution; (2) the way the Environmental Protection Agency (EPA) assesses the overall potential costs of reducing nonpoint pollution nationwide and alternative methods for doing so; and (3) nonpoint source pollution from federal facilities, lands, and activities that federal agencies manage or authorize, or for which they issue permits or licenses. GAO noted that: (1) the federal agencies GAO contacted reported spending about $3 billion annually for fiscal years 1994 through 1998 on 35 programs that they identified as addressing nonpoint source pollution; (2) some deal directly with nonpoint source pollution; others focus on different objectives but still address the problem; (3) while EPA is the primary agency involved in water quality issues given its role under the Clean Water Act, many other federal agencies have programs addressing nonpoint source pollution and, in some cases, devote a significant amount of resources to the problem; (4) in particular, the Department of Agriculture's (USDA) programs account for over $11 billion of all federal funding identified by these agencies; (5) USDA officials explain that while most of the programs identified by the agency do not have specific nonpoint source pollution objectives, the programs' activities nonetheless help to reduce nonpoint source pollution; (6) EPA has estimated the annual costs of controlling three major sources of nonpoint source pollution to be $9.4 billion, an amount that represents one of the few systematic attempts at estimating such costs nationwide; (7) specifically, EPA's methodology to produce the estimate analyzes agriculture, silviculture, and animal feeding operations and estimates pollution-control costs for these sources; (8) EPA
acknowledges that the methodology has several limitations; (9) GAO also found that the methodology does not assess and disclose the considerable range of uncertainty associated with EPA's control cost estimate and that it includes insufficient documentation of its cost-estimation methodology; (10) EPA officials told GAO that the agency is considering an additional cost-estimation methodology, a watershed-based approach, that could provide a substantially more realistic estimate by taking into account the unique characteristics of individual watersheds; (11) the federal government manages or authorizes a variety of activities that result in nonpoint source pollution and, in some cases, affect water quality; and (12) the following five activities have been identified as those with the most potential to contribute significantly to nonpoint source pollution: (a) silviculture; (b) grazing; (c) drainage from abandoned mines; (d) recreation; and (e) hydromodification.
Congress created government-sponsored enterprises (GSEs) to help make credit available to certain sectors of the economy, such as housing and agriculture, in which the private market was perceived as not effectively meeting credit needs. GSEs receive benefits from their federal charters that help them fulfill their missions. Freddie Mac and Fannie Mae (the housing enterprises) have federal charters granting each of them explicit benefits, which include (1) exemption from registering their securities with the Securities and Exchange Commission (SEC), (2) exemption from state and local corporate income taxes, and (3) use of the Federal Reserve as a transfer agent. Farmer Mac is subject to SEC registration requirements, but it uses the Federal Reserve as a transfer agent, and Farmer Mac officials told us that it is exempt from state income taxes in most states. The most important benefit that all three enterprises receive is an implicit one stemming from investors’ perception that the federal government would not allow the enterprises to default on their obligations. Due to this perception, investors do not demand yields on investments in enterprise debt and mortgage-backed securities that are as high as those on comparable financial instruments issued by corporations without government sponsorship. One result of government sponsorship, therefore, is lower debt costs than those of similar corporations without such sponsorship. Freddie Mac and Fannie Mae were chartered by Congress to enhance the availability of residential mortgage credit across the nation. The housing enterprises accomplish this mission by purchasing residential mortgages from lenders. The housing enterprises retain some of the mortgages they purchase in their own portfolios, but a majority of the mortgages are pooled into mortgage-backed securities (MBS) that are sold to investors in the secondary residential mortgage market. 
As of December 1996, Freddie Mac had about $463 billion in MBS obligations and $156 billion in debt obligations outstanding. The corresponding figures for Fannie Mae were about $548 billion and $331 billion, respectively. Therefore, combined MBS and debt obligations of these housing enterprises totaled about $1.5 trillion. Farmer Mac was chartered by Congress to enhance the availability of agricultural mortgage credit across the nation. Farmer Mac is making efforts to establish a secondary mortgage market for agricultural mortgages along the lines the housing enterprises have established for residential mortgages. Farmer Mac issues, and guarantees payment on, agricultural mortgage-backed securities (AMBS). One type of AMBS, called Farmer Mac I securities, is backed by agricultural mortgages not containing federally provided primary mortgage insurance. The other type of AMBS, called Farmer Mac II securities, is backed by agricultural mortgages containing primary mortgage insurance provided by the Department of Agriculture. Farmer Mac is a small financial institution in comparison to the housing enterprises. As of December 31, 1996, Farmer Mac had about $642 million in AMBS (of which about $226 million were owned by others, and about $416 million were held by Farmer Mac) and about $546 million in debt obligations outstanding. Therefore, combined AMBS owned by others and debt obligations of Farmer Mac totaled about $772 million. The housing enterprises pass along, at least in part, the benefits they receive from government sponsorship, such as lower debt costs, to residential borrowers. In a previous study, we estimated that interest rates on single-family, fixed-rate, conforming mortgages were probably 15 to 35 basis points lower than they would have been without government sponsorship of the enterprises. 
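The combined totals above follow from simple addition, and the basis-point estimate translates directly into annual dollar savings. The sketch below checks the reported figures; the $100,000 loan balance used to illustrate the rate reduction is hypothetical, not a figure from this report:

```python
# Check the combined obligations reported in the text (as of year-end 1996).
# Housing enterprise figures are in billions of dollars.
freddie_mbs, freddie_debt = 463, 156
fannie_mbs, fannie_debt = 548, 331
housing_total = freddie_mbs + freddie_debt + fannie_mbs + fannie_debt
print(f"Housing enterprises combined: ${housing_total} billion")  # ~$1.5 trillion

# Farmer Mac figures are in millions; only AMBS owned by outside
# investors count toward the combined total reported in the text.
ambs_owned_by_others, farmer_debt = 226, 546
print(f"Farmer Mac combined: ${ambs_owned_by_others + farmer_debt} million")

# Illustrative effect of the 15- to 35-basis-point rate reduction on a
# hypothetical $100,000 conforming mortgage (1 basis point = 0.01 percent).
balance = 100_000
for bps in (15, 35):
    print(f"{bps} basis points -> about ${balance * bps / 10_000:,.0f} "
          "less interest per year")
```

At the reported scale of roughly $1.5 trillion in combined obligations, even a spread of a few basis points is economically significant, which is why the pass-through of the funding advantage matters for mission oversight.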
Limiting the activities of the housing enterprises primarily to funding conforming residential mortgages helps create a mechanism for the benefits they receive, such as lower debt costs, to be passed through to borrowers. Such limitations are consistent with the special purpose charters imposed by Congress. Congress gave the Department of Housing and Urban Development (HUD) general regulatory authority over the housing enterprises so that HUD can ensure that the missions of these enterprises as stated in their respective charter acts are being fulfilled. HUD also has regulatory authority to approve new mortgage programs proposed by the housing enterprises. In consideration of the potential risks to taxpayers from an enterprise default on its financial obligations, Congress created safety and soundness regulators for the enterprises. HUD’s Office of Federal Housing Enterprise Oversight (OFHEO) is the safety and soundness regulator of the housing enterprises. The Farm Credit Administration (FCA), through its Office of Secondary Market Oversight (OSMO), has regulatory responsibility with respect to Farmer Mac, including specific authority over safety and soundness matters. We reviewed the enterprises’ charters and relevant statutes to examine the enterprises’ legal authority for making nonmortgage investments and regulatory oversight of that activity. We obtained and analyzed publicly available and proprietary information on the enterprises’ investment policies, practices, and justification of those policies and practices to examine the relationship between nonmortgage investment policies and practices and missions. We reviewed literature on the role of the housing enterprises in the residential mortgage market to examine the extent to which the enterprises have undertaken nonmortgage investments for arbitrage profits. 
We also interviewed officials at the enterprises, HUD, OFHEO, and FCA; and we received written responses to questions submitted to the Department of the Treasury. We obtained and analyzed information the enterprises considered to be proprietary that included information packages prepared for board members of the enterprises; detailed information on nonmortgage investments, their yields, and maturity; yield and other characteristics of enterprise debt issued to fund the nonmortgage investments; and compensation policies for senior officers and board members. We do not report specific details of the enterprises’ investment policies and practices or compensation policies because of the proprietary nature of such enterprise information. Our interviews with officials at OFHEO and FCA on their regulatory oversight of nonmortgage investments included discussion of proprietary information relied upon by the regulators in making their safety and soundness determinations regarding nonmortgage investments. We did not verify their findings leading to the safety and soundness determinations. Generally, the financial practices that the housing enterprises used to limit the interest rate and credit risks of their nonmortgage investments were fairly straightforward. From the data we collected at the housing enterprises and interviews with housing enterprise and OFHEO officials, we obtained a general understanding of OFHEO’s determinations. In contrast, the financial practices that Farmer Mac used to limit the interest rate risk of its nonagricultural-mortgage investments were not as straightforward and were not fully captured by the specific data we collected from Farmer Mac. Therefore, we were not able to obtain as complete an understanding of FCA’s determinations. We obtained written comments on a draft of this report from each of the three enterprises, HUD, OFHEO, FCA, and Treasury. 
Their comments are discussed near the end of this report and are reprinted in appendixes III through IX. We conducted our work in Washington, D.C., from April 1997 through October 1997 in accordance with generally accepted government auditing standards. The charters of all three enterprises provide them with broad investment powers. OFHEO has clear authority to regulate investments by the housing enterprises if such investments pose a safety and soundness concern. HUD has general regulatory authority over the housing enterprises and is charged with making such rules and regulations as shall be necessary and proper to ensure that the purposes of the respective charter acts are accomplished. In addition to general regulatory authority, HUD also has authority to approve new mortgage programs that could contain nonmortgage investment components. FCA, through OSMO, has safety and soundness and general regulatory authority with respect to Farmer Mac. For example, the charter acts of the housing enterprises empower each enterprise “to enter into and perform contracts, leases . . . or other transactions, on such terms as it may deem appropriate . . . to lease, purchase, or acquire any property, real, personal, or mixed . . . and to sell, for cash or credit, lease, or otherwise dispose of the same, at such time and in such manner as and to the extent that it may deem necessary or appropriate, . . . and to do all things as are necessary or incidental to the proper management of its affairs and the proper conduct of its business.” The Farmer Mac charter act empowers it to, among other things, “ . . . purchase or sell any securities or obligations . . . necessary and convenient to the business of the Corporation.” One general rule of law is that a corporation’s powers can be no broader than the purposes for which the corporation is organized. This rule is particularly relevant where, as in the case of the enterprises, the corporation is organized for special, as opposed to general, purposes. 
Thus, even though the enterprises have broad investment powers, the exercise of those powers should not be unrelated to the accomplishment of the special purposes for which the enterprises were created. Under general corporate law, this relationship has been described as the logical relation of the activity to the corporate purpose expressed in the charter. OFHEO, as safety and soundness regulator, is charged with ensuring that the housing enterprises are adequately capitalized and operate safely and in accordance with the Federal Housing Enterprises Financial Safety and Soundness Act of 1992 (the 1992 Act). OFHEO has regulatory and enforcement authority, without the review or approval of HUD, with respect to matters generally related to enterprise safety and soundness and to a few specific matters, including certain capital distributions and executive compensation at the enterprises. Therefore, OFHEO has authority to supervise an enterprise investment that affects the enterprise’s safety and soundness without consultation with HUD. Actions taken by OFHEO with respect to other matters not specified in the 1992 Act as exclusive to OFHEO are subject to the review and approval of the Secretary of HUD. FCA, through OSMO, has regulatory responsibility for Farmer Mac. Among other things, OSMO is responsible for ensuring that Farmer Mac holds adequate capital for the activities it performs and operates in a safe and sound manner. OSMO is also responsible for supervising the safety and soundness of Farmer Mac’s program and investment activities. OFHEO has concluded that each housing enterprise’s nonmortgage investment policies and practices have not constituted a safety and soundness concern. 
Its conclusion was largely based on how each enterprise matched the maturities (and related characteristics) of its debt obligations used to finance its nonmortgage investments with those investments and the high credit standards and generally short maturities of the nonmortgage investments. As of April 1997, OSMO concluded that Farmer Mac’s nonagricultural-mortgage investment activities had not raised a safety and soundness concern. OSMO found that neither the size of Farmer Mac’s investment portfolio, relative to the statutory capital requirement, nor the composition of the portfolio was unsafe or unsound. Although OFHEO and FCA have examined safety and soundness implications of nonmortgage investments, HUD and FCA told us that prior to mid-April 1997 they had not focused on nonmortgage investment policies and practices in carrying out their general regulatory authority with respect to the enterprises’ charter missions. The scope of HUD’s general regulatory authority as it relates to nonmortgage investments is not clearly defined in statute. However, as discussed later in this report (see p. 11), HUD has initiated action to determine how it should implement this authority. FCA has general regulatory authority that would allow oversight of Farmer Mac’s investment activities. However, FCA said it has no activities under way that are expected to culminate in regulation of Farmer Mac’s investments. Section 1321 of the 1992 Act provides that except for the specific powers granted OFHEO, HUD has “general regulatory power” over each housing enterprise. HUD also is charged with making “such rules and regulations as shall be necessary and proper to ensure” that the provisions of the 1992 Act concerning new mortgage programs and housing goals and the purposes of the respective charter acts are accomplished. The scope of HUD’s authority under this section is not defined. 
With respect to investments, the statute does not set forth any criteria other than the charter acts themselves as a basis for HUD’s exercise of its general regulatory power and rulemaking authority. As discussed previously, the charter acts provide Fannie Mae and Freddie Mac with broad authority to make investments. This raises a question about the extent to which HUD has authority to regulate nonmortgage investments by the housing enterprises. “It is the intent of the committee that the regulatory powers of the Secretary will not extend to (the enterprise’s) internal affairs, such as personnel, salary, and other usual corporate matters, except where the exercise of such powers is necessary to protect the financial interests of the Federal Government or as otherwise necessary to assure that the purposes of the (charter act) are carried out.” Fannie Mae asserted that its investment practices are internal corporate affairs subject to its broad discretion. Thus, according to the enterprise, the above-quoted legislative history and other Congressional statements indicate Congress’ intention that HUD should not exercise its general regulatory authority with respect to Fannie Mae’s investment activities except in the “extreme situation” where those activities endanger its statutory mission. It is unclear that Congress intended to limit HUD’s authority with respect to nonmortgage investments, particularly in light of the special purposes of the housing enterprise charters and the broad statutory language establishing the Secretary’s general regulatory power and rulemaking authority. But even if, as Fannie Mae contends, nonmortgage investments are usual corporate matters, HUD could take regulatory action, such as requiring reports of nonmortgage investment activities, in cases where HUD appropriately determines the action is necessary to ensure the accomplishment of the enterprises’ charter acts. 
Since April 1997, HUD has been evaluating the scope of its authority with respect to the mission-relatedness of enterprise investments. HUD officials said they are considering a range of possible regulatory standards for enterprise investments that could be appropriate and within the scope of HUD’s statutory authority. At one end of the range being considered is a narrower standard based on an enterprise activity being reasonably related to the enterprise’s mission; at the other end is a broader standard based on an activity not conflicting with the enterprise’s mission. HUD’s mission regulation actions since the passage of the 1992 Act have focused on developing numeric goals governing enterprise purchase of mortgages serving very-low-, low-, and moderate-income households and other underserved borrowers; promulgating rules containing numeric goals; and enforcing the numeric standards. HUD officials told us that the activities of HUD’s Office of Government Sponsored Enterprises Oversight have continued to focus on the numeric goals and fair lending issues. HUD officials said that they had not focused attention on nonmortgage investment practices at the enterprises prior to the mid-April 1997 public disclosure of and publicity surrounding Freddie Mac’s nonmortgage investment in long-term Philip Morris bonds. At that time, HUD requested information from Freddie Mac on its nonmortgage investments and received a reply from Freddie Mac on April 28. In our August 1997 discussion with HUD, officials told us they have decided to use their general regulatory authority to request reports from the housing enterprises on their investment policies and practices. HUD’s plan is to monitor investment trends so that it can determine if the investments are consistent with the enterprises’ missions and purposes as defined in their charters. 
On November 13, 1997, HUD’s Director of Government Sponsored Enterprises Oversight made her first request for a report on nonmortgage investment activity from the housing enterprises. In August 1997, HUD told us it had reached the decision to begin a rulemaking effort by publishing an advance notice of proposed rulemaking soliciting comments on how HUD should carry out its general regulatory authorities with respect to nonmortgage investments by the housing enterprises. HUD received executive branch approval and published the advance notice on December 30, 1997. FCA has general regulatory authority over Farmer Mac: under the Farm Credit Act of 1971, FCA regulates institutions in the Farm Credit System, one of which is Farmer Mac. FCA officials told us that the agency implements this authority through OSMO. As required by statute, the Director of OSMO is selected by and reports to the FCA Board. Moreover, the statute charges FCA with ensuring that OSMO is adequately staffed to supervise Farmer Mac’s secondary market activities, although, to the extent practicable, the personnel responsible for supervising the corporation should not also be responsible for supervising the banks and associations of the Farm Credit System. This regulatory structure provides for a degree of separation between FCA’s general regulatory responsibilities and its safety and soundness responsibilities with respect to Farmer Mac. However, the structure does not appear to limit FCA’s general regulatory authority. During our review, we conducted three interviews with FCA and OSMO officials that included discussion of general regulatory authorities as they apply to nonagricultural-mortgage investments. Over the course of these interviews, we observed an evolution in their thinking on this topic. 
At the beginning of our review, the OSMO director told us that OSMO’s focus in examining nonagricultural-mortgage investments had been on matters pertaining to safety and soundness. Toward the end of our review, it appeared to us that FCA and OSMO officials began to focus some attention on the relationship between nonagricultural-mortgage investments and mission achievement. In October 1997, FCA indicated that, for now, it did not have concerns that Farmer Mac’s nonagricultural-mortgage investment activity was inconsistent with its charter mission. However, FCA also stated that the debt issuance strategy associated with the investments is intended to be temporary and to develop over a reasonable period of time. Therefore, according to FCA, its position could change if over time evidence does not show that such investments play a role in helping Farmer Mac achieve its mission. The enterprises may at times propose new mortgage programs that contain nonmortgage investment components. In addition to its general regulatory authority, HUD also has regulatory authority to approve new mortgage programs proposed by the housing enterprises. HUD used this authority to review Fannie Mae’s proposed mortgage protection plan (MPP), which it approved on June 23, 1997. On that date, OFHEO’s acting director provided the Secretary of HUD a letter with his determination that MPP would not create a “risk of significant deterioration of the financial condition” of Fannie Mae; this determination is required for the Secretary of HUD’s approval. Under the proposed program, Fannie Mae would purchase a cash value life insurance policy—essentially a nonmortgage investment—on a first-time homebuyer after the selected borrower’s residential mortgage was purchased by Fannie Mae and the borrower agreed to accept such coverage. The policy would protect Fannie Mae and the homebuyer against the risk that the mortgage would not be paid due to the borrower’s death. 
The policy also would offer limited protection against default and foreclosure due to disability and job loss. Due in part to potential tax benefits available under current tax law when HUD approved MPP, and in part to Fannie Mae’s relatively low cost of capital, Fannie Mae expected that MPP would be profitable. Since HUD’s approval, however, a new tax bill was signed into law that, according to Treasury, substantially reduced the tax benefits that were available to Fannie Mae under the MPP. Fannie Mae officials told us that Fannie Mae has decided not to go forward with the program. In commenting on a draft of this report, HUD stated that it did not possess detailed knowledge of the intricacies of the life insurance industry at the time MPP was submitted for review. We did not see evidence that HUD provided Fannie Mae’s MPP proposal to anyone with experience in evaluating cash value life insurance. HUD determined that although it would have been helpful, detailed industry expertise was not necessary to HUD’s review and understanding of MPP’s potential benefits to borrowers and its related costs. A Treasury attorney with expertise in life insurance provided basic information about life insurance products to HUD. However, according to HUD officials, HUD determined that providing information on MPP to Treasury was not necessary as it had obtained sufficient information and analysis to complete its work. In its written response to us, Treasury said: “Since HUD has the statutory responsibility to rule on Fannie Mae’s request to undertake the MPP, and since HUD did not ask for the Treasury’s assistance in making its determination regarding the MPP, the Treasury did not seek to obtain additional information from Fannie Mae.” HUD’s new mortgage program review authority states that the Secretary can disapprove a new mortgage program if he finds that the program is not in the public interest. 
HUD did not include tax revenue losses in its analysis for the public interest determination. In commenting on a draft of this report, HUD stated its belief that tax issues were within the scope of the MPP review but that in making its public interest determination, HUD would find it difficult to conclude that a practice that is permissible under current tax law was nevertheless against the public interest. Consequently, in its legal analysis, HUD took the position that as long as the MPP program is permissible under the current laws, MPP should not be regarded as against the public interest solely on the basis of a potential adverse impact on federal revenues or the concomitant favorable impact on Fannie Mae’s tax position. “The Treasury has long been concerned about the revenue loss from the favorable tax treatment of cash value life insurance with business policyholders or beneficiaries, and the MPP highlighted these concerns. However, this tax policy concern was not limited to the MPP. In August, Congress passed and the President signed a tax bill that dealt with some of the principal tax policy concerns associated with the MPP.” Nonmortgage investments constituted 10 to 15 percent of on-balance sheet assets at the housing enterprises at June 30, 1997, and most of these investments are short term (i.e., maturities of less than 5 years). Freddie Mac, however, created an investment fund in 1997 authorized to contain up to $10 billion in nonmortgage investments with maturities of over 5 years. Farmer Mac embarked on a debt issuance strategy in 1997 in which the debt largely finances nonagricultural-mortgage investments; such investments grew during the first half of 1997 to about 66 percent of Farmer Mac’s assets. The housing enterprises stated that they hold nonmortgage investments primarily for cash management purposes and to employ capital not currently needed to fund mortgages. 
Farmer Mac officials stated that Farmer Mac makes nonagricultural-mortgage investments primarily to invest funds from debt issuance that exceed purchases of agricultural mortgages. Nonmortgage investments constituted about 15 percent of on-balance sheet assets at Fannie Mae and 9 percent at Freddie Mac as of year-end 1996. Table 1 shows selected statistics on mortgage assets and stockholders’ equity (i.e., capital) to provide further perspective. For example, nonmortgage investments were about 2.6 percent of Freddie Mac’s and about 6.3 percent of Fannie Mae’s total mortgage servicing portfolio. Nonmortgage investments were more than double Freddie Mac’s capital and more than four times Fannie Mae’s capital. At Farmer Mac, nonagricultural-mortgage investments were about one-fourth of on-balance sheet assets and over three times capital. As shown in table 1, over 65 percent of Freddie Mac’s nonmortgage investments and over 40 percent of Fannie Mae’s were short-term investments in cash, cash equivalents, term federal funds, and eurodollar deposits. Freddie Mac’s and Fannie Mae’s 1996 annual reports also showed overall nonmortgage investments by contractual maturity. About 78 percent of Freddie Mac’s nonmortgage investments had maturities under 1 year, and about 93 percent had maturities under 5 years. The corresponding figures for Fannie Mae were 68 and 75 percent. According to housing enterprise officials, all of their nonmortgage investments were investment-grade. According to the data provided on sales of holdings, neither housing enterprise appears to have actively engaged in frequent selling of its nonmortgage investments. Between the end of 1996 and the end of the second quarter of 1997 (June 30, 1997), the two housing enterprises’ total assets grew (see table 2). Freddie Mac’s assets grew about 5.8 percent, and Fannie Mae’s assets grew about 4.3 percent. 
Both enterprises’ nonmortgage investments remained relatively stable at about 10 percent of assets for Freddie Mac and at about 15 percent of assets for Fannie Mae. Farmer Mac’s assets more than doubled from $603 million at year-end 1996 to about $1.4 billion at June 30, 1997; its nonagricultural-mortgage investments grew about sixfold—from $155 million to $931 million—and accounted for virtually the entire increase in total assets. At June 30, 1997, these investments totaled about 66 percent of its total on-balance sheet assets. The housing enterprises undertake nonmortgage long-term investments, and Farmer Mac undertakes nonagricultural-mortgage long-term investments. These longer-term investments (i.e., maturities of more than 5 years) include fixed-rate debt and variable-rate asset-backed securities (ABS). The three enterprises fund these investments by issuing debt and undertaking different strategies, which are incorporated in their investment policies, to limit interest rate risks. Generally, the housing enterprises (1) match fund their fixed-rate nonmortgage investments—they issue debt of the same maturity as the investment; and (2) fund their variable-rate ABS with either short-term debt with the maturity of that debt matching the reset provision (i.e., the time period between the dates when the interest rate adjusts) in the ABS or with variable-rate debt. Enterprise officials told us that to the extent interest rate risks still exist after they use the above-mentioned practices, they use hedging strategies to lessen or eliminate such risks. Freddie Mac officials told us that the primary purposes for holding nonmortgage investments with maturities of under 5 years are cash management and meeting future anticipated demands for funding residential mortgages. About 7 percent of Freddie Mac’s nonmortgage investments, as of December 31, 1996, had stated maturities exceeding 5 years. 
However, according to Freddie Mac officials, these investments with longer stated maturities included asset-backed securities that are expected to be paid off, and thereby terminate, prior to their stated maturity dates. In March 1997, Freddie Mac created a nonmortgage investment fund to hold securities with maturities exceeding 5 years, to be generally funded by matched maturity noncallable debt. Freddie Mac officials told us that the primary purpose of this new fund, which is authorized to contain up to $10 billion, is to meet future unanticipated demands for funding residential mortgages. Freddie Mac officials told us that the amount of its other nonmortgage investment funds, which generally have maturities under 5 years, would decline. In addition, they made the following five points about the longer maturity investments in the newly created fund: (1) Freddie Mac would not likely sell these longer maturity nonmortgage securities, because the fund is meant to provide a source for funding unanticipated mortgage demand; (2) if unanticipated demands for funding mortgages did occur, capital to help support mortgage purchases could be made available by selling the nonmortgage assets, which would be quicker than raising additional capital; (3) longer maturity nonmortgage investments do not exhibit the prepayment risks (i.e., the risk that borrowers would pay off their mortgages early, thus terminating payment streams) associated with mortgages; (4) match funding these investments (i.e., issuing debt with the same maturity as the investment) would allow Freddie Mac to access the noncallable bond market without generating interest rate risk; and (5) the longer-term nonmortgage investment portfolio would help stabilize income when necessary to counteract adverse earnings impacts from other forces. 
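The interest rate logic behind match funding can be sketched with a two-instrument example: a fixed-rate asset financed with debt of the same maturity locks in the spread for the full term, so later rate movements do not affect net interest income. All yields, amounts, and maturities below are hypothetical illustrations, not Freddie Mac figures:

```python
# Match-funding sketch (all figures hypothetical).
# A fixed-rate, 7-year investment funded with 7-year noncallable debt
# locks in the spread at issuance; both legs carry fixed coupons.
amount = 1_000_000          # hypothetical investment, dollars
investment_yield = 0.065    # hypothetical 7-year asset yield
debt_cost = 0.062           # hypothetical 7-year debt cost
term_years = 7

annual_net_income = amount * (investment_yield - debt_cost)

# A later shift in market rates changes neither leg's coupon, so the
# locked-in spread is unaffected.
for market_rate_shift in (0.0, 0.02, -0.02):
    print(f"rate shift {market_rate_shift:+.0%}: net income "
          f"${annual_net_income:,.0f} per year for {term_years} years")

# By contrast, funding the same asset with 1-year debt rolled over
# annually would reprice the debt leg each year, exposing the spread
# to interest rate risk.
```

This is why, as the text notes, match funding lets an enterprise hold longer maturity securities without generating interest rate risk, whereas a maturity mismatch would leave the refinancing leg exposed.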
Fannie Mae officials told us that the primary purposes for holding nonmortgage investments are (1) cash management; (2) employing capital not currently needed to fund mortgages, which they consider an investment function intrinsically appropriate for a financial corporation of Fannie Mae’s size; and (3) maintaining a capital cushion in excess of minimum capital requirements. They told us that such a capital cushion enables them to respond to capital markets and fund residential mortgages. Fannie Mae officials told us that nonmortgage investments with maturities exceeding 5 years are a relatively small portion of its total business. They told us that most of these securities are asset-backed securities with variable interest rates and that the variable rate characteristic reduces the interest rate risk associated with fixed-rate long-term bonds and, thus, is important to its overall safety and soundness. In February 1997, Farmer Mac’s board changed its investment policies in order to increase Farmer Mac’s presence in the capital markets, particularly the debt markets, to help attract investors to its securities and thereby reduce its borrowing and securitization costs. The board and management believe that increasing Farmer Mac’s presence in the debt markets will improve the pricing of its agricultural mortgage-backed securities and thereby enhance the attractiveness of the products it offers through its programs for the benefit of agricultural lenders and borrowers. Farmer Mac officials said that although the ultimate objective of Farmer Mac’s increased debt issuance strategy is to invest the proceeds in loans qualifying for inclusion in its securitization and guarantee programs, during the initial period in which Farmer Mac is increasing its debt issuances it will be investing those proceeds in interest-earning nonagricultural-mortgage investment assets. 
In commenting on a draft of this report, Farmer Mac proposed that 2 to 3 years could serve as a reasonable time frame within which the anticipated increase in market interest in its AMBS would occur. FCA and OSMO officials said that Farmer Mac’s rationale for its debt issuance strategy for enhancing the secondary market in AMBS is plausible at this time. However, FCA and OSMO officials noted that the extensive nonagricultural-mortgage asset holdings are supposed to be temporary until Farmer Mac’s debt and AMBS costs decline to levels comparable to those for the housing enterprises. Should Farmer Mac’s strategy prove unsuccessful, FCA and OSMO may revisit the appropriateness of the existing Farmer Mac nonagricultural-mortgage investment portfolio policy and practices. In the interim, FCA, through OSMO, is monitoring the Farmer Mac strategy. FCA and OSMO officials said they have set no time frame for assessing the success of the debt issuance strategy. Enterprise officers and board members have incentives to increase shareholder value, just as the officers and board members of private corporations do. However, unlike private corporations, the enterprises also have public missions stated in their charters. Thus, these enterprise incentives can create tensions between increasing shareholder value and fulfilling the public mission. In addition, the enterprises have opportunities to generate arbitrage profits that can increase shareholder value and that are not available to private corporations. Financial analysts generally define arbitrage as profiting from differences in price when the same security is traded on two or more markets. However, arbitrage can also arise if securities have different yields by virtue of differences in government-provided benefits between those securities. We are using this latter definition of arbitrage in considering enterprise nonmortgage investments. 
Under this definition, at least some enterprise nonmortgage investments generate arbitrage profits. In addition to generating arbitrage profits, nonmortgage investments can contribute to achieving the enterprises’ missions, although shorter maturity nonmortgage investments more clearly relate to mission than do longer maturity nonmortgage investments. Because the enterprises can generate arbitrage profits and because of the tension between shareholder interests and mission achievement, it is important for the mission regulators, HUD and FCA, to ensure that the missions of these enterprises as stated in their respective charter acts are accomplished. According to enterprise officials, the competitiveness of today’s marketplace literally demands that the enterprises recruit and maintain the caliber of executive officers and board members that will help ensure that their corporations remain among the top-performing organizations. Such action includes the construction of compensation packages that will attract top performers and that contain incentives that will promote the achievement of corporate objectives in addition to satisfying shareholder interests. To ensure that they are in line with current trends, the enterprises have used consulting firms to review and compare the pay structure of their officers and board members with the pay structure of comparable positions in similar private sector financial institutions and other enterprises. Our review of published literature and other information on executive and board compensation that the enterprises and OFHEO provided us suggests that in today’s world, more companies are including stock-based compensation for their directors and officers to help create an economic alignment of director and shareholder interests. Like their competitors, the enterprises award stock-based compensation to their board members and senior officers with the intention of helping them to focus on the long-term success of their corporations. 
In establishing statutory authority, Congress set the tone for the governance structure of all three enterprises—Freddie Mac, Fannie Mae, and Farmer Mac. Each of these shareholder-owned corporations, which also have a public mission, is governed by a board consisting of shareholder-elected directors and appointed directors. Statutory authority provides that the total number of directors elected by shareholders be 13 at Freddie Mac, 13 at Fannie Mae, and 10 at Farmer Mac; each of the enterprises must have 5 directors appointed by the president. According to enterprise officials, the directors have the same or similar duties and obligations as directors of other private corporations, including fiduciary responsibilities to shareholders and the establishment of general operation policies that govern the companies. All directors, whether elected or appointed, share the same duties and obligations, which are primarily carried out through participation in and preparation for board and committee meetings. All directors of the housing enterprises serve 1-year terms unless reelected or reappointed. Appointed directors of Farmer Mac serve at the pleasure of the president; the elected directors serve 1-year terms. In keeping with statutory requirements, the housing enterprises’ compensation structure is built upon a philosophy of comparability (i.e., compensation is reasonable and comparable to that of similar businesses) and pay for performance, which includes the achievement of individual as well as corporate-level objectives. All three enterprises have committees that set policy and make recommendations concerning compensation. Annual evaluations allow for salary adjustments based on merit performance and the need to maintain market competitiveness. Board members of all three enterprises receive cash compensation in the form of an annual retainer and stipulated fees for attending board and committee meetings. 
In addition to the cash, board members receive long-term compensation in the form of stock and stock options (see table 3). Similarly, in addition to their base salaries, senior managers of the enterprises receive bonuses (which are to recognize their individual contributions to the success of corporate goals), as well as stock and stock options designed to ensure sustained corporate success. (See app. II for more detailed information on the enterprises’ compensation structures.) Private corporations without government sponsorship provide incentives to their senior management and board members to take actions that will increase profits and shareholder value. The enterprises have instituted compensation packages that conform closely to those of private corporations, including financial institutions, with which they compete for individuals with specific skills. These compensation packages include stock-based compensation strategies that have the intent of aligning the economic interests of managers and directors with shareholder interests. The compensation packages that board members at the enterprises receive do not differ according to whether the board member is shareholder elected, presidentially appointed, or chosen by another method. The enterprises told us that the orientation and training activities they provide new board members do not differ according to how the board member is selected. The enterprises also told us that board members are instructed to advocate corporate activities that enhance shareholder value while supporting the enterprise’s charter purposes. From our analysis, it appears to us that compensation incentives available to enterprise senior management and board members, including stock-based compensation, reinforce the tension between increasing shareholder value and achieving mission. 
At a minimum, stock-based compensation can affect how broadly board members and senior managers interpret whether the corporate activities they advocate contribute to fulfillment of mission. Freddie Mac officials disagreed with our view that a tension exists between increasing shareholder value and achieving mission. They told us that the two goals were compatible and codependent. They stated that Congress wanted a private company to fill a public purpose. With this role, they noted that if one were to ignore the entity of the shareholder, the public mission cannot be fulfilled. We note that short of ignoring the interests of the shareholder, a tension exists. It is this tension that highlights the importance of mission oversight. Without effective mission oversight, the incentives to use the benefits of government sponsorship to increase shareholder value could, over time, erode the public mission. If this were to occur, long-term nonmortgage investments could become an increasing part of the housing enterprises’ portfolios and Farmer Mac’s temporary approach could become a permanent strategy even if it does not enhance Farmer Mac’s ability to purchase agricultural mortgages. In a previous report about the housing enterprises, we concluded that the greatest benefit to the enterprises from government sponsorship flows from the market perception of an implied guarantee on enterprise obligations, because this perception generates a funding advantage—a reduction in yields on enterprise debt. In that report, we indicated that the funding advantage could be in the range of 30 to 106 basis points. This range took into account the long-term nature of residential mortgage investments, and it assumed that the housing enterprises would receive a credit rating between a high of AAA and a low of A if their government sponsorship were eliminated. 
Findings from our analysis of housing enterprise financial data are consistent with this estimated funding advantage range and with a credit rating between AA and A. Appendix I contains a more detailed discussion of our analysis. In the previous report about the housing enterprises, we indicated that government sponsorship of the housing enterprises lowered interest rates on single-family, fixed-rate, conforming mortgages. Although the benefits of government sponsorship reduce certain mortgage interest rates, there is no similar effect on the yields of nonmortgage investments, because the enterprises are not a significant source of funding outside the residential mortgage market. Thus, there is an additional incentive for the enterprises to issue debt, whose yield is lower by virtue of government sponsorship, to invest in nonmortgage investments, whose market yields will be relatively higher because they are not affected by government sponsorship. Farmer Mac is a government-sponsored enterprise that also benefits from the market perception of an implied guarantee on enterprise obligations. It is, however, a much smaller and less established corporation than either of the housing enterprises. As a result, it is difficult to estimate Farmer Mac’s funding advantage. For example, we do not know whether it could remain in business without government sponsorship or what its credit rating would be if it became a going concern as a private corporation without government sponsorship. If its credit rating without government sponsorship would be less than A, its funding advantage from government sponsorship could be greater than the advantage for the housing enterprises. However, Farmer Mac’s securities may be currently perceived by market participants as more risky than housing enterprise securities. 
Farmer Mac documents provided to us in July 1997 indicated that yields on Farmer Mac debt had been between 1 and 10 basis points higher than yields on equivalent housing enterprise debt prior to Farmer Mac’s new debt issuance strategy, and these yield differences had not yet been eliminated by Farmer Mac’s debt issuance strategy. Of the specific nonmortgage investments made by the enterprises, public information is available on one investment that generated arbitrage profits; this investment was in Philip Morris bonds purchased by Freddie Mac. The Philip Morris bonds, which had an A rating, were purchased by Freddie Mac and were funded by Freddie Mac bonds with the same maturity. The yield difference was slightly over 60 basis points. Freddie Mac officials told us that its nonmortgage investment fund holding securities with maturities exceeding 5 years is authorized to contain up to $10 billion. Applying, as an example, an interest rate differential of 60 basis points, a $10 billion fund could generate as much as $60 million annually in arbitrage profits. If a similar 60-basis-point differential were applied to Farmer Mac nonagricultural-mortgage investments with maturities exceeding 5 years, arbitrage profits would represent about $3.2 million. We did not make an overall estimate of arbitrage profits, in part because of difficulties in estimating the funding advantage. For the housing enterprises, we have good estimates for the funding advantage on longer term investments in fixed-rate debt that are match funded. These enterprises hold nonmortgage investments in variable-rate ABS with stated maturities of over 5 years. The enterprises told us that many of these securities have expected maturities of less than 5 years due to borrower prepayments, and we do not have good estimates for the funding advantage on these investments. We also do not have good estimates for the funding advantage on short-term investments. 
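The arbitrage arithmetic above is simple enough to sketch. The 60-basis-point spread and the $10 billion fund size are the figures cited in the text; the Farmer Mac portfolio size is backed out from the $3.2 million figure and is therefore an approximation, not a number from the report.

```python
def annual_arbitrage_profit(portfolio_dollars, spread_bps):
    """Annual arbitrage profit: portfolio size times the yield spread
    between the investments and the enterprise's (government-sponsorship-
    lowered) debt, with the spread expressed in basis points."""
    return portfolio_dollars * spread_bps / 10_000  # 1 basis point = 0.01 percent

# Freddie Mac's authorized $10 billion fund at the 60-basis-point
# differential observed on the Philip Morris purchase:
print(annual_arbitrage_profit(10e9, 60))  # 60000000.0, i.e., $60 million a year

# The $3.2 million Farmer Mac figure implies roughly $533 million of
# nonagricultural-mortgage investments with maturities over 5 years
# (an approximate back-calculation):
print(annual_arbitrage_profit(533e6, 60))  # 3198000.0, about $3.2 million
```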
From our review of variable-rate ABS and short-term investments made by the housing enterprises, however, it appears that the funding advantage associated with government sponsorship is lower for these investments than for longer term, fixed-rate nonmortgage investments. The public purposes of the housing enterprises, as specified in their respective federal charters, include providing stability in the secondary market for residential mortgages and responding appropriately to the private capital market. Enterprise purchases of residential mortgages directly contribute to mission achievement. As a general matter, the housing enterprises said they also take actions they think position them to serve their respective markets under different financial market conditions as well as different conditions affecting the residential mortgage market. The housing enterprises state that their nonmortgage investment holdings allow them to respond appropriately to capital markets and fund residential mortgages during different market conditions. They also emphasize that the yields on their nonmortgage investments are lower than the yields on their mortgage investments. Our analysis of the housing enterprises’ nonmortgage investments indicated that overall, the yields on such investments are lower than on their mortgage investments. For example, in 1996 Freddie Mac’s average interest rate on cash and nonmortgage investments was 5.55 percent, and on mortgages it was 7.46 percent. The respective interest rates for Fannie Mae in 1996 were 5.68 percent on nonmortgage investments and cash equivalents and 7.71 percent on mortgages. The preponderance of short-term investments among the enterprises’ nonmortgage holdings accounts for the lower overall yield on these investments compared with mortgage investments. 
Our analysis of these short-term nonmortgage investments, such as term federal funds, indicates that they have a clear relationship to mission in enhancing liquidity, thereby allowing the enterprises to fund residential mortgages during different market conditions. In addition, even though they might also generate arbitrage profits, they are not the primary vehicle through which the housing enterprises would attempt to generate arbitrage profits. Likewise, because short-term nonmortgage investments yield less than both long-term nonmortgage investments and mortgage investments, their volume is not likely to be substantially affected by the tension between increasing shareholder value and achieving mission. Freddie Mac officials indicated that nonmortgage investments are an integral tool for carrying out its housing finance mission and are held for three principal reasons: (1) cash management purposes; (2) as an investment vehicle that could make capital available (i.e., to employ capital) to help fund future anticipated demand to fund residential mortgages; and (3) as an investment vehicle to employ capital for future unexpected demand to fund residential mortgages. Freddie Mac created a fund in March 1997, which it calls its core fund, to invest in securities with maturities exceeding 5 years to be funded by matched maturity noncallable debt. The main stated purpose for the core fund is to have capital employed in case it becomes necessary to fund unexpected mortgage demand. Although Freddie Mac does not expect to liquidate core fund investments, Freddie Mac officials told us that liquidation could occur to fund purchases of residential mortgages if a decline in interest rates triggered a substantial increase in mortgage prepayments or if a major mortgage dealer or investor failed. 
The officials also said that raising capital to fund unexpected mortgage demand could take up to 4 months, and therefore it was important to have capital employed in investments that could quickly be liquidated in case such funds became necessary. Our analysis focused on alternative mechanisms available to Freddie Mac for funding unexpected mortgage demand. We asked Freddie Mac officials if they were able to supply the necessary liquidity in 1993, when declining mortgage interest rates caused the highest level of mortgage prepayments in history, by using financing techniques that did not rely on liquidation of long-term investments. The officials told us that the enterprises were able to serve the market by funding purchases of residential mortgages, but this particular experience was not a guarantee for the future. It is worth noting that mortgage prepayments reduce the level of the enterprises’ outstanding MBS held by investors; therefore, funds to finance newly refinanced mortgages are made available from investors who purchase housing enterprise MBS. Thus, in this situation MBS issuance could provide necessary liquidity without reliance on liquidation of core fund investments. We agree that the potential failure of a major mortgage dealer or investor could bring about a need for additional liquidity in the mortgage market. However, Freddie Mac has a number of vehicles to provide liquidity, such as use of proceeds from maturing short-term nonmortgage investments to purchase residential mortgages, which in turn can be funded by issuance of MBS. Freddie Mac could also issue MBS backed by on-balance sheet holdings of residential mortgages, thereby reducing required capital to support its mortgage servicing portfolio. Such an action would make a capital cushion available to support funding of the unexpected mortgage demand, because the enterprises do not have to hold as much capital to finance off-balance sheet compared to on-balance sheet mortgage assets. 
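The capital mechanics of securitization described above can be illustrated with a small sketch. The 2.50 percent (on-balance sheet) and 0.45 percent (off-balance sheet) ratios used here are the statutory minimum capital requirements for the housing enterprises under the 1992 Act; the $10 billion amount is purely illustrative.

```python
# Minimum capital ratios from the 1992 Act (assumed here for illustration):
ON_BALANCE_SHEET_RATIO = 0.0250   # capital per dollar of retained (on-balance sheet) mortgages
OFF_BALANCE_SHEET_RATIO = 0.0045  # capital per dollar of outstanding MBS guaranteed

def capital_freed_by_securitizing(mortgage_dollars):
    """Minimum capital released when on-balance sheet mortgages are
    securitized into MBS, moving them off-balance sheet."""
    return mortgage_dollars * (ON_BALANCE_SHEET_RATIO - OFF_BALANCE_SHEET_RATIO)

# Securitizing an illustrative $10 billion of retained mortgages frees
# roughly $205 million of minimum capital that could then support the
# funding of unexpected mortgage demand:
print(round(capital_freed_by_securitizing(10e9)))  # 205000000
```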
Fannie Mae officials indicated that nonmortgage investments are held for three principal reasons: (1) cash management purposes, (2) as an investment vehicle to employ capital that is intrinsically appropriate for a financial corporation of its size, and (3) to maintain a capital cushion in excess of minimum capital requirements. Fannie Mae’s nonmortgage investments with maturities exceeding 5 years are mostly asset-backed securities (ABS) with variable interest rates. The market value of the longer term ABS does not fluctuate as much as the market value of long-term fixed-rate securities, because most of the ABS have variable interest rates. Therefore, at times Fannie Mae has sold ABS to finance mortgage purchases. This activity is consistent with how Fannie Mae employs its short-term nonmortgage investments. In addition, according to our review of variable-rate ABS investments by all three enterprises, it appears that the funding advantage associated with government sponsorship is lower for these instruments compared to long-term, fixed-rate nonmortgage investments. Nonetheless, some arbitrage profits are generated from these investments. Therefore, the ABS investments appear to have characteristics that differ somewhat from other nonmortgage investments in two dimensions. First, they appear to be somewhat related to mission, because they are more liquid than fixed-rate long-term investments but less liquid than short-term nonmortgage investments. However, fluctuations in the market value of ABS, in relation to short-term nonmortgage investments, can reduce their effectiveness in providing liquidity. Second, they appear to generate arbitrage profits, although at a lower level than other fixed-rate long-term nonmortgage investments. 
In addition to the contribution to mission goals and the generation of arbitrage profits already presented, there is an additional mission-related rationale for holding nonmortgage investments: the investments merely provide a potential source of resources that can be used to fund targeted housing mortgage programs. Such a rationale appears to be consistent with one offered by HUD in its analysis of the housing enterprises’ retained mortgage portfolios. HUD’s report on privatization concluded: “Full privatization would reduce the GSEs’ portfolio operations. This would not have a major impact on the mortgage market because the MBS market is now well-developed and is an effective mechanism for allocating interest rate risk.” HUD also concluded, however: “Most GSE earnings come from their portfolio operations. Without the cushion of a highly profitable portfolio, the fully privatized GSEs would reduce their funding of the more risky affordable loans, unless these loans started carrying much higher interest rates.” Farmer Mac’s first year with positive net income was 1996. Net income has grown during the first two quarters of 1997 as Farmer Mac initiated its debt issuance strategy. Currently, over half of its on-balance sheet asset holdings are in investments other than agricultural mortgages. Government sponsorship of Farmer Mac lowers its debt costs, generating arbitrage profits from such investments. In its semiannual report to the House and Senate Agriculture Committees transmitted in April 1997, FCA notes that Farmer Mac can operate at a profit even if its core business does not expand, as long as it can borrow funds at lower rates than it can earn on investments. Farmer Mac’s strategy appears to be unique, not at all similar to the strategies followed by the housing enterprises over the course of their development, which makes it more difficult to determine whether the debt issuance policy will help Farmer Mac achieve its mission. 
It appeared to us that Farmer Mac’s debt issuance strategy would logically operate by allowing Farmer Mac to profitably price agricultural mortgage purchases so that originators would expect higher returns by selling rather than retaining mortgages in their own portfolios. For example, if the debt issuance strategy lowered funding costs for Farmer Mac on its AMBS, Farmer Mac might be able to pay mortgage originators higher prices for agricultural mortgages and remain profitable. Farmer Mac officials also told us that their investments in agricultural mortgages have higher returns than those for its nonagricultural-mortgage investments. Based on this observation by the Farmer Mac officials, it appeared to us that Farmer Mac may be able to pay mortgage originators higher prices than it currently does for agricultural mortgages and remain profitable in this mission-related segment of its business. We asked the Farmer Mac officials why Farmer Mac does not, therefore, price its agricultural mortgage purchases more favorably for mortgage originators to help this mission-related business expand. Farmer Mac officials stressed other strategies it is pursuing, such as outreach efforts with agricultural mortgage originators. We are uncertain whether Farmer Mac’s debt issuance strategy will contribute to mission achievement. The strategy is intended to lower Farmer Mac’s funding costs for purchasing agricultural mortgages and issuing AMBS; if its AMBS costs declined, Farmer Mac might become better able to spend funds to recruit mortgage originators and to pay them higher prices for agricultural mortgages while remaining profitable in its mission-related business. However, Farmer Mac already appears to have the ability to spend more funds for such purposes than it does currently. Our analysis indicates that in establishing GSEs, Congress has followed the rationale of focusing GSE activity on specific sectors of the economy. 
Freddie Mac, Fannie Mae, and Farmer Mac have federal charters that specify the purposes of each enterprise and provide the enterprises with broad authorities as private corporations to manage their day-to-day business operations, including their investment policies. The enterprises’ charters also direct them to fulfill specific public missions. The enterprises have mission regulators with general regulatory authorities that are charged with ensuring that the missions of these enterprises are being fulfilled. We agree with a recent HUD evaluation that it could use its general regulatory authority to potentially limit nonmortgage investments. HUD has begun a rulemaking effort intended to develop regulations governing nonmortgage investments by the housing enterprises to help ensure that such investments are related in some fashion to mission achievement. We agree that this effort can help HUD develop criteria to determine the extent to which various nonmortgage investments are mission related. Although FCA could use its general regulatory authority over nonagricultural-mortgage investments by Farmer Mac to help ensure that such investments are related in some fashion to Farmer Mac’s mission achievement, it has not established a procedure for doing so. To date, neither HUD nor FCA has developed specific criteria to determine whether enterprise nonmortgage investments are consistent with mission achievement. The enterprises have investment policies that specify permissible credit ratings, maturities, and concentration limits and describe the relationship of investments to earnings and to achievement of mission. The enterprises have incentives as private corporations to increase shareholder value; these incentives create a tension with achievement of the missions stated in the federal charters of the enterprises. It is this tension that highlights the importance of mission oversight. 
Without effective mission oversight, the incentives to use the benefits of government sponsorship to increase shareholder value could, over time, erode the public mission. If this were to occur, long-term nonmortgage investments could become an increasing part of the housing enterprises’ portfolios and Farmer Mac’s temporary approach could become a permanent strategy even if it does not enhance Farmer Mac’s ability to purchase agricultural mortgages. Government sponsorship of the enterprises lowers their debt costs, and they can therefore generate arbitrage profits (i.e., profits resulting from their funding advantage) by investing in nonmortgage assets. The various nonmortgage investments appear to fall along a continuum representing the degree to which they relate to the housing enterprises’ missions. On one end are short-term nonmortgage investments, such as term federal funds, which facilitate liquidity although they might also generate arbitrage profits. On the other end are longer term investments that generate arbitrage profits, but they are less clearly related to the enterprises’ missions in facilitating liquidity in the secondary market, because fluctuations in their market value reduce their usefulness in responding to changes in capital and mortgage products. At this time, it is not clear whether Farmer Mac’s debt issuance strategy will eventually help it expand purchases of agricultural mortgages in fulfillment of its mission. Given the uncertainty of when, or if, the Farmer Mac strategy will be successful, FCA has the responsibility to monitor Farmer Mac’s strategy to help ensure that the nonagricultural-mortgage investments, which are a primary source of its earnings, are related in some fashion to Farmer Mac’s mission achievement. Farmer Mac’s strategy appears to be unique, not at all similar to the strategies followed by the housing enterprises over the course of their development. 
In presenting this strategy, Farmer Mac officials told us that the strategy’s contribution to mission achievement should develop over a reasonable period of time. To provide more focused oversight of the housing enterprises’ nonmortgage investments, we recommend that the Secretary of HUD promptly implement HUD’s stated intention to develop criteria through appropriate rulemaking processes to help ensure that the housing enterprises’ nonmortgage investments are consistent with the purposes expressed in their charter acts. We also recommend that the Chairman of the FCA Board direct OSMO to develop the requisite criteria and report periodically, such as through its semiannual reports to the House and Senate Agriculture Committees, on the relationship of Farmer Mac’s debt issuance strategy to the achievement of Farmer Mac’s mission. To help ensure that the enterprises’ nonmortgage investments appropriately support their public missions, the appropriate congressional committees may wish to monitor HUD and FCA actions to establish criteria and procedures for carrying out their general regulatory authorities. Such oversight is important to help ensure that corporate incentives to increase shareholder value do not erode the enterprises’ public mission. If adequate progress is not made in a timely way, Congress may wish to consider providing further guidance to the regulatory agencies. We received comments on a draft of this report from each of the three enterprises, HUD, OFHEO, FCA, and Treasury (see apps. III through IX). Appendixes III, IV, V, VI, and VIII also contain additional responses to specific comments by Freddie Mac, Fannie Mae, Farmer Mac, HUD, and FCA. Farmer Mac, OFHEO, FCA, and Treasury provided technical comments that were incorporated in the report where appropriate. 
The three enterprises agreed with our finding that the enterprises have broad investment authority and noted our acknowledgement that the safety and soundness regulators have determined that the enterprises’ nonmortgage investment portfolios do not raise safety and soundness concerns. However, the enterprises raised a number of concerns and disagreed with some of our major findings pertaining to the relationship of nonmortgage investments to mission achievement, arbitrage profits, and the tension between increasing shareholder value and achieving mission. Based on Freddie Mac’s disagreement with our findings, it did not concur with our recommendation to HUD. Although Farmer Mac disagreed with some of these findings, it agreed with our recommendation to FCA. HUD, OFHEO, FCA, and Treasury also provided comments, some of which focused on the three major issues raised by the enterprises. Concerning the relationship of nonmortgage investments to mission achievement, Freddie Mac said (see app. III) that our draft report made the erroneous assertion that long-term nonmortgage investments are fairly illiquid, and this assertion provided the basis for our questioning the role of nonmortgage investments in mission achievement. As shown in appendix IV, Fannie Mae raised the concern that we had a “brief and somewhat unclear presentation of how Fannie Mae views the role of nonmortgage investments in capital management.” In response to Freddie Mac’s comments, we clarified our discussion of the role of the various nonmortgage investments in facilitating liquidity in the secondary market for residential mortgages. Many of the housing enterprises’ intermediate and longer term nonmortgage investments have broad and deep markets that make them readily marketable or liquid in the sense that they can be sold without substantial loss in market value. 
However, longer term investments are less liquid than shorter term investments in the sense that their market values are subject to larger fluctuations with changes in interest rates. These fluctuations can reduce their usefulness in responding to changes in capital and mortgage markets and facilitating liquidity in the residential mortgage market at a particular point in time, because their market values can be less than their original values when liquidation may be warranted. Therefore, we did not change our conclusion that the relationship between longer term nonmortgage investments and mission achievement is less clear than that for short-term nonmortgage investments. In response to Fannie Mae’s comments, we supplemented our discussion of Fannie Mae’s primary purposes for holding nonmortgage investments to include maintaining a capital cushion in excess of minimum capital requirements. We note, however, that this purpose overlaps with the other purposes we cite in our report—cash management and providing a capital cushion to respond to capital markets and fund residential mortgages (thus facilitating liquidity). Beyond these purposes, Fannie Mae appears to emphasize the role of nonmortgage investments in Fannie Mae’s earnings and capital management as well as attention to safety and soundness concerns. This emphasis is consistent with Fannie Mae’s purposes for its investment portfolio as stated in its annual report, which are to contribute to corporate profitability, serve as a source of liquidity, and provide a return on the excess capital of the corporation. This argument by Fannie Mae, however, does not demonstrate a relationship between nonmortgage investments and mission achievement beyond the relationship already established in our report. Concerning the relationship of nonmortgage investments to mission achievement and regulatory oversight, HUD said (see app. 
VI) that our report fairly characterizes the issues, constraints, and ambiguities involved in overseeing the housing enterprises’ nonmortgage investment activities. HUD agreed with, and said it has begun to implement, our recommendation to the Secretary of HUD. OFHEO (see app. VII) stated that it is appropriate for Congress to monitor nonmortgage investments at the enterprises and that Congress may wish to provide more specific guidance to the regulatory agencies regarding the appropriate range of investment activities. Farmer Mac (see app. V) agreed with our finding that FCA has the responsibility to monitor Farmer Mac’s strategy to help ensure that the nonagricultural-mortgage investments are related in some fashion to Farmer Mac’s mission achievement and our recommendation to the Chairman of the FCA Board containing an FCA reporting requirement. However, Farmer Mac took issue with our finding that its debt issuance strategy and related investment activities may not be mission related. In particular, Farmer Mac stated that we implied that its debt issuance strategy will not work because we stated that the strategy is unique. Farmer Mac also stated that this strategy, by lowering funding costs, can make funds available to recruit new mortgage originators. In addition, Farmer Mac provided a detailed description of its debt issuance policy, how it is expected to lower funding costs, and how the corporation sees the policy as linked to achievement of its mission. In response to Farmer Mac’s comments, we made revisions to clarify Farmer Mac’s position on its debt issuance strategy to include the potential for making more funds available to recruit new mortgage originators. We also added clarifying language to indicate that our characterization of Farmer Mac’s debt issuance strategy as unique was included as one of several reasons why it is hard to determine whether the debt issuance policy will be effective in helping Farmer Mac achieve its mission. FCA said (see app. 
VIII) that our draft report contains a fair representation of Farmer Mac’s investment activity and FCA’s views with respect to that activity. FCA appears to show some support for our recommendation to the Chairman of the FCA Board to report on the relationship of Farmer Mac’s debt issuance strategy to the achievement of Farmer Mac’s mission. However, it is not clear whether this willingness to report is limited to reporting on safety and soundness matters or includes issues of mission regulation. In addition, FCA said that it does not currently have any activities under way that are expected to culminate in regulation of the investment portfolio. In response to FCA’s comments, we revised our recommendation to the Chairman of the FCA Board to state that requisite criteria should be developed to assess and report on the relationship of Farmer Mac’s debt issuance strategy to the achievement of Farmer Mac’s mission. Treasury said (see app. IX) that we identified some of the important policy issues raised by the investment practices of the enterprises. Treasury stated that it agrees with our recommendation that the enterprises’ mission regulators should use their general regulatory authority to limit the enterprises’ nonmortgage investment activity. We note, however, that our recommendation calls for the mission regulators to develop criteria to help ensure that nonmortgage investments are consistent with the purposes expressed in the enterprises’ charter acts. The second issue relates to our position that the purchase of nonmortgage investments generates arbitrage profits. In commenting on our draft report, Freddie Mac took issue with our definition of arbitrage and asserted that we created a new definition of arbitrage to be responsive to the requester’s instructions that we report on the extent to which the enterprises have undertaken nonmortgage investments for arbitrage profits. 
Freddie Mac also asserted that under our definition, any profitable investment Freddie Mac makes would be considered arbitrage, and therefore we have a circular argument. Freddie Mac cited the general definition of arbitrage used by financial analysts, which is also stated in our report. This general definition, however, does not consider differences in government-provided benefits among debt issuers. Therefore, we adopted a definition of arbitrage that is similar to the definition of an arbitrage bond in a section of the U.S. tax code. The definition is in reference to state and local governments whose funding costs are lowered by virtue of the federal income tax exemption for interest on state and local bonds; this definition explicitly accounts for differences in government-provided benefits. Freddie Mac’s assertion that anything profitable would be arbitrage according to our definition is not correct. We note that in an integrated national financial market there would be little if any opportunity for profit from borrowing and lending for the same time period with no risk if no funding advantage were present. Under our definition of arbitrage, arbitrage profit is the amount of profit on nonmortgage investments associated with the funding advantage from government sponsorship and not the profit resulting from either risk-taking or good business judgment. Treasury agreed with our conclusion that the enterprises’ long-term nonmortgage investments generate arbitrage profits and that some of the enterprises’ short-term investments may also generate arbitrage profits. All three enterprises took issue with our conclusion that a tension exists between increasing shareholder value and achieving mission. Freddie Mac said our draft report suggests that there is an inherent conflict between private ownership and Freddie Mac’s public mission, which is at odds with legislative intent and Freddie Mac’s demonstrated record of achievement. 
Fannie Mae said that although we provided a good review of the policies underpinning its executive compensation policy, our “tension” construct implies a conflict that is theoretical at best. Fannie Mae offered a construct where senior management is required to achieve multiple objectives. Fannie Mae adds: “What is unique to the enterprises is that our mission is elevated by the Charter and enforced through oversight, regulation and potential legal sanction. The seriousness of our mission obligations are very clearly understood by managers, Directors and shareholders alike.” Farmer Mac believes that there is a convergence of, rather than a tension between, the interests of Farmer Mac’s shareholders and mission achievement through expanded volume, because of Farmer Mac’s early stage of development. In our report, our finding of a tension between increasing shareholder value and mission fulfillment points to the role of mission regulation in helping to ensure that the purposes of the charter acts are achieved. We recognize that Congress granted the housing enterprises federal charters to direct them to bring private sector operating efficiencies to fulfill a public purpose in the secondary mortgage market. Congress also granted regulatory authorities to OFHEO to help ensure that the housing enterprises operate in a safe and sound manner and to HUD to ensure that the purposes of the respective charter acts are accomplished. Therefore, it appears that Congress intended regulatory oversight to address situations in which the private and public interests may not be aligned. Government sponsorship of the housing enterprises has created a mechanism for the government-provided benefits to be passed through, at least in part, in the form of lower mortgage interest rates. Because this mechanism is not present for nonmortgage investments, at least some of these investments could be more profitable on a risk-adjusted basis than mortgage investments. 
To the extent that profits from some nonmortgage investments are less clearly related to mission, reasonable questions can be raised about whether government benefits are supporting shareholder interests at the expense of the public mission. Fannie Mae’s view explicitly acknowledges charter restrictions, regulatory oversight, and multiple corporate goals. Therefore, it is difficult to distinguish its view from ours, except that Fannie Mae appears to believe that the mission has been integrated into its corporate culture. Rather than relying solely on corporate culture, however, Congress established HUD as a regulator to ensure that mission objectives are achieved. Farmer Mac’s debt issuance strategy has expanded the volume of nonagricultural-mortgage investments. Although it is clear that these investments are profitable and affect executive compensation, it is not yet clear whether they contribute to mission achievement. As arranged with your office, unless you publicly announce the contents of this letter report earlier, we plan no further distribution until 14 days after its issue date. At that time, we will send copies to HUD; OFHEO; FCA; Treasury; the enterprises; the Ranking Minority Member of your Committee; the Chairman and Ranking Minority Member of the Subcommittee on Capital Markets, Securities and Government Sponsored Enterprises; and the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs. We will also make copies available to others on request. Major contributors to this report are listed in appendix X. Please contact me or Bill Shear, Assistant Director, at (202) 512-8678 if you or your staff have any questions. We are defining the term arbitrage to mean using the funding advantage from government sponsorship to raise funds for making nonmortgage investments. 
In a previous report about the housing enterprises, we concluded that the largest enterprise benefit from government sponsorship flows from the market perception of an implied guarantee on enterprise obligations, because this perception generates a funding advantage—a reduction in yields on enterprise debt. In that report, we indicated that the funding advantage could be in the range of 30 to 106 basis points. This range accounted for the long-term nature of residential mortgage investments, and it assumed that the housing enterprises would receive a credit rating between a high of AAA and a low of A if their government sponsorship were eliminated. Our definition of arbitrage does not require the enterprise to match fund its nonmortgage investments. However, our measurement of the yield differences used to estimate arbitrage profits is based on a comparison of debt securities with similar maturity and risk. Under this method, if the debt securities being compared had different maturities, the yield difference (i.e., yield spread) would reflect both the impact of government sponsorship and the difference in interest rate risk between the debt securities. Alternatively, if the debt securities being compared had similar maturities and risks but differed by virtue of government sponsorship, the yield difference would reflect the impact of government sponsorship. For this reason, matched maturity debt provides the best measure of arbitrage, even though arbitrage is also present when nonmortgage investments are funded by enterprise debt with different maturities and risks. In addition to relying on yield differences between enterprise nonmortgage investments in corporate bonds and matched maturity enterprise debt used to finance such investments as a measure of arbitrage, another possible approach to measuring arbitrage is to use statistical techniques to adjust yield differences between corporate and enterprise debt for differences in maturity and risk. 
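The spread arithmetic behind this measure of arbitrage is straightforward. The sketch below is ours, not the report’s methodology; the function name and the $10 billion portfolio size are illustrative assumptions, although the 30- to 106-basis-point range is the funding advantage estimated in our previous report.

```python
def annual_arbitrage_profit(principal, funding_advantage_bps):
    """Annual arbitrage profit on nonmortgage investments financed with
    matched-maturity enterprise debt: the principal times the funding
    advantage from government sponsorship, expressed in basis points
    (1 basis point = 0.01 percentage point)."""
    return principal * funding_advantage_bps / 10_000

# Illustrative only: a hypothetical $10 billion portfolio at the
# estimated 30- to 106-basis-point funding advantage.
low = annual_arbitrage_profit(10_000_000_000, 30)    # $30 million per year
high = annual_arbitrage_profit(10_000_000_000, 106)  # $106 million per year
```

Because matched-maturity funding holds interest rate risk constant, the entire spread in this calculation is attributable to government sponsorship rather than to risk-taking.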
Such an approach was used in a study commissioned by the Department of Housing and Urban Development to evaluate the implications of severing government sponsorship of the housing enterprises on the enterprises’ debt costs. The study included analysis of yield differences between housing enterprise debt and debt issued by different groups of corporations (i.e., the benchmark groups) with A, AA, and AAA credit ratings. The benchmark groups were chosen on the assumption that the housing enterprises would have credit ratings in this range if their federal charters were revoked. The analysis was complex, because bond characteristics differ between bonds issued by the enterprises and other issuers. The study’s authors, Ambrose and Warga, recognized how difficult their research task was and qualified their results on the basis of the statistical complexities. We relied on the Ambrose and Warga study results for the period 1991 through 1994, in part, in making our estimate that the housing enterprises’ funding advantage from government sponsorship could be in the range of 30 to 106 basis points. Due to the statistical complexities of their analysis, we concluded that their estimates lacked precision. Therefore, we used this broad range for the funding advantage on debt. Recently, the Standard and Poor’s (S&P) credit rating agency assigned each housing enterprise a government risk rating, which is based on the probability that the federal government would be called upon in the event of an enterprise default on its obligations. Each enterprise received an AA minus rating. S&P indicated that the rating for each enterprise still accounted for some benefits, namely liquidity of enterprise obligations, due to government sponsorship. If the enterprises were privatized, S&P said that each would likely have to raise additional equity capital to maintain an AA minus rating. The housing enterprises fund their nonmortgage investments with noncallable debt. 
Generally, the Ambrose and Warga yield spread estimates were smaller for noncallable than for callable debt. On the basis of the S&P credit ratings and the way the enterprises fund their nonmortgage investments, we have concluded that the Ambrose and Warga estimated yield spreads based on the A and AA credit ratings for issuance of noncallable debt are most relevant for measuring arbitrage profits. For the 1991 through 1994 period, the range of estimated yield spreads on noncallable debt between the AA-rated companies and the housing enterprises was 39 to 46 basis points; when the A-rated companies were used as the benchmark, the range was 65 to 72 basis points. We continue to support the Ambrose and Warga estimates and our estimated range of 30 to 106 basis points for the funding advantage; findings from our analysis of housing enterprise financial data are consistent with these estimates. As indicated in the body of this report, we could not estimate the funding advantage resulting from government sponsorship for Farmer Mac. However, we indicated qualitatively how Farmer Mac’s funding advantage could compare to the funding advantage of the housing enterprises. The enterprises’ management and boards of directors have implemented compensation policies designed to attract and retain individuals from various disciplines who have the talent and motivation needed to accomplish the corporations’ objectives. The enterprises seek to closely link pay with performance and provide compensation that is reasonable and comparable with compensation for employment in other similar businesses involving similar duties and responsibilities. Through committees, the enterprises’ boards of directors develop policies and administer compensation programs that are meant to conform to the congressional mandate. Freddie Mac’s overall compensation package consists of both direct compensation (i.e., cash and stock incentives) and noncash employee benefit programs. 
A base salary, an annual cash bonus, and long-term stock incentives are included in the direct compensation package. Base salaries are determined primarily by position and individual skills and are targeted to match the median (i.e., 50th percentile) level of the market as determined by data obtained from comparator groups (e.g., companies identified as being in a similar line of business) and market surveys. Annual cash bonuses, which function as short-term incentives, are based on a combination of individual and corporate performance and increase as a percentage of base salary at successively higher levels of responsibility and accountability. Long-term stock incentives are awarded to officers and director-level employees (i.e., employees who report directly to officers or are senior-level technical and professional employees, but not members of Freddie Mac’s Board of Directors). For officers, long-term stock incentives are awarded as a percentage of base salary and increase as a percentage of base salary at successively higher levels of responsibility. Director-level employees are awarded long-term stock incentives as a percentage of the director-level salary grade midpoint. Freddie Mac’s long-term stock incentives include restricted stock, which is awarded to the corporation’s executive officers, and stock options. Examples of noncash employee benefits offered to all regular employees include an individually structured benefits package (i.e., health care, life insurance, etc.); a pension plan; and an employee stock purchase plan. Fannie Mae’s compensation package consists of a base salary, employee benefits, annual incentives, and long-term incentives. The salary is based on individual skills, experience, performance, etc.; benefits include such provisions as insurance coverage, vacation pay, sick leave, and retirement. 
Annual incentives reward employees for reaching specific objectives or completing projects that enhance the corporation’s success for that year, and long-term incentives generally reward executives for shareholder gains and the achievement of specific corporate objectives. Today, Farmer Mac’s salaries and other compensation components are based on surveys of pay structures at other enterprises and other financial institutions; however, this was not always the case. Although its salary compensation policies were generally competitive, Farmer Mac officials told us that other aspects of its compensation were not. In 1995, assisted by a compensation consultant, Farmer Mac recognized the need to revise its compensation policies to emphasize the creation of a greater management equity stake in Farmer Mac’s future. Consequently, the consultant helped Farmer Mac establish a baseline compensation package for its staff that now includes an annual salary; an annual bonus to reward current-year contributions to Farmer Mac’s success; and long-term compensation (stocks and options) to ensure that directors and senior managers hold an equity interest in the corporation, giving them an incentive to promote the long-term survival of Farmer Mac. Officers and employees are also provided certain benefits, such as health and life insurance and a pension plan. As of December 31, 1996, Farmer Mac had 21 employees. The proportion of Farmer Mac’s total compensation package representing incentive compensation for the 1995-96 plan year was 26 percent for the Chief Executive Officer and ranged between 13 percent and 19 percent for other senior management personnel. As recommended by a consultant, a portion of incentive compensation ranging from 67 percent to 88 percent represented stock grants and stock option awards. 
Incentive compensation was linked to the evaluation of each individual’s performance, based on standards that included professional competence, motivation, and effectiveness, as well as the individual’s contribution to the implementation of strategies designed to achieve the objectives set forth in Farmer Mac’s business plan for the 1995-96 plan year. The purpose of Farmer Mac’s stock option plans is to encourage stock ownership by directors, officers, and other key employees to provide an incentive for such individuals to expand and improve the business of the corporation and to assist Farmer Mac in attracting and retaining key personnel. As with the other enterprises, the use of stock options is an attempt to align the long-term interests of employees more closely with those of Farmer Mac’s stockholders by providing employees with the opportunity to acquire an equity interest in Farmer Mac. The following comments represent our response to specific comments made on a draft of this report on January 12, 1998, by the Chairman and Chief Executive Officer of Freddie Mac. 1. Freddie Mac said that we calculate nonmortgage assets as a percentage of on-balance sheet assets and that this exaggerates the size of nonmortgage investments in relation to Freddie Mac’s total mortgage-related activities. We calculate nonmortgage investments as a percentage of (1) on-balance sheet assets; (2) total mortgage servicing portfolio; and (3) capital (i.e., corporate equity). The second measure captures Freddie Mac’s total mortgage-related activities. The second measure is relevant if the sole purpose of the nonmortgage investments is, by definition, to support the total mortgage servicing portfolio. However, we provide the first and third measures because of their relevance in cases where the relationship of nonmortgage investments to mission is not clear. For example, both measures are relevant for comparisons among different corporations competing in the financial services industry. 
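The three measures described in our response to Freddie Mac’s first comment reduce to simple ratios. The sketch below is ours; the function name and the dollar amounts are hypothetical, supplied purely for illustration:

```python
def investment_measures(nonmortgage, on_balance_sheet_assets,
                        mortgage_servicing_portfolio, capital):
    """Nonmortgage investments expressed as a percentage of the report's
    three bases: (1) on-balance sheet assets, (2) the total mortgage
    servicing portfolio, and (3) capital (i.e., corporate equity)."""
    return {
        "pct_of_assets": 100 * nonmortgage / on_balance_sheet_assets,
        "pct_of_servicing_portfolio":
            100 * nonmortgage / mortgage_servicing_portfolio,
        "pct_of_capital": 100 * nonmortgage / capital,
    }

# Hypothetical figures in billions of dollars, for illustration only.
m = investment_measures(20, 200, 600, 8)
# The same dollar amount of nonmortgage investments looks small against
# the servicing portfolio but large against capital, which is why the
# report presents all three measures.
```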
2. Freddie Mac stated that its debt and MBS markets are not immune from disruption and cited the example, “. . . should secondary market investors begin to sell large amounts of long-term mortgage-backed securities, the ‘spreads’ on these securities against Treasury bond benchmarks would widen (. . . perhaps significantly).” Freddie Mac added that it could only make matters worse in this environment by issuing new MBS. Freddie Mac does not specify the economic conditions that would cause investors to sell large amounts of MBS so that the value of MBS relative to Treasury securities would fall, while at the same time there would be such an unexpected surge in mortgage demand that liquidation of core fund investments would be required. During the course of our assignment, we asked Freddie Mac to provide analyses indicating the economic conditions that would require sales of nonmortgage investments from the core portfolio to purchase mortgages. With respect to the examples Freddie Mac provided, in our report we analyzed alternative mechanisms available to Freddie Mac for funding unexpected mortgage demand. These alternatives have been successfully employed by the housing enterprises in the past to meet their charter responsibilities. (See pp. 26-27.) 3. Freddie Mac took issue with our definition of arbitrage, which is similar to the definition of an arbitrage bond in the U.S. tax code. It said: “In the municipal bond context, however, Congress disapproved what it viewed as the inappropriate conversion of federal Treasury securities into municipal bonds, the proceeds of which were never applied for any legitimate governmental purposes. 
Freddie Mac’s nonmortgage investments, in contrast, directly serve our statutory purposes of providing liquidity and stability to the mortgage market.” Here, we again note that the relationship between the various nonmortgage investments and mission achievement is less clear for the longer term than the shorter term nonmortgage investments. Therefore, we have concluded that the analogy to municipal bonds in defining arbitrage is appropriate. 4. Freddie Mac disagrees with our use of the range of 30 to 106 basis points to represent the housing enterprises’ funding advantage on debt. Freddie Mac’s comments and our response appear in the previous GAO report establishing this range. In its comment letter in this report, it cited its analysis of the yield spread on 5-year bullet debt between housing enterprise debt and debt of A and AA rated companies to illustrate Freddie Mac’s disagreement with our range. Freddie Mac also noted that the data upon which this range was established are from the 1991 to 1994 time period. We first became aware of the Freddie Mac analysis when it was submitted for the record at congressional hearings on July 31, 1996, after we completed our previous (i.e., privatization) study identifying the 30- to 106-basis-point range. Although we have not thoroughly analyzed the methodology (including its emphasis on 5-year debt rather than debt issues with longer maturities) or data relied upon in the Freddie Mac analysis, we note that the findings on yield spreads from the Freddie Mac analysis are in the vicinity of the bottom of our range (see reference to the 36-basis-point average cited by Freddie Mac). In addition, our findings from our analysis of housing enterprise financial data during this assignment are consistent with the estimated funding advantage range we established in our previous report and rely upon in this report. 
The housing enterprise financial data and the Standard and Poor’s credit ratings we relied upon for our analysis during the course of this assignment are more recent than the data relied upon in the Freddie Mac analysis. 5. Freddie Mac disagreed with our “draft report’s estimates of potential arbitrage profits based on an assumed portfolio that is nearly 17 times larger than Freddie Mac’s current longer-term nonmortgage investments.” We provide this estimate as an example of how much a $10 billion fund, the authorized level for Freddie Mac’s core fund, can generate in arbitrage profits. We also make reference to the $2 billion forecasted level for year-end 1997 holdings in the core fund. The following comments represent our response to specific comments made on a draft of this report on January 12, 1998, by the Vice President for Regulatory Activities at Fannie Mae. 1. Fannie Mae stated that the yield on the longer term investments is less than that of mortgage investments. This simple yield comparison does not take into account the risks, in particular interest rate and prepayment risks, that accompany residential mortgage investments. Our report states that government sponsorship of the housing enterprises lowers yields on single-family, fixed-rate, conforming mortgages. This mechanism is not present for nonmortgage investments. Therefore, we conclude that on a risk-adjusted basis, some nonmortgage investments are more profitable than mortgage investments. 2. Fannie Mae said that we should amplify “the degree to which the corporation incents and holds management accountable for meeting mission obligations.” We did not review contracts of individual managers and members of Fannie Mae’s Board. Because of the proprietary nature of the information, we could not have provided concrete examples to make such an amplification. 
The following comments represent our response to specific comments made on a draft of this report on January 12, 1998, by the President and Chief Executive Officer of Farmer Mac. 1. Farmer Mac stated that long-term investments with short-term interest rate resets are generally considered to have short-term liquidity. We have concluded from our analysis that Farmer Mac’s longer term, variable-rate, nonagricultural-mortgage investments are subject to greater market value fluctuations with changes in interest rates than short-term investments. As a result, they are less useful in facilitating liquidity in the agricultural mortgage market than short-term investments. 2. Farmer Mac suggested that a reasonable time frame for reaching the final stage of its debt issuance strategy could be 2 to 3 years following adoption of the strategy. As far as we know, this is the first time a time frame has been suggested as a reasonable period for the debt issuance policy to contribute to mission. 3. Farmer Mac took issue with our observation that its debt issuance strategy, which is intended to lower the cost of funds used to purchase agricultural mortgages and issue AMBS, appears to contradict, at least in part, our observation that Farmer Mac has not offered higher prices for agricultural mortgages. Farmer Mac stated that the agricultural mortgage origination market is currently very inefficient and that it is therefore directing funds made available by the debt issuance strategy toward expanded efforts to recruit new mortgage originators. We revised our discussion (see page 29) and, rather than referring to a possible contradiction, we now directly relate these observations to the uncertainty associated with the effectiveness of Farmer Mac’s debt issuance strategy on mission achievement. 4. 
Farmer Mac said that it agrees with our recommendation to the FCA Board to report on the relationship of Farmer Mac’s debt issuance strategy to the achievement of Farmer Mac’s mission. Its letter added that FCA already monitors Farmer Mac’s investment activity. We note, as indicated in our report, that FCA’s monitoring of Farmer Mac’s investment activity has focused on matters of safety and soundness. Our recommendation is specific to FCA’s mission oversight responsibilities. 5. Farmer Mac said that for the quarter ended June 30, 1997, Farmer Mac’s net income from nonprogram investments represented about 38 percent of total net income. In response, we made revisions (see pp. 5 and 31) and now state that nonagricultural-mortgage investments are a primary source of income rather than the principal source or majority of income. The following comment represents our response to a specific comment made on a draft of this report on January 16, 1998, by the Assistant Secretary for Housing-Federal Housing Commissioner at HUD. 1. HUD took exception to our characterization of the expertise issue in HUD’s approval of MPP. In particular, HUD’s comment letter stated that some statements in our draft report appear to be based on misunderstandings of statements made by HUD staff. HUD stated that HUD did not possess detailed knowledge of the intricacies of the life insurance industry at the time MPP was submitted for review. However, HUD also stated that it concluded it was unnecessary to provide MPP materials to Treasury because HUD had obtained sufficient information and analysis to complete its work. HUD said that this conclusion, rather than a HUD determination that it could not share the contents of the MPP proposal due to a potential conflict of interest within Treasury, formed the basis for not providing MPP materials to Treasury. We made revisions to our report (see pages 13-15) and state HUD’s position as described in its comment letter. 
The following comment represents our response to a specific comment made on a draft of this report on January 6, 1998, by the Chairman and Chief Executive Officer of the FCA Board.

1. FCA stated that the discussion on pages 9 through 13 of our draft report indicated a misunderstanding of its past and current thinking and position, namely that FCA has clear safety and soundness authority and has concluded that no safety and soundness concerns exist at Farmer Mac. These FCA positions had been expressed in a previous section of the draft report dealing with safety and soundness oversight. The section FCA discusses relates to FCA's general regulatory authority. In response to FCA's comment, we added some clarifying language to our discussion on FCA's general regulatory authority.

Paul G. Thompson, Senior Attorney
Pursuant to a congressional request, GAO reviewed the nonmortgage investment activities at 3 government-sponsored enterprises (GSEs)--the Federal Home Loan Mortgage Corporation (Freddie Mac), the Federal National Mortgage Association (Fannie Mae), and the Federal Agricultural Mortgage Corporation (Farmer Mac)--focusing on the: (1) enterprises' legal authority for making nonmortgage investments and federal regulatory oversight of that activity; (2) relationship between nonmortgage investment policies and practices and missions of the enterprises; and (3) extent to which the enterprises have undertaken nonmortgage investments for arbitrage profits--using the funding advantage from government sponsorship to purchase nonmortgage investments that generate profits. GAO noted that: (1) legally, the enterprises have broad investment authority; (2) to date, regulatory oversight activities for the three enterprises have focused on whether nonmortgage investments are safe and sound and not on whether the nonmortgage investment policies and practices are mission-related; (3) the Department of Housing and Urban Development (HUD) has not developed criteria to determine if nonmortgage investments are consistent with enterprise charter purposes; (4) in October 1997, the Farm Credit Administration (FCA) indicated that it did not have concerns that Farmer Mac's nonmortgage investment activity is inconsistent with its charter mission, but FCA also stated that the debt issuance strategy associated with the investments is intended to be temporary and to develop over a reasonable period of time; (5) therefore, according to FCA, its position could change if over time evidence does not show that such investments play a role in helping Farmer Mac achieve its mission; (6) enterprises have invested in nonmortgage assets to varying degrees with somewhat different rationales for how these investments further their charter purposes; (7) each enterprise has an investment policy that specifies 
permissible credit ratings, maturities, and concentration limits and describes the relationship of investments to earnings and to achievement of the enterprise's mission; (8) Freddie Mac officials indicated that its nonmortgage investments have been held for cash management purposes and as an investment vehicle, which could make capital available to help fund future anticipated demand for residential mortgages; (9) the relationship between longer term nonmortgage investments and the enterprises' mission goals is not always clear, because long-term nonmortgage investments may not facilitate liquidity in the residential mortgage market as well as short-term investments; (10) however, it is clear that nonmortgage investments generate arbitrage profits; (11) in its analysis, GAO found that the various nonmortgage investments fall along a continuum representing the degree to which they facilitate liquidity in the residential mortgage market and thus are more clearly related to the enterprises' missions; and (12) GAO's review of compensation practices and board member responsibilities at the enterprises suggests that individual incentives to generate corporate profits are structured in a manner that is fairly typical of major corporations and financial institutions without federal charters limiting their activities.
Figure 1 shows DHS’s homeland security and information-sharing visions, missions, and goal. I&A is the lead DHS component with responsibilities for sharing terrorism-related information with all levels of government and the private sector. I&A performs a variety of functions related to information sharing, including gathering customer information needs, developing and distributing intelligence reports, and gathering customer feedback on the information I&A provides. I&A, along with the Office of the CIO, also has a key role in the overall governance structure DHS has created to manage information sharing throughout the department, which is discussed more fully later in this report. I&A is headed by the Under Secretary for Intelligence and Analysis who has responsibilities for, among other things, providing homeland security intelligence and information to the Secretary of Homeland Security; other federal officials and agencies, such as members of the intelligence community; Members of Congress; departmental component agencies; and the department’s state, local, tribal, territorial, and private sector partners, such as fusion centers. In addition to I&A, multiple other DHS components—such as ICE, U.S. Customs and Border Protection (CBP), and the Transportation Security Administration (TSA)—share information within and outside DHS on threats more specific to their mission areas, such as travel information. Among other things, these agencies develop and distribute intelligence reports about these areas to customers, such as the intelligence community. DHS is one of five key agencies responsible for establishing the ISE.
Section 1016 of the Intelligence Reform Act, as amended by the 9/11 Commission Act, requires the President to take action to facilitate the sharing of terrorism-related information through the creation of the ISE. In April 2005, the President designated a Program Manager—within the Office of the Director of National Intelligence—to, among other things, plan for, oversee implementation of, and manage the ISE. In July 2011, we recommended that in defining a road map for the ISE, the Program Manager ensure that relevant initiatives individual agencies were implementing are leveraged across the government, among other things. The Program Manager generally agreed with our recommendations and has actions under way to address them. DHS noted that the department remained committed to continuing its work with the Program Manager and relevant stakeholders to further define and implement a fully functioning ISE. DHS’s Office of the CIO is responsible for the department’s information technology management and is developing the department’s enterprise architecture (EA), which is designed to establish an agencywide road map to achieve its mission. An EA can be viewed as a reference or “blueprint” for guiding an organization’s transition to its future environment that includes maximizing information sharing within and across organizational boundaries. Along with I&A, the Office of the CIO is responsible for overseeing this transition. In October 2007, the President issued the National Strategy for Information Sharing, which identifies the federal government’s information-sharing responsibilities. The strategy calls for authorities at all levels of government to work together to obtain a common understanding of the information needed to prevent, deter, and respond to terrorist attacks.
On the basis of the National Strategy, DHS developed a strategy in 2008 to direct the department’s information-sharing efforts and is drafting a Fiscal Year 2012-2017 DHS Information Sharing and Safeguarding Strategy. DHS plans to finalize and release this new strategy after the Executive Office of the President issues a new National Strategy for Information Sharing and Safeguarding, and DHS then plans to release an implementation plan that is to describe in more detail how the department will implement its strategy along with related milestones. DHS’s new strategy is intended to update the 2008 strategy to reflect the department’s growing and increasingly complex mission and include information safeguarding—in response to the release of classified and diplomatic documents by the website WikiLeaks in 2010—as well as information sharing. DHS has established a decision-making body—the Information Sharing and Safeguarding Governance Board (the board)—that demonstrates senior executive-level commitment to improving information sharing. The board has identified information-sharing gaps and developed a list of key initiatives to help address those gaps, but additional steps could help DHS sustain these efforts. Board and department attention has helped achieve progress on many of the key initiatives, but funding challenges have slowed some efforts. DHS has also made progress in developing and implementing DHS’s Information Sharing Segment Architecture, but has not yet fully developed this architecture. The board plans to update the DHS Information Sharing Strategy and develop a related implementation plan, which will be important in managing information-sharing efforts. As of early September 2012, the new National Strategy for Information Sharing and Safeguarding had not been released. The Information Sharing and Safeguarding Governance Board serves as DHS’s senior executive-level decision-making body for information-sharing issues.
According to the board’s charter, DHS established the board in 2007 to serve as the “arbiter of data access denials or delays that cannot be resolved at the component level” and to work with DHS operational components to monitor their information management processes and ensure respect for legal protections. In the aftermath of the release of classified and diplomatic documents by the website WikiLeaks, in 2011 DHS revised the board’s charter to reflect its responsibility to govern both information sharing and safeguarding and expanded the board’s membership to incorporate components with information-safeguarding responsibilities. The board includes senior executive-level representation from almost every DHS component, as shown in figure 2. The Under Secretary for Intelligence and Analysis serves as the board’s chair. According to DHS officials, as the DHS representative to the interagency policy committee for information sharing, the Under Secretary brings knowledge of governmentwide information-sharing efforts. The DHS Chief Information Officer serves as the board’s vice chair, also bringing knowledge as the authority over DHS’s technology-related information-sharing projects. Board minutes show that senior-level officials attend the board’s quarterly meetings, demonstrating DHS leadership commitment to the board’s work. The board is responsible for approving the department’s information-sharing and -safeguarding vision and strategy, establishing information-sharing goals and priorities, and overseeing implementation across DHS components. According to DHS officials, the board periodically reports its results to the Secretary of Homeland Security. The Information Sharing Coordinating Council serves as an advisory body to the board and supports it by recommending policies and procedures for information sharing, preparing for board meetings, and helping to track information-sharing initiatives. The board has advanced DHS information sharing in several ways.
First, the board has raised visibility—that is, has increased awareness of—information-sharing initiatives. Both the Office of the CIO and ICE officials noted that visibility improves stakeholder coordination across initiatives and facilitates access to high-level officials who can help initiatives overcome roadblocks. For example, ICE officials said that the board has increased the visibility of LEISI—DHS’s main initiative for sharing law enforcement information with state and local partners—and that other DHS components are now more likely to coordinate with LEISI in their law enforcement information-sharing activities. An official from the Office of the CIO also noted that the board provides information-sharing initiatives with organizational support at higher levels across DHS, which can remove roadblocks within or across components. For example, the official noted that one information-sharing initiative—the Homeland Security Information Network (HSIN), which DHS uses to share information with federal, state, and local partners—cannot succeed without this visibility and now has better stakeholder coordination than ever before. Second, according to DHS officials, the board has helped to reduce redundancies across DHS components. For example, through board activities, members recognized that DHS components were independently developing over 20 systems to collect, share, and display the information that components and other stakeholders need to plan for and respond to threats and hazards, known as Common Operating Picture systems. The board worked with the components involved to examine each component’s Common Operating Picture systems and identify opportunities for cooperation, thereby reducing redundancies and saving funds.
In February 2012, the board also established an Information Sharing Environment Coordination Activity—with staff from I&A and the Office of the CIO—to facilitate decision making related to DHS’s Information Sharing Segment Architecture transition plans. The group’s responsibilities include developing recommendations and advising on policy development, resource allocation, acquisition management, and program management processes. In addition, the group is responsible for assessing whether departmental investments in new and existing technology programs include critical information-sharing capabilities, and whether investments present opportunities to deploy capabilities as enterprise services, such as computer-to-computer mechanisms to deliver information between systems. We discuss this group’s role in several information-sharing efforts later in this report. Because the group is relatively new, it was too early for us to determine its impact. DHS’s actions to establish an information-sharing governance structure and related activities demonstrate DHS leadership’s commitment to improving information sharing. DHS has identified a list of initiatives that it determined are key to advancing information sharing within the department and with its customers, which DHS refers to as its Information Sharing Roadmap. According to DHS officials, to develop this list, the board hosted a series of meetings from April 2010 through December 2011 with relevant components in each of its five mission areas. According to I&A officials, these meetings included in-depth conversations with DHS and component executives about their information-sharing activities and gaps, and presentations from subject matter experts on these issues. The board selected a list of 22 initiatives that it determined represented DHS’s greatest opportunities to improve information sharing. 
Some of these initiatives were information-sharing programs that components were already implementing as part of their mission activities, while others were new projects designed to address specific information-sharing gaps. According to DHS officials, the process of identifying departmental information-sharing gaps evolved progressively over the course of 2 years as the board continually sought to improve its methods. According to DHS officials, in July 2011, the board’s chair requested that board members prioritize the list of initiatives and select a smaller and more manageable list to receive additional support in the DHS budget process. Using a weighted scoring and voting system, each board member selected 5 top-priority initiatives based on four criteria: cross-departmental impact, linkage to mission areas, enterprisewide information-sharing enabler, and level of DHS component support. After compiling these rankings across members, the board determined that 8 initiatives were clustered near the top of the list and established these initiatives as its priority efforts, as shown in table 1. To improve the process it used to select the priority initiatives, DHS formed the Criteria Working Group in 2011. According to the working group’s briefing materials, the group developed new criteria for selecting priority initiatives—such as mission criticality and feasibility—that it will use in future prioritization efforts and a new process for integrating component input on which initiatives to choose as priorities. According to DHS officials, the board also recognized the need to periodically add and remove initiatives from the broader list of key information-sharing initiatives and developed and documented processes to do so. Therefore, in December 2011, the board elected to begin reviewing the list of initiatives on a semiannual basis, evaluating the initiatives for continued relevancy and considering newly emerging requirements.
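The weighted scoring and voting step described above can be sketched in a few lines. The four criterion names come from the report; the weights, per-initiative scores, and tallying details below are illustrative assumptions, not DHS's actual method or data.

```python
from collections import Counter

# The four criteria named in the report; weights and scores are hypothetical.
CRITERIA = [
    "cross-departmental impact",
    "linkage to mission areas",
    "enterprisewide information-sharing enabler",
    "level of DHS component support",
]

def member_top_picks(scores: dict[str, dict[str, int]],
                     weights: dict[str, int], k: int = 5) -> list[str]:
    """Return one board member's k top-priority initiatives by weighted score."""
    totals = {
        initiative: sum(weights[c] * s.get(c, 0) for c in CRITERIA)
        for initiative, s in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)[:k]

def tally(all_picks: list[list[str]], top_n: int = 8) -> list[str]:
    """Compile every member's picks and return the top_n initiatives overall."""
    votes = Counter(pick for picks in all_picks for pick in picks)
    return [name for name, _ in votes.most_common(top_n)]
```

With each member contributing 5 picks, compiling the rankings reduces to counting how often each initiative appears and keeping the 8 most frequent, mirroring the clustering the board observed near the top of its list.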
According to I&A officials, the board could remove an initiative from the list because (1) the initiative has “graduated”—that is, it has achieved all of its information-sharing goals—or (2) the initiative has languished because components have not provided needed funding or DHS did not have a lead component to manage the initiative. These latter initiatives would be removed from the list and set aside for potential reevaluation if a component agrees to lead the initiative at a later date. In May 2012, DHS issued the Information Sharing and Safeguarding Roadmap Implementation Guide to document and describe goals and elements of the list of key initiatives and provide guidance for development, management, and oversight of the list. In 2012, the board added 5 new initiatives to the list to reflect the board’s new emphasis on information safeguarding, consolidated 3 initiatives into a single initiative, split 1 initiative into 2 separate initiatives, and removed 3 from the list because, according to I&A officials, they were better handled by other entities and no longer required board involvement. As of September 2012, DHS had 18 key information-sharing initiatives and 5 safeguarding initiatives on its list of key initiatives. DHS’s efforts to identify information-sharing gaps and select initiatives to address them have advanced information-sharing efforts, but additional steps could help DHS sustain these efforts. First, DHS has not documented its process for identifying information-sharing gaps in each of its mission areas or the list of gaps it identified. Documenting this process and its results could help DHS replicate and sustain this process in the future. Federal internal control standards require agencies to clearly document significant activities. DHS officials noted the board did not document the process because its efforts were in early stages and the process was revised as the board learned from experience.
Processes for selecting key information-sharing initiatives are documented in DHS’s Roadmap Implementation Guide—the department’s policies and procedures for managing key initiatives. However, because DHS’s assessment of gaps drives the selection of key information-sharing initiatives, documenting the process for identifying gaps and the results of that process in the Roadmap Implementation Guide or other related policies and procedures would provide DHS with an institutional record to better replicate, and therefore sustain, a key step in its efforts to improve information sharing. (DHS’s five safeguarding priorities in the 2012 list of initiatives are Address the Insider Threat, Improve Access Control, Improve Enterprise Audit, Reduce Removable Media Use, and Reduce User Anonymity.) Second, DHS did not analyze the root causes of information-sharing gaps to ensure that its key initiatives target the correct problems. According to DHS officials, DHS did not do this because the root causes of DHS’s information-sharing problems—such as challenges in incorporating diverse agencies into a single department—are well known and have been discussed at high levels within the executive branch, in the context of the formation of DHS, in the 9/11 Commission Report, and through subsequently enacted laws. These broad, overarching issues help inform DHS’s efforts to improve information sharing, but documenting and implementing a process for analyzing the specific causes of DHS’s information-sharing gaps within each mission area would help DHS ensure that it invests in the correct information-sharing solutions. For example, diagnosing whether specific gaps are caused by DHS’s own funding decisions and constraints, by its organizational structure, or by technological limitations would allow DHS to better choose appropriate solutions.
Furthermore, our work on high-risk programs has shown that analyzing root causes of program gaps or limitations can help in designing effective solutions to reduce risks. The initiatives DHS removed from its list in 2012 were, according to I&A officials, managed by another entity within the department, which mitigated the risk of removal. Nevertheless, as we describe in the following section, funding and other constraints may require DHS to remove items from the list in the future, and establishing and documenting processes for potential future use could help guide these decisions. DHS officials stated that such processes could improve information-sharing efforts. By establishing and documenting processes for identifying and assessing the risks of removing an incomplete initiative from its list and working to mitigate that risk, DHS could be better positioned to identify the effects that removal may have on its information-sharing efforts and sustain these efforts. Since DHS developed its list of key information-sharing initiatives, many of those initiatives have proceeded and met interim program milestones. As of June 2012, 15 of the 18 key information-sharing initiatives met at least one interim milestone, and DHS fully completed 1 initiative—developing a training course designed to improve and increase sharing of terrorism information by promoting a culture of awareness. However, as shown in figure 3, progress has slowed or stopped for 10 of the 18 key information-sharing initiatives presented to the board in June 2012. Funding constraints are a primary reason why progress has slowed or stopped for some initiatives. For example, among the 8 priority information-sharing initiatives, 5 faced risks as of June 2012 because of lack of funding, and DHS has had to delay or scale back at least 4 of them.
More specifically, according to ICE documents, LEISI has met milestones related to several activities—including developing a strategic plan, implementing a performance metrics tracking system, and expanding information sharing with federal, state, and local partners—but inadequate funding threatens the ability of ICE to further expand the LEISI user base and share additional data, among other things. Also, DHS’s top information-sharing priority (CHISE)—an initiative to develop an integrated, searchable index to consolidate and streamline access to intelligence, law enforcement, and other information across DHS—has not been fully funded, but efforts to explore possible funding options continue, according to DHS officials. The officials noted that CHISE is intended to streamline access to terrorism-related information and help analysts synthesize this information. The officials added that until CHISE is developed, analysts will continue to separately access numerous data sets from across the department, which requires a larger number of analysts, is more time consuming, and may result in missing connections among data in different data sets. According to I&A officials, for the fiscal year 2012 budget, the board made a concerted effort to advocate for additional funding to support priority initiatives and emphasize information sharing during the DHS planning and budgeting process. According to I&A officials, the board was not able to obtain increased funding for the initiatives but plans to continue its efforts. The officials noted that the board does not have budget authority within the department, and therefore does not have the authority or resources to fund the priority initiatives. They explained that under the DHS budget process, the initiatives are considered integral to, and not separate from, an agency’s fundamental mission activities and are funded through the DHS components responsible for each initiative. 
Thus, according to the officials, in a constrained budget environment, components are faced with difficult decisions in deciding whether to fund mission activities or information-sharing activities. However, DHS officials stated that the board’s involvement has kept some of these initiatives from experiencing funding cuts. In addition, as we reported in July 2012, the board serves as the portfolio governance board for information sharing, which provides guidance and investment recommendations for future year planning, programming, and budgeting. According to DHS officials, as the department’s information technology governance process matures, the board will have a more formal role and processes for affecting funding decisions. Moving forward, DHS plans to collect and publish data on the annual and long-term funding the department budgets and spends on its information-sharing and -safeguarding programs and activities. According to DHS officials, the department’s ability to generate reliable cost estimates for these sharing and safeguarding programs and activities will lower the risk to the public and minimize overruns, missed deadlines, and performance shortfalls. The officials added that cost estimates will also allow decision makers to prioritize future investments and demonstrate a continued commitment to support the capability and capacity of DHS components to share and safeguard information. These cost estimates could also allow us to determine the extent to which DHS has the capacity to implement its plans. We will continue to monitor DHS’s implementation of these plans and its ability to address funding shortfalls for key initiatives, particularly in a challenging budget environment. DHS has developed architecture guidance to support the implementation of its target DHS information sharing environment. Specifically, in May 2009, DHS published version 2.1 of its Information Sharing Segment Architecture.
In July 2011, we reported that the Segment Architecture did not include key architecture content, such as a transition plan for moving to the target DHS information sharing environment and a conceptual solution architecture that provides an integrated view of proposed systems and services. In response, DHS has made important progress in addressing the missing architecture content. For example, in January 2012, DHS updated its Segment Architecture to include a transition plan that provides a conceptual road map to implement the key capabilities needed to achieve the target DHS information sharing environment. DHS has also taken actions to identify and define its key business and information requirements, an initial important step in building an effective architecture to determine technology solutions it will need to achieve its information-sharing goals. According to guidance issued by the Program Manager for the ISE, agencies should create an inventory of assets to effectively share terrorism-related information. According to the executive director of the DHS Information Sharing Environment Office, DHS has completed an inventory of the data assets (e.g., databases containing terrorism-related information) that each of the components across the department owns, such as border-crossing records. More specifically, DHS has cataloged more than 800 data assets across the department and identified the basic information available in each asset. Also according to the executive director, the Information Sharing Environment Coordination Activity will then determine with what other stakeholders DHS needs to share these data assets. DHS has determined that 80 of the data assets contain information with potential value in counterterrorism efforts. Of those 80, DHS identified the top 20 most valuable data assets and included them in the CHISE initiative, which is to organize these data assets into searchable indices to facilitate fast information retrieval.
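The CHISE goal of organizing data assets into searchable indices can be illustrated with a minimal inverted index over asset metadata. This is a conceptual sketch only: the asset names and keywords below are hypothetical, and the report does not describe CHISE's actual indexing design.

```python
from collections import defaultdict

def build_index(assets: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map each metadata keyword to the set of data assets that mention it."""
    index: dict[str, set[str]] = defaultdict(set)
    for asset, keywords in assets.items():
        for kw in keywords:
            index[kw.lower()].add(asset)
    return index

def search(index: dict[str, set[str]], *terms: str) -> set[str]:
    """Return the assets whose metadata contains every query term."""
    matches = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*matches) if matches else set()
```

An index of this kind lets an analyst locate every asset relevant to a query in one lookup, rather than separately accessing each of the department's data sets, which is the inefficiency the report says CHISE is intended to address.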
Since 2008, we have reported on the importance of agencies taking an inventory of what information they own as the first step to then determining who needs to have this information and how agencies will share it with key partners. DHS’s inventory efforts should help it to more systematically determine where it has gaps in sharing or additional opportunities to use the information it owns to protect the homeland. DHS has also developed a conceptual solution architecture, which, according to the guidance issued by the Program Manager for the ISE, is to provide an integrated view of the combined systems, services, and technology for the target ISE, as well as the interfaces between them. This conceptual solution architecture provides an integrated view of systems, such as Homeland Secure Data Network, and services, such as Enterprise Service Bus message services, which allow information to flow among disparate applications across multiple hardware and software platforms. This is important since it defines specific technology resources for implementing DHS’s information sharing environment. In addition, DHS officials stated the department is using its shared space to share terrorism-related information with other agencies. For example, the officials stated DHS plans to use its Suspicious Activity Reporting (SAR) shared space to share SAR data with the Department of Justice. The ISE Terrorist Watchlist mission business process is a component of the identification and screening mission process and encompasses the receiving and sharing of reported information and the nomination, export, screening, encounter, redress, and updates to the Terrorist Screening Database. The FBI’s Terrorist Screening Center maintains this database of known or suspected terrorists, which is used during security-related and other screening processes. However, DHS cannot fully align its data assets with all ISE mission business processes until the Program Manager for the ISE issues a national-level standard that describes business context and information exchanges for AWN.
According to the Deputy Program Manager for the ISE, the Office of the Program Manager plans to work with DHS and other agencies on the development of standard information exchanges for AWN in fiscal year 2013. The alignment of DHS data assets with the ISE mission business processes is important because it would support better discovery and sharing of relevant terrorism-related information. In addition, while DHS has developed a conceptual solution architecture, it has not yet determined how well its current systems and technology environment support target business and information requirements. According to guidance from the Program Manager for the ISE, ISE agencies should assess the systems and technology environment for alignment with business and information requirements. According to DHS officials, from April through July 2012, the DHS Information Sharing Environment Coordination Activity conducted an initial baseline assessment of major programs to determine whether current systems and technologies can satisfy target architecture requirements, such as business and data requirements. Also according to DHS, it will review other segment architectures (e.g., screening) being developed to assess alignment with information-sharing capabilities described in the information-sharing architecture. By taking these actions, DHS could achieve cost avoidance and cost savings in implementing the DHS information sharing environment. DHS’s activities to assess gaps, select initiatives, and ensure that information-sharing programs have the capabilities needed to promote sharing are in the early development and implementation phases. As a result, DHS is taking steps to institutionalize some of its policies and practices, including developing key strategies and plans, that will be important in planning and managing its information-sharing efforts. 
In our September 2010 letter to DHS, we stated that DHS should develop a strategy and commensurate plans to achieve its information-sharing mission, among other things. According to DHS officials, the department is taking steps to update and develop other strategies and related plans, in addition to its list of key information-sharing initiatives, that could address steps we have identified for DHS to take in information sharing. For example, as discussed earlier in this report, DHS is working to update the DHS Information Sharing Strategy, in part to be consistent with governmentwide efforts to update the related National Strategy. DHS officials stated that they expect to issue the updated strategy after the National Strategy is released, although the date of this latter action is uncertain. In deliberating on the updates, DHS is working to ensure that the DHS strategy outlines its information-sharing vision and mission and addresses important components, such as goals and objectives on sharing and safeguarding information, methods it plans to use to achieve key outcomes as well as manage any potential risks, and steps it plans to take to ensure efforts receive the resources they need. Within 90 days of releasing its strategy, DHS plans to release an Information Sharing and Safeguarding Implementation Plan that is to describe in more detail how DHS will implement the strategy and include related milestones for the efforts described in the plan. We will continue to monitor implementation of these strategies and plans for taking corrective actions to improve information sharing. DHS is tracking the progress that key information-sharing initiatives are making toward interim milestones, but the department generally does not track when the initiatives will be completed, which would allow it to make course corrections if completion dates are delayed, or assess what impact the initiatives are having on achieving needed sharing.
DHS also has taken several steps to implement the information-sharing capabilities it needs to share information but has not yet defined the level of capabilities that initiatives and other programs must have in place to help it achieve the department’s information-sharing vision. Customer feedback can help assess information sharing by indicating how useful customers find the products DHS disseminates; DHS has taken steps to survey its customers to determine their satisfaction as well as assess their needs. DHS has not yet developed measures that determine the impact of sharing on its homeland security efforts, but plans to develop more meaningful ways to assess information-sharing results and progress toward achieving its vision. Our work has shown that being able to track the progress of initiatives that address program barriers as well as assess the effectiveness of initiatives, or the results they achieve, can help agencies minimize the risks in key programs such as information sharing. DHS is tracking the implementation progress of key information-sharing initiatives, but the department does not track how close the initiatives are to completion and could better assess how the initiatives are improving information sharing or helping DHS achieve its 2015 vision, which includes ensuring that the right information gets to the right people at the right time. DHS has developed a tool to track implementation of key information-sharing initiatives, referred to as Roadmap Quad Charts, but it does not include information on how close the initiatives are to completion. According to I&A documents, the purpose of the charts is to report an initiative’s implementation progress to the board. Components are responsible for providing the information tracked in the charts and submitting monthly updates to I&A. The tool contains an overall health indicator, key milestones, risks, and other data, as shown in figure 4.
The left quadrants of the chart define interim activities and milestones, and track progress toward both. Components categorize the health of each initiative as having no impediments (green), or that its progress has slowed (yellow) or stopped (red). The right quadrants contain narrative information, including issues facing the initiative—such as inadequate funding or technological or legal difficulties encountered—and risks to progress, such as the impact of an initiative’s inability to meet time frames. The board reviews the Quad Charts on a quarterly basis to track and oversee progress, and can question components on the initiatives and the status of milestones. For example, one initiative (Common Operating Picture/User-Defined Operating Picture Integrated Project Team) experienced challenges in setting milestones, which was reflected in its chart. Subsequently, the board pushed the relevant components to set more aggressive milestones, and, as a result, DHS expects to begin transitioning components from over 20 different common operating pictures to about 5 common operating pictures in March 2013, which, according to DHS officials, is earlier than would have been possible without the board’s involvement. When the transition has been completed, DHS will have streamlined the applications that collect, share, and display the information components need to plan for and respond to threats and hazards, which will increase efficiencies, according to DHS officials. The Quad Charts track progress that initiatives are making toward interim activities and milestones, but do not include information regarding completion dates or what difference the initiatives are making in improving information sharing. 
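As a rough illustration of the tracking gap described above, a Quad Chart-style status record can carry a health indicator, interim milestones, and risks while still lacking a completion date. The field names and sample values below are hypothetical, not DHS’s actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the status data a Quad Chart captures: an overall
# health indicator, interim milestones, and narrative risks.
@dataclass
class InitiativeStatus:
    name: str
    health: str  # "green" (no impediments), "yellow" (slowed), "red" (stopped)
    interim_milestones: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)
    completion_date: Optional[str] = None  # often absent, per the report

status = InitiativeStatus(
    name="Example information-sharing initiative",
    health="yellow",
    interim_milestones=["Define transition milestones", "Begin component rollout"],
    risks=["Inability to meet time frames"],
)

# Without a completion date, overall progress toward completion cannot be judged.
print(status.completion_date is None)  # True
```

The point of the sketch is that interim milestones and a color-coded health field answer “is this initiative moving?” but not “how far is it from done?”, which is the gap the report identifies.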
For example, a LEISI program official said that LEISI identifies milestones for the charts that can be accomplished each year, but the LEISI chart does not show how much closer that year’s targets will advance the initiative toward completing its information-sharing functions. Including completion dates in the charts could help the board better understand the overall progress initiatives have made, make more informed decisions on which initiatives it will advocate should receive additional funding, and generally provide better oversight by holding components accountable for these completion dates. In addition, the charts do not provide information on how effective initiatives have been. For example, the charts do not provide a sense of any improvements in how ICE shares law enforcement information with key stakeholders as a result of implementing LEISI, such as how many more data sets are available to share or the increase in the number of users with access to these data sets. Including such information in the Quad Chart could help the board assess how initiatives improve DHS information sharing, including the impact of any risks identified in the chart. According to DHS officials, the lower left quadrant of the chart is intended to show longer-term activities and milestones leading toward completion, but our review shows that 15 of the 18 initiatives did not have completion milestones as of June 2012. DHS officials stated that it will not be possible to identify completion dates for some initiatives, such as for CHISE, because they are in the early planning stages and responsible components cannot yet estimate their completion. Moreover, other initiatives, such as the Nationwide Suspicious Activity Reporting Initiative, are secretarial priorities that DHS will not remove from the list of key initiatives because they are ongoing initiatives with no date for completion.
DHS officials recognize they need to better track the progress of key initiatives and assess how they affect sharing with customers, but related efforts are just beginning, and DHS did not have further details on what changes it will make. Program management practices note the importance of establishing a timeline for program milestones and deliverables, including when a program is complete, which helps lay the groundwork for the program and position it for successful execution. These practices also note that it is important to track intermediate and final results of a program as well as the benefits a program delivers, which helps ensure the organization will realize and sustain the benefits from its investment. We recognize that completion dates cannot be provided in each case. However, determining and documenting initiative completion dates and assessing how initiatives affect sharing, where feasible, would help the board better track progress in implementing the initiatives, make any necessary course corrections if completion dates are delayed, and demonstrate how initiatives enhance information sharing and homeland security. In addition to identifying and tracking key information-sharing initiatives it needs to implement, DHS has also taken several steps to assess the capabilities that programs need so that key partners can access and share information the department owns. First, DHS has begun to assess the extent to which its technology programs have implemented critical information-sharing capabilities. DHS officials stated that from April through July 2012, the Information Sharing Environment Coordination Activity conducted initial baseline assessments of approximately 160 technology programs, systems, and initiatives—which include the key information-sharing initiatives—to determine the extent to which they have critical information-sharing capabilities in place.
Capabilities include, for example, ways to determine that a user who is trying to access DHS information is authorized to access it and the ability to subsequently audit or track who has accessed this information. DHS officials noted that the Office of the CIO and board plan to track the progress that individual information-sharing programs and initiatives achieve in implementing these capabilities, as applicable, and develop a mechanism to provide DHS better visibility over the capabilities that programs have implemented and still need to implement. DHS officials stated that they plan to introduce this capability-tracking mechanism in early 2013. DHS’s planned capability-tracking mechanism may not include an important step to help DHS determine its progress toward its 2015 information-sharing vision. The Information Sharing Segment Architecture Transition Plan discusses major milestones and time frames for implementing the critical capabilities in order for DHS to achieve its information-sharing vision by 2015. However, this plan does not detail—and DHS officials said that they have not determined—the specific capabilities each particular program must implement for DHS to conclude that it has improved information sharing enough to achieve the 2015 information-sharing vision. For example, the transition plan notes that DHS is to have begun developing the framework for establishing how to authorize user access by the end of fiscal year 2012, but it does not include which programs this capability is relevant for, and how many of them must implement this capability for DHS to be able to conclude that it has made meaningful progress in that capability by 2015. DHS officials recognize the importance of measuring progress toward the 2015 vision, but the department’s efforts to define critical capabilities are new and it has not yet taken this step.
Including this step in the department’s efforts to develop its capability-tracking mechanism would help DHS better understand which programs to prioritize to improve information sharing. Our past work and the experience of leading organizations have demonstrated that measuring performance allows organizations to track progress they are making toward intended results—including goals, objectives, and targets they expect to achieve—and gives managers critical information on which to base decisions for improving their programs. The Information Sharing Environment Coordination Activity charter also notes that this group is to provide the board with the ability to prioritize and oversee steps DHS is taking to achieve its information-sharing vision. Determining the specific capabilities certain programs must implement in order for DHS to achieve its 2015 vision and subsequently tracking annual progress could help DHS prioritize programs and track and assess progress toward ensuring that the right information is getting to the right people at the right time to meet their homeland security responsibilities. Second, in addition to tracking the capabilities of its own programs, DHS, in conjunction with the Department of Justice, is collecting information on the extent to which fusion centers are putting in place certain capabilities that the two agencies and other federal interagency partners have determined are critical for ensuring these centers can effectively operate in a national information-sharing network. States and major urban areas originally created fusion centers to provide information about threats within the centers’ jurisdictions. The federal government, particularly through DHS, has been leveraging such centers to further disseminate federal information on threats and to collect information on threats and pass it on to federal agencies, among other things.
I&A collaborated with the fusion center directors and their interagency partners to design and implement the 2011 Fusion Center Assessment, which is to help DHS track the progress of fusion centers in achieving key capabilities. These include the capability to receive, analyze, and further disseminate information on terrorist threats and crimes that can be precursors to terrorism. DHS completed its initial assessment in October 2011 and issued a report on its results in June 2012. The assessment found that overall capability scores for the 72 fusion centers that participated ranged from 29 to 97 out of 100, with an average score of 77. The report stated that the national network is a long-term investment and made recommendations on how DHS and its federal interagency partners can help fusion centers fill gaps over the next 4 years. DHS officials said that they will look at trends in individual fusion center scores to identify what capability gaps exist across the National Network of Fusion Centers and work with centers to focus any federal resources they receive on filling these gaps. DHS plans to monitor the improvements that centers make over time in filling capability gaps as an indicator of the effectiveness of fusion centers. Third, DHS plans to assess the outcomes of executing information-sharing and access agreements, which are vehicles used by DHS to exchange, receive, and share information with external (non-DHS) parties. Specifically, DHS plans to assess the customer satisfaction of recipients of multiple data sets received through these agreements beginning in October 2012. DHS’s key initiatives and capabilities should help to increase the department’s ability to make components’ information available to important customers, and to disseminate components’ products and reports created for these customers.
However, determining whether the right people have the right information at the right time requires obtaining views from customers about the accuracy, usefulness, and timeliness of information provided and shared. DHS components are in the process of implementing customer feedback mechanisms that should help to provide customers’ perspectives of how well DHS is meeting its 2015 vision. DHS has taken steps to survey customers to measure how satisfied they are with the information and intelligence products that DHS components disseminate, such as homeland security assessments, which provide in-depth analysis based on detailed research, and homeland security notes, which provide information or analysis on a recent or current event or development of interest to DHS customers. These surveys include measures that help to gauge the usefulness of the information provided. DHS recognizes that there is a potential for bias in survey results, but DHS is taking steps to obtain feedback in additional ways, such as meeting with its customers to assess their needs, as a means to improve intelligence products. I&A began surveying its customers in this manner, and other components are following suit. Component surveys include a common question that asks customers to rate satisfaction on a five-point scale—very satisfied, somewhat satisfied, neither satisfied nor dissatisfied, somewhat dissatisfied, or very dissatisfied—and DHS customer satisfaction performance measures report the percentage of intelligence products rated somewhat satisfied or very satisfied. DHS plans to aggregate survey results on this question from across the DHS Intelligence Enterprise components, use the data as a gauge of how the information provided contributed to success in achieving goals for mission areas—such as preventing terrorist attacks—and publish the results in the department’s Annual Performance Report as performance measures, beginning in 2013.
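The percentage-based satisfaction measure just described can be sketched in a few lines; the scale categories come from the report, while the function and sample responses below are hypothetical.

```python
from collections import Counter

# Responses counted as "satisfied" under the measure described above.
SATISFIED = {"very satisfied", "somewhat satisfied"}

def satisfaction_rate(responses):
    """Percentage of responses rated somewhat or very satisfied, mirroring
    how the report describes the DHS customer satisfaction measure."""
    counts = Counter(responses)
    return 100.0 * sum(counts[r] for r in SATISFIED) / len(responses)

# Hypothetical sample of 100 survey responses on the five-point scale.
sample = (["very satisfied"] * 40
          + ["somewhat satisfied"] * 35
          + ["neither satisfied nor dissatisfied"] * 15
          + ["somewhat dissatisfied"] * 7
          + ["very dissatisfied"] * 3)
print(f"{satisfaction_rate(sample):.1f}% satisfied")  # 75.0% satisfied
```

Collapsing the top two scale points into a single percentage is what allows results from different components’ surveys to be aggregated into one departmentwide figure.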
For example, TSA disseminated about 11,000 incident reports that pertain to preventing terrorist attacks during the first two quarters of 2012 and received about 5,800 responses. Over the same time period, I&A distributed 41 reports pertaining to preventing terrorist attacks and received over 700 responses. Customers who responded to the surveys said that they were generally satisfied with the reports they reviewed during that time frame. I&A data for fiscal year 2011 also show that customers said they were generally satisfied with products disseminated that year. These customer feedback mechanisms should help to provide customers’ perspectives of how well DHS is meeting its 2015 vision. However, I&A recognizes that the survey results may not be representative of the entire population of customers that received those products because customers voluntarily choose whether or not to provide feedback. In internal documents and external reports on customer feedback, such as the I&A annual report to Congress, I&A cautions readers that survey results are subject to bias that prevents the organization from drawing conclusions about the entire I&A customer population. For example, a bias is created by the requirement that a customer read a product in order to take the survey—meaning that the feedback of those who read the product and chose to provide feedback may not be representative of those customers that decided not to read an I&A product. Given this potential for bias in I&A data, any performance measures drawn from that data will carry that bias, providing DHS, Congress, and taxpayers with a potentially incomplete account of progress made in improving information sharing. According to DHS officials, because of technological limitations in tracking the dissemination of products, I&A does not know the number of recipients or readers of each product, which prevents I&A from knowing the full impact of this bias.
I&A has taken a number of steps to obtain feedback in other ways and help ensure it provides customers with the right information at the right time. For example, according to I&A officials, I&A has initiated a core customer study designed to establish a common definition of core customers, allowing I&A to identify and directly survey representative samples of customers from across each segment on their satisfaction with I&A’s intelligence support. However, the study is in the beginning phases; thus I&A has not yet established a completion date and it is too early to evaluate the results. In addition, I&A has established a Customer Feedback Working Group to analyze feedback-related issues and devise ways to improve products. For example, on the basis of feedback that I&A products did not contain enough relevant local content, the group has begun a project to improve the regional content in intelligence products, according to I&A officials. Further, I&A conducts targeted surveys on high-interest topical issues to assess its performance on sharing terrorism-related intelligence and information. Our discussions with various DHS customers indicate varying levels of satisfaction with terrorism-related information from DHS and its components, including I&A and TSA. According to DHS officials, the department has prioritized its customers, and the department funds information-sharing initiatives according to these priorities. This, in turn, can affect how relevant some of the customers find DHS and its components’ information to their mission. For example, fusion centers are higher-priority customers than customers in the intelligence community, such as the FBI, according to the I&A Strategic Plan. As a result, DHS officials stated that the department focuses more of its funding and initiatives on fusion centers. We interviewed senior officials from 10 state and major urban area fusion centers, ICE, ODNI, and the FBI.
We supplemented our discussions with additional information, such as the results of a 2012 fusion center survey about counterterrorism intelligence and our prior survey of TSA customers. The results of our analysis are summarized below. Fusion centers: Directors and other senior officials in 8 of 10 fusion centers we spoke with generally found I&A information to be useful. For example, officials at 1 fusion center reported that I&A products keep officials up to date on national and global terrorism trends that may have an impact on their region. In addition, officials at another fusion center stated that reports, such as the Joint Intelligence Briefing from DHS and the FBI on the 10th anniversary of 9/11, and special assessments of security at sporting events, have helped the fusion center provide guidance to state and local law enforcement. Further, in response to an I&A report on radicalization of prison inmates, 1 fusion center’s detectives met with corrections department staff to enhance their awareness of prison radicalization and held trainings on suspicious activities and radicalization indicators. Moreover, officials at this center noted that the timely dissemination of reports has improved, that reports are more specific to regional needs than in the past, and that I&A has responded to fusion center feedback. However, officials at 2 other fusion centers we met with stated that I&A information was not always timely. These officials reported that I&A information is sometimes already available through media outlets or other information sources. According to one official, although this practice can be considered a method to verify recent news media information, the volume of information tends to flood the network and can lead to reduced attention being paid to I&A products. In addition, officials at 2 fusion centers we met with reported that I&A distributes too many reports that are not specific to their region.
Further, results from a 2012 Homeland Security Policy Institute survey that asked fusion center staff to rank their most important sources of information suggest that DHS may have opportunities to better meet customer needs. On the basis of responses from fusion center officials—who, according to the survey’s authors, come from traditional law enforcement backgrounds that may influence their rankings—DHS ranked sixth after sources such as law enforcement and Joint Terrorism Task Forces. Other sources, such as the National Counterterrorism Center and other fusion centers, ranked lower than DHS. TSA customers: We previously reported on the extent to which TSA customers are satisfied with the security-related information products they receive and found that they were generally satisfied. Specifically, TSA has developed a series of products to share security-related information with transportation stakeholders, such as annual modal threat assessments that provide an overview of threats to each transportation mode—including aviation, rail, and highway—and related infrastructure. Fifty-seven percent of the customers we surveyed (155 of 275 who answered this question) indicated that they were satisfied with the products they receive. ICE: ICE directors and analysts in the Homeland Security Intelligence Office did not comment on the information contained in I&A reports, but noted that they were generally dissatisfied with I&A reports primarily because they found it difficult to determine which reports are most relevant to their needs. For example, the officials stated that I&A is not proactive in informing ICE about the products it completes that ICE would find useful. ICE officials stated that connectivity and access to I&A products have improved since 2010, but the ease of finding these products and understanding what is relevant to ICE remains problematic.
ODNI: ODNI officials stated that they were generally satisfied with the department’s responsiveness to information needs and that collaboration with DHS has improved since 2010. For example, if circumstances necessitate ODNI obtaining passenger manifest data, DHS provides such information more quickly than in the past. In addition, ODNI has successfully used DHS data to counter potential terrorist threats. For example, by cross-checking refugee application data from DHS with other data, ODNI has facilitated numerous arrests and removed over 500 people who posed a potential threat from the refugee stream prior to their arrival in the United States. However, ODNI officials stated that some DHS intelligence reports are not timely enough for their needs. Further, DHS’s finished intelligence products are generally not as valuable to the intelligence community because they are written primarily for state and local customers. FBI: Two FBI headquarters divisions responsible for sharing terrorism-related information reported on their satisfaction with information from DHS. Specifically, officials from one of the two FBI divisions reported that, overall, the division was neither satisfied nor dissatisfied with I&A information, and officials from the other division reported their division was somewhat satisfied. These same officials also reported that they were very satisfied with the information received from CBP, ICE, and TSA. For example, the FBI officials reported that the bureau’s Counterterrorism Division and ICE have enhanced the consistency with which information is shared and have worked toward a transparent and coordinated effort for developing, sharing, and distributing terrorism-related information. The FBI reported that DHS intelligence products are generally not produced for the FBI’s use specifically, and that the FBI collaborates with DHS to develop reports on a variety of topics, such as potential terrorist attacks.
DHS also monitors the extent to which I&A finished intelligence products address issues that state, local, and tribal customers deemed most critical to their needs, which could increase customer satisfaction with products. Customers articulate their critical needs based on 10 threat-based categories, such as Terrorism and Illicit Drug Operations. I&A tags its intelligence products and information reports with relevant “standing information needs” prior to distribution, which enables I&A to monitor the extent to which it is distributing products and reports that match customers’ needs. The 2011 annual performance report shows that I&A determined that 85 percent of finished intelligence products were directly responsive to its state, local, and tribal customers’ information needs, which met the performance target for this measure. I&A data show that the department reached similar conclusions during the first two quarters of 2012. According to DHS officials, additional components are beginning to tag their information reports and intelligence products with relevant standing information needs, which will enable DHS to assess departmentwide contributions to addressing crucial customer needs. I&A also provides customers with information based on specific requests and collects data on the extent to which I&A is timely in its responses and customers are satisfied with those responses. Customer satisfaction is based on three factors: quality of communication, the accuracy of the information provided, and satisfaction with the process. Specifically, customers request certain information from I&A—such as background information for a person of interest—and I&A officials are to respond to that request by an agreed-upon time frame. The 2011 annual performance report shows that I&A answered 85 percent of requests within the time frame I&A and the customer agreed upon to the customer’s satisfaction.
Since I&A is currently updating this measure to include other DHS entities, 2012 is a baseline year that the department plans to use to evaluate the extent to which timeliness and satisfaction of information requests are improving over time. Therefore, this measure should help DHS determine to what extent customers are getting the right information at the right time. DHS has plans that could help it better assess the impact of the department’s information sharing on homeland security. After DHS releases its new Information Sharing and Safeguarding Strategy, the department plans to develop and implement a new DHS sharing and safeguarding performance management program that is to include the development of performance measures that determine the outcomes its information sharing is to achieve. Our work has shown that DHS is evolving from utilizing process measures that are relatively easy to implement—for example, counting the number of issued reports—to more meaningful measures that determine customer satisfaction with the usefulness of the information provided. Demonstrating results is a standard practice in performance measurement. DHS continues to recognize that it must develop measures that demonstrate the results of its efforts, and department officials noted that such measures will be a crucial part of the Information Sharing and Safeguarding Implementation Plan the department is to develop. Specifically, the department’s draft planning documents note that the board is to develop information-sharing outcome measures to determine whether federal and nonfederal customers receive DHS information that is timely, accurate, trusted, and useful; meets their needs; and contributes to securing the homeland. For example, DHS could enhance its customer satisfaction performance measures by asking customers what difference the product they reviewed made on their ability to ensure a safe and secure homeland. 
The board is also to develop measures that assess the impact of information sharing on preventing terrorism and enhancing security, as well as other missions. Further, the board is to develop measures that assess the degree of budget and outcome alignment, and calculate the cost of achieving information-sharing outcomes and target levels of performance. We will continue to monitor DHS’s efforts to assess the results and impact of its sharing efforts. Our work has shown that having the ability to monitor progress and demonstrate results helps to lower the risks posed from implementing programs critical to the nation, such as the sharing of information on terrorist threats. Executing its plans to develop better measures should help DHS assess the progress in sharing information and monitor the extent to which the department is achieving its 2015 vision to provide the right information to the right people at the right time. Ensuring that terrorism-related information is shared in an efficient manner with stakeholders across all levels of government, the private sector, and foreign countries is a challenging and critical task. DHS has demonstrated a strong commitment to advance information-sharing efforts; its key information-sharing initiatives have made progress, and most have met interim milestones. The department has also taken steps to track its information-sharing efforts and developed information-sharing performance measures that monitor the effectiveness of some information-sharing efforts. However, additional steps could help DHS sustain these efforts. 
For example, in its Roadmap Implementation Guide or other policies and procedures, documenting processes for identifying information-sharing gaps and the results; documenting and implementing a process for analyzing the root causes of those gaps; and establishing and documenting a process for potential future use for identifying, assessing, and mitigating the risk of removing an incomplete initiative from the list would provide DHS with an institutional record to better replicate, and therefore sustain, its information-sharing efforts. Moreover, defining the milestones that initiatives must achieve in order to be considered complete and determining what difference the initiatives are making in information sharing could help the board better track progress in implementing the initiatives, make any necessary course corrections, and make future investment decisions. Further, determining the specific capabilities certain programs must implement in order for DHS to achieve its 2015 vision and subsequently tracking annual progress toward achieving these capabilities could help DHS prioritize programs and investments, and track and assess progress toward meeting homeland security responsibilities. We recommend that the Secretary of Homeland Security take the following five actions. To address information-sharing gaps and risks, direct the Information Sharing and Safeguarding Governance Board to, in either its Roadmap Implementation Guide or other related policies and procedures, document its processes for identifying information-sharing gaps and the results; document and implement a process for analyzing the root causes of those gaps; and establish and document processes for identifying and assessing risks of removing initiatives from the list, as well as determining whether other initiatives or alternative solutions are needed to mitigate any significant risks related to the relevant information-sharing gap. 
To improve DHS’s ability to track and assess key information-sharing initiatives, direct the Information Sharing and Safeguarding Governance Board to incorporate into the board’s existing tracking process milestones with time frames that initiatives must achieve to be considered complete, where feasible, and information to show the impact initiatives are having on information sharing, and direct the Information Sharing and Safeguarding Governance Board and the Office of the CIO to include, in the mechanism the board is developing to track programs’ achievement of key capabilities, the specific capabilities certain programs must implement in order to achieve the department’s 2015 information-sharing vision. We provided a draft of this report to DHS, ODNI, and the FBI on August 14, 2012, for review and comment. On September 5, 2012, DHS provided written comments, which are reprinted in appendix II. In commenting on the report, DHS stated that it concurred with all five recommendations and identified actions taken or planned to implement them. DHS concurred with the first recommendation, to direct the Information Sharing and Safeguarding Governance Board to document its processes for identifying information-sharing gaps and the results. DHS stated that the department, through the board, has recently initiated an effort to draft a DHS-wide Information Sharing and Safeguarding Implementation Plan. The implementation plan is to ensure that DHS’s sharing and safeguarding activities align with the forthcoming Fiscal Year 2012–2017 DHS Information Sharing and Safeguarding Strategy. DHS stated that the templates that the department will use to develop the implementation plan will identify information-sharing and -safeguarding gaps and the anticipated results. 
DHS also plans to update its Roadmap Implementation Guide to provide the department with an institutional record to better replicate, and therefore sustain, ongoing and future implementation efforts to improve information sharing and safeguarding. DHS also concurred with the second recommendation, to direct the Information Sharing and Safeguarding Governance Board to document and implement a process for analyzing the root causes of those gaps. DHS stated that the templates that the department will use to develop the implementation plan will identify the specific root causes of information-sharing and -safeguarding gaps for the initiatives contained in the implementation plan. DHS also plans to update its Roadmap Implementation Guide to document the processes by which it identifies the root causes of the gaps. DHS stated that this effort will better ensure that the department invests in the correct information-sharing solutions and effectively reduces risks. DHS concurred with the third recommendation, to direct the Information Sharing and Safeguarding Governance Board to establish and document processes for identifying and assessing risks of removing initiatives from the list, as well as determining whether other initiatives or alternative solutions are needed to mitigate any significant risks related to the relevant information-sharing gap. DHS stated that it plans to establish and document such processes, and also plans to update its Roadmap Implementation Guide to document the processes by which it identifies and assesses risks. DHS stated that preliminary planning to address this recommendation has begun. DHS concurred with the fourth recommendation, to direct the Information Sharing and Safeguarding Governance Board to incorporate into the board’s existing tracking process milestones with time frames that initiatives must achieve to be considered complete, where feasible, and information to show the impact initiatives are having on information sharing. 
DHS stated that the board will incorporate the recommended changes into its tracking process, and that preliminary planning to address this recommendation has begun. DHS also concurred with the fifth recommendation, to direct the Information Sharing and Safeguarding Governance Board and the Office of the CIO to include, in the mechanism the board is developing to track programs’ achievement of key capabilities, the specific capabilities certain programs must implement in order to achieve the department’s 2015 information-sharing vision. DHS stated that the board and the Office of the CIO will include the recommended changes in the mechanism, and stated that preliminary planning to address this recommendation has begun. If fully implemented, DHS’s planned efforts will address the intent of the five recommendations. DHS and the FBI also provided us with technical comments, which we considered and incorporated in the report where appropriate. ODNI did not have comments on the draft report. We are sending copies of this report to the Secretary of Homeland Security, the Director of National Intelligence, the Attorney General, and appropriate congressional committees. This report is also available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-6510 or larencee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix III. Our reporting objectives were to review the extent to which the Department of Homeland Security (DHS) (1) has made progress since 2010 in achieving its information-sharing mission, and what related challenges exist, if any, and (2) tracks and assesses information-sharing improvements. 
To determine the extent to which DHS has made progress in achieving its information-sharing mission, we analyzed relevant strategic planning documents, such as DHS’s January 2011 Integrated Strategy for High Risk Management, the DHS Information Sharing Strategy, the 2007 National Strategy for Information Sharing, and the Office of Intelligence & Analysis (I&A) Strategic Plan 2011-2018. In addition, to determine the extent of DHS leadership’s demonstrated commitment to information sharing, we analyzed documents related to DHS’s governance structure for information sharing, including charters that are current as of September 2012 and meeting minutes for relevant governing bodies from January 2011 through April 2012. To determine the extent to which DHS has developed information-sharing plans and identified key efforts, we analyzed documents related to DHS’s plans and initiatives for sharing, such as DHS’s list of key information- sharing initiatives, and analyzed documents from one initiative—the Law Enforcement Information Sharing Initiative (LEISI)—which is led by DHS’s U.S. Immigration and Customs Enforcement (ICE). We selected this initiative as an example case study of DHS’s actions related to information-sharing initiatives because it is a priority initiative and an established program. To determine the extent to which DHS’s other key information-sharing initiatives have made progress, we analyzed DHS documents tracking those initiatives. To determine the extent to which DHS has the resources needed to achieve its information-sharing mission, we analyzed documents related to DHS’s budget, including the DHS fiscal year 2013 Budget in Brief, and the funding status of key information-sharing initiatives. To determine the extent to which DHS has the technology needed for information sharing, we analyzed documents related to DHS’s technology framework for information sharing, such as the Information Sharing Segment Architecture Transition Plan, among other things. 
In addition, we interviewed program officials within DHS’s I&A to obtain information on the department’s information-sharing mission, goals, programs, activities, and funding; the Segment Architecture; efforts to improve terrorism-related information sharing; and related challenges. We interviewed ICE officials about LEISI’s progress and their experiences working with I&A on improving DHS’s information sharing. To determine the progress DHS has made on the technology framework for information sharing and on the funding of information-sharing programs, we interviewed officials from DHS’s Office of the Chief Information Officer (CIO). We assessed DHS’s plans and efforts against Standards for Internal Control in the Federal Government and criteria that we use in assessing high-risk issues. We also reviewed DHS’s efforts related to its Segment Architecture against our prior report and federal guidance on defining architecture content. To determine the extent to which DHS tracks and assesses information- sharing improvements, we analyzed relevant strategic planning documents, such as the I&A Strategic Plan for fiscal years 2011-2018 and the February 2010 Quadrennial Homeland Security Review (QHSR). Furthermore, to determine how DHS tracks progress and results in its information-sharing initiatives, we analyzed documentation and examples of DHS’s tracking mechanisms for its information-sharing efforts. We analyzed documentation and data on DHS’s performance measures for fiscal years 2011 and 2012 to determine the extent to which DHS is monitoring the effectiveness of information sharing. We also used these DHS performance measurement data to determine if DHS could demonstrate progress in information sharing by analyzing data for customer feedback and customer information needs, among other areas. 
To assess the reliability of the data obtained from DHS, we analyzed performance measurement documentation and interviewed officials knowledgeable about the controls over the integrity of the data. On the basis of our assessments, we determined that the performance measurement data were sufficiently reliable for the purposes of this report. In addition, we interviewed program officials within I&A and from DHS’s Office of the CIO on I&A’s and DHS’s progress in sharing terrorism-related information, and on mechanisms they use to monitor effectiveness. To supplement the steps we took to assess how DHS tracks and assesses information-sharing improvements, we also obtained information from various customers of DHS’s information sharing on the usefulness of I&A and other DHS components’ products. Specifically, we obtained information from 10 of 77 fusion center customers, 1 of the 7 DHS operational components that participate in the DHS Intelligence Enterprise, and 2 of DHS’s 16 intelligence community customers. We interviewed or received written input from directors and other senior officials from 10 fusion centers—where states and major urban areas collaborate with federal agencies to improve information sharing—including the President of the National Fusion Center Association. The national network of fusion centers is the hub of much of the two-way intelligence and information flow between the federal government and state, local, tribal, and territorial partners, making fusion centers key customers of I&A’s intelligence reports. Because we selected a nonprobability sample of fusion centers to contact, the information we obtained from these locations may not be generalized to all fusion centers nationwide. 
However, because we selected these centers based on, among other things, geographic dispersion and variation in risk based on the Department of Justice’s (DOJ) 25 Cities Project, the information we gathered from these locations provided us with an understanding of similarities and differences in fusion centers’ satisfaction with DHS’s information sharing across different centers. We interviewed ICE officials from the Homeland Security Investigations and Intelligence office and officials from the Office of the Director of National Intelligence’s (ODNI) National Counterterrorism Center (NCTC). Further, we received written input from two headquarters divisions of the Federal Bureau of Investigation (FBI) that are responsible for sharing terrorism-related information. We selected ICE, ODNI, and the FBI because they are key customers of DHS’s intelligence products or partner with I&A to create these products. ICE is a DHS component that shares terrorism-related information and leads two of DHS’s key information-sharing initiatives. ODNI and the FBI are federal agencies that have key roles in analyzing terrorism threats to the United States and jointly issue products with DHS. The FBI also has the primary role in carrying out investigations within the United States of threats to national security. The views of ICE, ODNI, and the FBI are not generalizable to all of DHS’s federal customers, but they provided us with a general understanding of the perspectives about DHS’s information sharing held by different customer types nationwide. To supplement these views, we reviewed our prior work on DHS customer satisfaction and analyzed a report from a survey on information sharing conducted by the George Washington University Homeland Security Policy Institute and discussed the report with a representative who conducted the survey. 
In January and February 2012, the institute administered a 78-question self-completion survey to individuals working in 72 state and major urban area fusion centers, and 71 individuals voluntarily took the survey. On average, 48 to 49 individuals answered each question. Our analysis included reviewing the methodology and assumptions of the study, and discussing the study’s scope and conclusions with the George Washington University Homeland Security Policy Institute. As a result of our review and analysis, we determined that the study and its results were appropriate for use in our report. We assessed DHS’s mechanisms to track and assess information-sharing improvements against criteria for practices in program management. We conducted this performance audit from November 2011 through September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, David A. Powner (Director), Eric Erdman (Assistant Director), Anh Le (Assistant Director), Paul A. Hobart, Karl W. Seifert, Rebecca Kuhlmann Taylor, and Ashley D. Vaughan made significant contributions to the report. Also contributing to this report were Virginia A. Chanley, Tracy J. Harris, Eric D. Hauswirth, Kevin J. Heinz, Lisa Humphrey, Jeff R. Jensen, Justine C. Lazaro, Thomas Lombardi, Jan B. Montgomery, Jessica S. Orr, Anthony K. Pordes, and William M. Reinsberg.
Recent planned and attempted acts of terrorism on U.S. soil underscore the need to ensure that terrorism-related information is shared with stakeholders across all levels of government in an effective and timely manner. DHS, through its Office of Intelligence and Analysis, has responsibility for sharing this information and has established an information-sharing vision for 2015—which includes ensuring that the right information gets to the right people at the right time. GAO was asked to examine the extent to which DHS (1) has made progress in achieving its information-sharing mission, and (2) tracks and assesses information-sharing improvements. GAO analyzed relevant DHS documents, such as strategic planning documents and those related to DHS’s governance structure, among others, and interviewed DHS officials. The Department of Homeland Security (DHS) has made progress in achieving its information-sharing mission, but could take additional steps to improve its efforts. Specifically, DHS has demonstrated leadership commitment by establishing a governance board to serve as the decision-making body for DHS information-sharing issues. The board has enhanced collaboration among DHS components and identified a list of key information-sharing initiatives. The board has also developed and documented a process to prioritize some of the initiatives for additional oversight and support. However, because DHS has not revised its policies and guidance to include processes for identifying information-sharing gaps and the results; analyzing root causes of those gaps; and identifying, assessing, and mitigating risks of removing incomplete initiatives from its list, it does not have an institutional record that would help it replicate and sustain those information-sharing efforts. Overall, DHS’s key information-sharing initiatives have progressed, and most have met interim milestones. 
However, progress has slowed for half of the 18 key initiatives, in part because of funding constraints. For example, 5 of DHS’s top 8 priority information-sharing initiatives currently face funding shortfalls. The board has not been able to secure additional funds for these initiatives because they ultimately compete for funding within the budgets of individual components, but DHS officials noted that the board’s involvement has kept some initiatives from experiencing funding cuts. DHS is also developing plans that will be important in managing its information-sharing efforts, such as a revised strategy for information sharing and a related implementation plan. DHS has taken steps to track its information-sharing efforts, but has not yet fully assessed how they have improved sharing. Specifically, DHS is tracking the implementation progress of key information-sharing initiatives, but the department does not maintain completion dates and does not fully assess the impact initiatives are having on sharing. Determining and documenting initiative completion dates and how initiatives affect sharing, where feasible, would help the board better track progress in implementing the initiatives and make any necessary course corrections if completion dates are delayed. Further, DHS has begun to assess the extent to which its technology programs, systems, and initiatives—which include the key information-sharing initiatives—have implemented critical information-sharing capabilities, such as secure user access authorization. However, DHS has not yet determined the specific capabilities each particular program must implement for DHS to conclude that it has improved information sharing enough to achieve its information-sharing vision for 2015. Establishing the level of capabilities programs must implement could help DHS prioritize programs, and track and assess progress toward its vision. 
In addition, DHS is in the process of implementing customer feedback measures on the usefulness of information provided and has taken steps to assess customers’ information needs. DHS has not yet developed measures that determine the impact of its information-sharing efforts on homeland security, but plans to develop ways to assess information-sharing results toward achieving its 2015 vision. DHS’s time frames for completing this effort are to be included in forthcoming plans currently being developed. GAO recommends that DHS revise its policies and guidance to include processes for identifying information-sharing gaps, analyzing root causes of those gaps, and identifying, assessing, and mitigating risks of removing incomplete initiatives from its list; better track and assess the progress of key information-sharing initiatives; and establish the level of capabilities programs must implement to meet its vision for 2015. DHS agreed with these recommendations and identified actions taken or planned to implement them.
The Navy’s delinquency rate was slightly lower than the Army’s, which is the highest delinquency rate in the federal government. Cumulative Navy charge-offs since the inception of the Bank of America travel card program in November 1998 were nearly $16.6 million. As discussed in further detail in the following sections of this report, weaknesses in the Navy’s overall control environment and a lack of front-end controls over travel card issuance and use exacerbated the Navy’s delinquency problems. Without proper management control, demographics such as the age and pay rates of Navy personnel also contributed to delinquencies and charge-offs. These problems have led to contract modifications with Bank of America that resulted in the Navy, the federal government, and the taxpayers losing millions of dollars in rebates, higher fees, and substantial resources spent pursuing and collecting on past due accounts. DOD and the Navy have taken a number of positive actions to address the Navy’s high delinquency and charge-off rates, and results from the first half of fiscal year 2002 showed a significant drop in charged-off accounts. Most of this reduction could be attributed to a salary and military retirement offset program, which began in November 2001. DOD and the Navy also encouraged cardholders to voluntarily use the split disbursement payment process (split disbursements) to direct that a portion or all of their reimbursements be sent directly to the bank for payment of their travel card bills. The Navy also increased management attention and focus on the delinquency issue. However, except for split disbursements, the Navy’s actions primarily address the symptoms of delinquency and charge-offs after they had already occurred. Control weaknesses remain in the front-end management of the travel card program, such as issuing the cards and overseeing the proper use of the cards. 
Over the last 2 years, the Navy’s delinquency rate fluctuated from 10 to 18 percent and on average was 5.6 percentage points higher than other non-Army DOD components and 6 percentage points higher than non-DOD federal civilian agencies. As of March 31, 2002, over 8,000 Navy cardholders collectively had $6 million in delinquent debt. As discussed below, the nature of the Navy’s mission, which requires personnel in certain Navy commands to travel often for training and preparation for deployment, contributes, at least in part, to the Navy’s high delinquency rate. Figure 1 compares delinquency rates among the Navy, Army, other DOD, and the 23 largest civilian agencies. Since Bank of America took over the DOD travel card contract on November 30, 1998, the bank has charged off over 13,800 Navy travel card accounts with nearly $16.6 million of bad debt. Table 1 provides a comparison of cumulative charge-offs, recoveries, and delinquencies by military service as of March 31, 2002. Our analysis showed a correlation between certain demographic factors and high delinquency and charge-off rates. Available data showed that the travel cardholder’s rank or grade (and associated pay) is a strong predictor of delinquency problems. As shown in figure 2, the Navy’s delinquency and charge-off problems are primarily associated with low- and midlevel enlisted military personnel grades E-1 to E-6, with relatively low incomes and little experience in handling personal finances. Available data indicate that military personnel grades E-1 (seaman recruit in the Navy or private in the Marine Corps) to E-6 (petty officer first class in the Navy or staff sergeant in the Marine Corps) account for about 78 percent of all Navy military personnel. These enlisted military personnel have basic pay levels ranging from $12,000 to $27,000. These individuals were responsible for 40 percent of the total outstanding Navy travel card balances as of September 30, 2001. 
Figure 3 compares the delinquency rates by military rank and civilian personnel to the Navy’s average delinquency rate as of September 30, 2001. As shown, the delinquency rates were as high as 34 percent for E-1 to E-3 military personnel and 20 percent for E-4 to E-6 military personnel, compared to the Navy’s overall delinquency rate of 12 percent. These rates were markedly higher than the rates for officers, which ranged from a low of 1 percent for O-7 to O-10 (admirals in the Navy or generals in the Marine Corps) to a high of 8 percent for O-1 to O-3 (ensign to lieutenant in the Navy or second lieutenant to captain in the Marine Corps). These rates were also substantially higher than that of Navy civilians, which at 5 percent was comparable with the federal civilian agencies’ rate shown in figure 1. The delinquency rate of military personnel E-4 to E-6 in particular had an important negative impact on the Navy’s delinquency rate. Specifically, these are petty officers in the Navy and corporals to staff sergeants in the Marine Corps. Pay levels for these personnel, excluding supplements such as housing, ranged from approximately $18,000 to $27,000. These individuals also traveled often. As shown by Bank of America data, personnel E-4 to E-6 accounted for 36 percent of the total Navy outstanding balance, which was higher than the outstanding balance of all other military and civilian personnel. This combination of high outstanding balance and high delinquency rate largely explained the high Navy delinquency rate. As shown in figure 4, charged-off amounts for military personnel grades E-1 to E-6 during fiscal year 2001 totaled more than $3.6 million. This represented 72 percent of the almost $5 million in total Navy charge-offs during fiscal year 2001. According to Navy representatives, these individuals often had little experience handling personal resources. 
Although their basic pay rates are supplemented with housing and food allowances, the low salaries may not permit payment of excessive personal charges on travel cards. If these individuals get into financial difficulty, they have fewer resources at their disposal to pay their travel card balances in full every month. Also, if cardholders in these lower grade levels do not receive their travel card reimbursements promptly because of either delays in filing their vouchers or voucher processing, they may lack the financial resources to make timely payments on their travel card accounts. In addition, as discussed later in this report, the Navy did not exempt personnel with poor credit histories from required use of travel cards. Consequently, these low- and midlevel enlisted military personnel are often issued travel cards even though some may already be in serious financial trouble and, therefore, may not have been appropriate credit risks. Lack of adequate training and the failure to adequately monitor travel card use may also have exacerbated the delinquency rates for these individuals. Navy delinquency rates also varied widely across commands. Table 2 shows the outstanding balance and delinquency rates of major Navy commands as of March 31, 2002. As shown, the delinquency rates as of March 31, 2002, ranged from 22 percent for the Naval Reserve Force to as low as 2 percent for four commands, including the Naval Air Systems Command. Table 2 also shows that high credit card activity was not necessarily associated with high delinquency rates. In fact, some Navy commands with high credit card activity also had low delinquency rates. The six major commands with the highest delinquency rates—ranging from 22 to 12 percent—as of March 31, 2002, were the Naval Reserve Force, the U.S. Atlantic Fleet, the U.S. Pacific Fleet, U.S. Marine Corps Forces Pacific, U.S. Marine Corps Forces Atlantic, and Marine Forces Reserve. 
Navy officials expressed the belief that demographics and logistics were important contributing factors to these high delinquency rates. According to Navy officials, Atlantic and Pacific fleet personnel, as well as Marine Corps Forces Atlantic and Pacific, travel frequently for training and preparation for deployment. Because they are always on the move, these individuals might not be filing vouchers and making payments in a timely manner. In addition, fleet personnel often consist of low- and midlevel recruits, demographics which, as discussed previously, are a contributing factor to the high delinquency rate. Navy officials attributed the delinquency problems with the reserve forces to logistics of a different kind. Reserve forces are spread throughout the country and report to duty only once a month. Reservists typically fill out their vouchers when they return home and then mail them to the processing centers, sometimes weeks after the training. According to Navy officials, the high delinquency rates in the reserve forces could be attributed partly to the fact that some had not received travel reimbursement by the time their bills became delinquent. In contrast, some commands, such as Naval Sea Systems Command and Naval Air Systems Command, had large numbers of travel card accounts and high travel card activity, yet low delinquency rates. According to Navy officials, this is because personnel in these commands are typically civilians, are older and more mature, and therefore are better at managing their finances. These demographic factors, coupled with the fact that these sites typically have full-time APCs and a better control environment, may explain why their delinquency rates are lower than the Navy average, and sometimes even lower than the average rate for federal civilian agencies. The case study sites we audited followed the pattern described above. 
For example, at Camp Lejeune, a principal training location for Marine air and ground forces, over one-half of the cardholders are enlisted personnel. Representative of the Navy’s higher delinquency rate, Camp Lejeune’s quarterly rates over the 18 months ending March 31, 2002, averaged over 15 percent. As of March 31, 2002, the delinquency rate at this site was nearly 10 percent. In contrast, at Puget Sound Naval Shipyard, where the mission is to repair and modernize Navy ships, civilian personnel earning more than $38,000 a year made up 84 percent of total government travel cardholders and accounted for 86 percent of total fiscal year 2001 travel card transactions. This site’s delinquency rate had declined to below 5 percent as of March 31, 2002. High delinquencies and charge-offs have resulted in increased costs to the Navy. In fiscal year 2001, DOD entered into an agreement with Bank of America to adjust the terms of its travel card contract. DOD agreed to increased fees and a change in rebate calculation. These changes have cost the Navy an estimated $1.5 million in lost rebates on combined individually and centrally billed accounts in fiscal year 2001 alone and will cost, in addition, about $1.3 million in automated teller machine (ATM) fees annually. Other costs, such as the administrative burden of monitoring delinquent accounts, are harder to measure, but no less real. For example, employees with delinquent accounts must be identified, counseled, and disciplined, and their account activity must be closely monitored. In addition, employees with financial problems who have access to sensitive data may pose a security risk, as discussed later in this report. Unexpectedly high defaults by DOD’s travel cardholders, including the Navy’s, resulted in a 5-month legal dispute with Bank of America over the continuation of the travel card contract. 
In 1998, under the provisions of the General Services Administration’s (GSA) master contract with Bank of America, DOD entered into a tailored task order with Bank of America to provide travel card services for a period of 2 years, ending November 29, 2000. Under the terms of the task order, DOD had three 1-year options to unilaterally renew the contract. On September 29, 2000, prior to the expiration of the initial task order, DOD gave notice to Bank of America that it intended to exercise its option to extend the task order for an additional year. In November 2000, Bank of America contested the provisions of the DOD task order with the GSA contracting officer. Bank of America claimed that the task order was unprofitable because of required “contract and program management policies and procedures” associated with higher-than-anticipated credit losses: an estimated 43,000 DOD employees had defaulted on more than $59 million in debts. Consequently, in April 2001, the master contract and the related DOD-tailored task order for travel card services were renegotiated. Specifically, Bank of America was able to reduce its financial risk by instituting additional fees, such as higher cash advance and late payment fees; offsetting credit losses against rebates, as explained later; facilitating the collection of delinquent and charged-off amounts through salary and military retirement pay offset; and participating in split disbursements, in which the government sends part or all of the travel voucher reimbursements directly to Bank of America. One of the terms of the renegotiated task order between Bank of America and DOD was that, effective August 10, 2001, the travel card cash advance fee would be increased from 1.9 percent to 3 percent, with a minimum fee of $2. The Navy reimburses all cash advance fees related to authorized cash withdrawals.
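The two fee schedules just described can be compared with a short sketch. The advance amounts below are hypothetical, chosen only to show how the 3 percent rate and the $2 minimum raise the cost of each withdrawal.

```python
# Compare the cash advance fee before and after the August 10, 2001,
# contract change: 1.9 percent of the advance versus 3 percent with a
# $2 minimum. The advance amounts are hypothetical.

def old_fee(advance):
    """Fee under the original task order: 1.9 percent of the advance."""
    return round(advance * 0.019, 2)

def new_fee(advance):
    """Fee under the renegotiated task order: 3 percent, $2 minimum."""
    return round(max(advance * 0.03, 2.00), 2)

for amount in (60.00, 200.00, 500.00):  # hypothetical ATM withdrawals
    increase = round(new_fee(amount) - old_fee(amount), 2)
    print(f"${amount:>6.2f} advance: old ${old_fee(amount):.2f}, "
          f"new ${new_fee(amount):.2f} (+${increase:.2f})")
```

Note that the $2 minimum binds on small withdrawals: a $60 advance that previously carried a $1.14 fee now costs $2.00.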
We estimate that this contract modification will result in approximately $1.3 million of increased costs to the Navy each year. Our estimate was made by applying the new fee structure that went into effect in August 2001 to cash advances made throughout fiscal year 2001 to ascertain how much more Bank of America would have charged. Other fee increases agreed to in the renegotiation, such as the fee for expedited travel card issuance, will also result in additional costs to the Navy. The GSA master contract modification also changed the rebate calculation, making it imperative that the Navy improve its payment rates to receive the full benefits of the program. Under the GSA master contract, credit card companies are required to pay a quarterly rebate, also known as a refund, to agencies and GSA based on the amount charged to both individually billed and centrally billed cards. The rebate to the agency is reduced, or eliminated, if significant numbers of an agency’s individual cardholders do not pay their accounts on time. Specifically, credit losses or balances that reach 180 calendar days past due reduce the rebate amounts. Effective January 2001, the contract modification changed the way that rebates are calculated and how credit losses are handled. If the credit loss of an agency’s individually billed travel card accounts exceeds 30 basis points— or 30 one-hundredths of a percent (.003)—of net sales on the card, the agency is assessed a credit loss fee, or rebate offset, against the rebate associated with both individually billed and centrally billed travel card accounts. This credit loss fee, or rebate offset, which resulted solely from individually billed account losses, significantly affected the amount of rebates that the Navy received as a result of combined individually and centrally billed net sales in fiscal year 2001. In fiscal year 2000, the Navy received approximately $2.0 million in rebates from the travel card program. 
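The credit loss threshold described above can be sketched as follows. All dollar amounts are hypothetical, and the formula shown (reducing the rebate by losses in excess of the 30-basis-point allowance) is a simplifying assumption; the actual contract terms may compute the offset differently.

```python
# Sketch of the rebate offset: if credit losses on individually billed
# accounts exceed 30 basis points (0.003) of net sales, the rebate is
# reduced. Reducing it by the excess over the allowance is an
# assumption; the contract formula may differ. Figures are hypothetical.

LOSS_ALLOWANCE_RATE = 0.003  # 30 one-hundredths of a percent

def rebate_after_offset(earned_rebate, net_sales, credit_losses):
    """Reduce the earned rebate by credit losses above the allowance."""
    allowance = net_sales * LOSS_ALLOWANCE_RATE
    offset = max(credit_losses - allowance, 0.0)
    return round(max(earned_rebate - offset, 0.0), 2)

# Hypothetical quarter: $100 million in net sales, $500,000 earned rebate.
print(rebate_after_offset(500_000, 100_000_000, 250_000))  # losses within allowance
print(rebate_after_offset(500_000, 100_000_000, 600_000))  # losses exceed allowance
```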
In contrast, in fiscal year 2001, the Navy collected only about $800,000 of the $2.3 million in rebates that we estimated it would have received, based on fiscal year 2001 net sales, had individually billed account payments been timely. This is due to a contract modification in January 2001, which changed the way rebates were calculated. In fact, during the first quarter of fiscal year 2001, the Navy collected almost $470,000 in total rebates from Bank of America. However, rebates for the last three quarters affected by the contract change had dwindled to $351,000. The Navy has taken a number of positive actions to address its high delinquency and charge-off rates, and results from the first half of fiscal year 2002 showed a significant drop in charged-off accounts. Most of this reduction may be attributed to a salary and military retirement payment offset program—similar to garnishment—started in November 2001. Other Navy actions included increasing the use of split disbursements, in which Navy disburses a portion of a travel reimbursement directly to the bank (instead of sending the entire amount of the reimbursement to the cardholder), and increased management attention and focus on delinquency. Except for split disbursements, the actions primarily addressed the symptoms, or back-end result, of delinquency and charge-offs after they have already occurred. As noted in the remaining sections of this report, the Navy has significant control weaknesses, particularly with respect to the front-end management of the travel card program, such as issuing the cards and overseeing their proper use, which it has not yet effectively addressed. As shown in figure 5, the amount of charge-offs has decreased substantially at the same time that recoveries have increased. At the start of fiscal year 2001, the charge-off balance greatly exceeded the recovery amount.
Starting in the third quarter of fiscal year 2001, the amount charged off started to decline and by the quarter ended December 31, 2001, the amount charged off was about the same as the recovery amount. By March 31, 2002, recoveries for the first time exceeded the charged-off amount. Starting in fiscal year 2002, DOD began to offset the retirement benefits of military retirees and the salaries of certain civilian and military employees against the delinquent and charged-off balances on travel card accounts. The DOD salary offset program implements a provision of the Travel and Transportation Reform Act of 1998 (TTRA) that allows any federal agency, upon written request from the travel card contractor, to collect by deduction from the amount of pay owed to an employee (or military member) any amount of funds the employee or military member owes on his or her travel card as a result of delinquencies not disputed by the employee. The salary and military retirement offset program was implemented DOD-wide. DOD’s offset program came into being as part of the task order modification. From April to August 2001, DOD and Bank of America worked together to establish program protocols. Starting in August 2001, Bank of America sent demand letters to cardholders whose accounts were more than 90 days delinquent. The Defense Finance and Accounting Service (DFAS) processed the initial offsets of delinquent accounts in October 2001 in the various DOD pay systems. The first deductions were made from the November pay period and paid to Bank of America starting in December 2001. Bank of America can also use the offset program to recover amounts that were previously charged off. January 2002 was the first month in which Bank of America requested offsets for accounts that had already been charged off. The offset program works as follows. 
When an account is 90 days delinquent, Bank of America is to send a demand letter to the individual cardholder requesting payment in full within 30 days. The demand letter specifies that salary offsets will be initiated if payment is not made in full within 30 days. The cardholder may negotiate an installment agreement or dispute the charges with the bank. The cardholder has a right to review all records such as invoices and to request a hearing if the bank’s disposition of the dispute is not satisfactory. After the 30 days have elapsed, if payment is not made and the cardholder does not dispute the debt, the bank includes the account in the list of accounts requested for offset. Individuals in the following categories may not be accepted for offset:
- Civilian employees in bargaining units that have not agreed to the salary offset program. According to a DFAS official, as of July 2002, 1,002 of 1,227 DOD bargaining units have agreed to participate in the program.
- Individuals with debts to the federal government or other garnishments already being offset at 15 percent of disposable pay, who are considered to be in protected status and are not eligible for the offset program.
- Individuals who cannot be located in the various payroll and military retirement (i.e., active, reserve, retired military, or civilian) systems.
Civilian retirees were not subject to offset during the period covered by our audit. The authorizing statutes for both the Civil Service Retirement System and the Federal Employees Retirement System specify that retirement benefits may be offset only to the extent expressly authorized by federal statutes. TTRA, Section 2, provides authority to offset salaries of “employees” of agencies but does not provide such authority for civilian employee retiree annuitants.
However, Public Law 107-314 authorizes the Secretary of Defense to offset delinquent travel card debt against the retirement benefits of DOD civilian retirees. Once an individual is accepted for offset, the related debt is established in the appropriate pay system and DFAS can deduct up to 15 percent of disposable pay. Disposable pay is defined in GSA’s Federal Travel Regulation as an employee’s compensation remaining after the deduction from an employee’s earnings of any amounts required by law to be withheld (e.g., tax withholdings and garnishments). The amounts collected are paid to the bank monthly for military personnel and retirees and biweekly for civilian personnel. According to DFAS, from October 2001 through July 2002, Bank of America referred 53,462 DOD-wide cases with debt of approximately $77.5 million to DOD for offset. DOD accepted and started offset for 74 percent of the cases and 69 percent of the debt amounts referred. The number and debt amount of Navy-specific cases forwarded by Bank of America were not available. From November 2001 through July 2002, DFAS collected approximately $5.2 million from active and retired Navy military personnel through the offset program. Although DFAS was unable to break down the amount of civilian offset by military service, the amount collected from all DOD employees was $1.6 million during the same period. The salary and retirement offset program is expected to continue to reduce the amount of accounts that need to be charged off while at the same time increasing the amount of recoveries. DOD has recently encouraged cardholders to make greater use of split disbursements, a payment method by which cardholders elect to have all or part of their reimbursement sent directly to Bank of America. A standard practice in many private sector companies, split disbursements have the potential to significantly reduce delinquencies.
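The deduction rule described above, up to 15 percent of disposable pay per pay period, can be sketched as follows; the pay and debt figures are hypothetical.

```python
# Sketch of a single salary offset deduction: up to 15 percent of
# disposable pay (pay remaining after legally required withholdings),
# but never more than the debt still owed. Figures are hypothetical.

OFFSET_CAP = 0.15  # maximum share of disposable pay per pay period

def offset_deduction(gross_pay, required_withholdings, remaining_debt):
    """Return the amount deducted this pay period toward the card debt."""
    disposable_pay = gross_pay - required_withholdings
    return round(min(disposable_pay * OFFSET_CAP, remaining_debt), 2)

# Hypothetical biweekly civilian paycheck with a $5,000 delinquent balance.
print(offset_deduction(2_000.00, 500.00, 5_000.00))  # 15 percent of $1,500
print(offset_deduction(2_000.00, 500.00, 100.00))    # capped by the remaining debt
```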
However, during the period covered by our audit, no legislative authority existed requiring the use of split disbursements by Navy employees. This practice was voluntary, resulting in a low participation rate. As shown by Bank of America data, only 14 percent of fiscal year 2001 travel card payments were made using this method. Although payments made through split disbursements have increased during the first three quarters of fiscal year 2002, they made up only 25 percent of all travel card payments. Our report on the Army travel card program included a matter for congressional consideration that would authorize the Secretary of Defense to require that employees’ travel allowances be used to pay the travel card issuers directly for charges incurred using the travel card. We believe that this action would help to reduce DOD’s travel card delinquency and charge-off rates. Public Law 107-314 authorized the Secretary of Defense to require split disbursement for all DOD travel cardholders. The Navy has also initiated actions to improve the management of travel card usage. The Navy’s three-pronged approach to address travel card issues is as follows: (1) providing clear procedural guidance to APCs and travelers, available on the Internet, (2) providing regular training to APCs, and (3) enforcing proper use and oversight of the travel card through data mining to identify problem areas and abuses. Noting that the delinquency rates for many Navy commands still exceeded the Navy’s established goal of no more than 4 percent, the Assistant Secretary of the Navy, Financial Management and Comptroller, in April 2002 issued a memorandum on travel card control procedures and policies.
This memorandum addressed a number of travel card issues, including (1) requiring that the travel card be deactivated when employees are separated from the service, (2) changing the definition of infrequent travel to traveling four times or less a year, (3) lowering the delinquency goal to 4 percent, (4) deactivating all cards whenever the cardholders are not scheduled for official travel, and (5) requiring spot checks for travel card abuse. The Assistant Secretary also required all units with delinquency rates higher than 4 percent to take immediate actions to lower the delinquency rates and to report on these results within 30 days of receiving the memorandum. Further, the DOD Under Secretary of Defense (Comptroller) created a DOD-wide Charge Card Task Force in March 2002 to address management issues related to DOD’s purchase and travel card programs. The task force issued its final report on June 27, 2002. We have reviewed the report and believe that many of the actions proposed by the task force will improve the controls over the travel card program. Important task force recommendations include canceling inactive accounts and expanding the salary offset program. However, actions to implement additional front-end or preventive controls, such as strengthening the critical role of the APCs and denying cards to individuals with prior credit problems, were not addressed in the report. We believe that strong preventive controls will be critical if DOD is to effectively address the high delinquency rates and charge-offs, as well as the potentially fraudulent and abusive activity discussed in this report. Our review identified numerous instances of potentially fraudulent and abusive activity associated with the Navy’s travel card program during fiscal year 2001 and the first 6 months of fiscal year 2002. 
For purposes of this report, cases where cardholders wrote three or more NSF checks or wrote checks on closed accounts to pay their Bank of America bill were characterized as potentially fraudulent. We considered abusive travel card activity to include (1) personal use of the cards—any use other than for official government travel—regardless of whether the cardholders paid the bills and (2) cases in which cardholders were reimbursed for official travel and then did not pay Bank of America, thus benefiting personally. In addition, some of the travel card activity that we categorized as abusive may be fraudulent if it can be established that the cardholders violated any element of federal or state criminal codes. Failure to implement controls to reasonably prevent such transactions can increase the Navy’s vulnerability to additional delinquencies and charge-offs. Our review identified numerous examples of potentially fraudulent activity where the cardholders wrote checks against closed checking accounts or repeatedly wrote NSF, or “bounced,” checks as payment for their travel card accounts. Knowingly writing checks against closed accounts or writing three or more NSF checks may be bank fraud under 18 U.S.C. 1344. Further, it is a violation of the Uniform Code of Military Justice (UCMJ) article 123a when a soldier makes, draws, or utters (verbally authorizes) a check, draft, or order without sufficient funds and does so with intent to defraud. During fiscal year 2001 and the first 6 months of fiscal year 2002, in total over 5,100 Navy cardholders wrote NSF checks, or made NSF payments by phone, as payment to Bank of America for their travel card bills. Of these, over 250 might have committed bank fraud by writing three or more NSF checks to Bank of America during either fiscal year period.
Table 3 shows the 10 cases we selected for review where the cardholders wrote three or more NSF checks to Bank of America, and their accounts were charged off or placed in salary offset or another fixed pay agreement due in part to repeated use of NSF checks. We have referred the cases in which potential bank fraud has occurred to the Navy Criminal Investigation Service for further review. The 10 cardholders in table 3 wrote a total of 107 checks that were returned by Bank of America because they were NSF, drawn on closed accounts, and/or had payments stopped for other reasons. These checks totaled over $211,000. Eight of the 10 cardholders had significant credit problems prior to card issuance, such as bankruptcies, charged-off credit card accounts, accounts in collection, and serious delinquencies. Two of the cardholders did not have credit problems prior to card issuance; however, one of these two experienced serious financial problems after issuance of the Bank of America travel card. The following provides illustrative detailed information on two of these cases. Cardholder #1 was a petty officer second class with the U.S. Pacific Fleet in Honolulu. The cardholder wrote 12 NSF checks totaling more than $61,000 for payment on his Bank of America travel card account. These checks were written partly to cover charges incurred while on official travel, but records showed that the cardholder made many more charges at convenience stores, restaurants, gas stations, and travel agencies in the vicinity of his hometown. An examination of the cardholder’s credit history also revealed that, prior to receiving his government travel card in May 2000, the cardholder had multiple charge-offs, in addition to filing personal and business bankruptcies. Despite his financial history, the cardholder was issued a standard card, instead of a restricted card with a lower credit limit.
From March 2001 through December 2001, the cardholder wrote about one NSF check a month, with three of these NSF checks, totaling more than $12,500, written in the month of December 2001 alone. Financial industry regulations require that an account be credited immediately upon receipt of a check. Consequently, when Bank of America posted the NSF checks, the account appeared to have been paid, which provided credit to the cardholder to make additional purchases. Thus, by writing NSF checks, and submitting NSF payments over the phone, which Bank of America had to credit to his travel card account, the petty officer was able to, in effect, increase his credit limit to more than $20,000—a practice known as “boosting.” He used each of these successive increases in his effective credit limit to charge additional items on his travel card. Thus, despite the repeated NSF checks written throughout 2001, the individual was able to continue making charges through December 2001. The cardholder’s APC did not know of the NSF check problems until Bank of America notified him of the fact. Because the cardholder was considered a good sailor, he was given administrative counseling for potential fraud and abuses related to his travel card. The terms of the administrative counseling specified that the cardholder would face an administrative discharge in case of continued abuse of the credit card or any other misconduct. Cardholder #5 was a petty officer (E-5) assigned to the Naval Reserve Forces in San Jose, California. Prior to receiving the Bank of America travel card in June 2000, the individual had a number of unpaid accounts with other creditors. The individual was given a restricted card, which should have been issued in “inactive” status and only activated when needed for travel. However, records showed that the cardholder was able to make about 130 separate purchases and ATM transactions in the vicinity of his hometown while not on official travel. 
These transactions totaled more than $5,000. In addition, from September 2000 through December 2001, the cardholder wrote eight NSF checks and one stop payment check totaling $20,052 to Bank of America. During fiscal year 2001, not a single valid payment was made to Bank of America for this account. The cardholder had an unpaid balance of $4,589 at the time his account was charged off in July 2002. The cardholder also had three other unrelated charge-offs to accounts other than the government travel card in July 2002. We found no documentation that disciplinary actions had been taken against the cardholder. The APC assigned to the cardholder told us that he had received little training for his APC responsibility, which is a collateral duty. He recalled advising the cardholder once to pay off his travel card balance. Although a Bank of America official informed us that access to NSF check information had been available to APCs since 2000, the APC said he was not aware of the NSF checks written by the cardholder. The APC also informed us that he was not aware that the cardholder’s account was charged off until he was notified by Bank of America. Despite having his Bank of America account charged off and other financial problems, the cardholder was recently promoted from petty officer second class (E-5) to petty officer first class (E-6). His account had been referred to salary offset. We found instances of abusive travel card activity by Navy cardholders that covered charges for a wide variety of personal goods and services, including prostitution, jewelry, gentlemen’s clubs, gambling, cruises, and tickets to sporting and other events. Further, we found abusive card activities where (1) cardholders who were reimbursed for official travel did not pay Bank of America and (2) cardholders used the card for personal charges and failed to pay Bank of America. 
We found that the government cards were used for numerous abusive transactions that were clearly not for the purpose of government travel. As discussed further in appendix II, we used data mining tools to identify transactions we believed to be potentially fraudulent or abusive based upon the nature, amount, merchant, and other identifying characteristics of the transactions. Through this procedure, we identified thousands of suspect transactions. Government travel cards were used for purchases in categories as diverse as legalized prostitution services, jewelry, gentlemen’s clubs, gambling, cruises, and tickets to sporting and other events. In addition, we found evidence that cardholders circumvented prescribed ATM procedures by obtaining cash at adult entertainment establishments. Table 4 illustrates a few of the types of abusive transactions and the amounts charged to the government travel card in fiscal year 2001 and the first 6 months of fiscal year 2002 that were not for valid government travel. The number of instances and amount shown include cases in which the cardholders paid the bills and where they did not pay the bills. We found that Navy cardholders used their government travel cards to purchase prostitution services. We arrived at this information by first identifying that two institutions frequented by Navy cardholders were legalized brothels in Nevada. Based on a price list provided by one of the brothels, we eliminated transactions that were most likely for bar charges and determined that 50 cardholders used their government travel card to purchase over $13,000 in prostitution services. These charges were processed by the brothels’ merchant bank, and authorized by Bank of America, in part because a control afforded by the merchant category code (MCC), which identifies the nature of the transactions and is used by DOD and other agencies to block improper purchases, was circumvented by the establishments. 
In these cases, the transactions were coded to appear as restaurant and dining or bar charges. For example, the merchant James Fine Dining, which actually operates as a brothel known as Salt Wells Villa, characterizes its services as restaurant charges, which are allowable and not blocked by the MCC control. According to one assistant manager at the establishment, this is done to protect the confidentiality of its customers. Additionally, the account balances for 11 of the 50 cardholders purchasing services from these establishments were later charged off or put into salary offset. For example, one sailor, an E-2 seaman apprentice, charged over $2,200 at this brothel during a 30-day period. The sailor separated from the Navy, and his account balance of more than $3,600 was eventually charged off. We also found instances of abusive travel card activity where Navy cardholders used their cards at establishments such as gentlemen’s clubs, which provide adult entertainment. Further, these clubs were used to convert the travel card to cash by supplying cardholders with actual cash or “club cash” for a 10 percent fee. For example, we found that an E-5 second class petty officer circumvented ATM cash withdrawal limits by charging, in a single transaction, $2,420 to the government travel card and receiving $2,200 in cash. Subsequently, the club received payment from Bank of America for a $2,420 restaurant charge. Another cardholder, an E-7 chief petty officer, obtained more than $7,000 in cash from these establishments. For fiscal year 2001 and through March 2002, 137 Navy cardholders made charges totaling almost $29,000 at these establishments. These transactions represented abusive travel card use that was clearly unrelated to official government travel. The standard government travel card used by most Navy personnel is clearly marked “For Official Government Travel Only” on the face of the card.
Additionally, upon receipt of their travel cards, all Navy cardholders are required to sign a statement of understanding that the card is to be used only for authorized official government travel expenses. However, based on our statistical sampling results at three Navy locations, we estimated that the share of fiscal year 2001 transactions unrelated to official travel, and therefore abusive, ranged from 7 percent at one site to 27 percent at another. Personal use of the card increases the risk of charge-offs related to abusive purchases, which are costly to the government and the taxpayer. Of the 50 cardholders who purchased prostitution services described above, the accounts of 11 were later charged off or put into salary offset. As we discussed earlier in the report, charged-off and delinquent accounts resulted in contract modifications and other monitoring efforts, which have cost the Navy millions of dollars. Our work at three case study sites and our Navy-wide data mining identified numerous examples of abusive travel card use where cardholders failed to pay their travel card bills. This abusive activity included (1) authorized transactions incurred in conjunction with approved travel orders where the cardholders received reimbursement but did not pay the bills or (2) transactions incurred by cardholders that were not associated with approved travel orders. These accounts were subsequently charged off or placed in salary offset or other fixed pay agreement. In many cases, APCs, commanders, and supervisors did not effectively monitor travel card usage or take documented disciplinary actions against cardholders. Table 5 provides specific examples of cardholders who failed to pay their travel card bills. Eight of the 10 cardholders included in table 5 had significant credit problems prior to card issuance, such as charged-off credit card accounts, mortgage foreclosures, bankruptcies, serious delinquencies, unpaid accounts, and referrals to collection agencies.
One cardholder had similar problems subsequent to issuance of the Bank of America travel card. The following provides illustrative detailed information on abusive activities for three of these cases. Cardholder #1 was a sergeant (E-5) with the U.S. Marine Corps Reserve assigned at Camp Lejeune. Despite a history of credit problems, which included several charged-off and delinquent commercial credit accounts, Bank of America issued the cardholder a standard card, with a credit limit of $10,000, in March 2000. The cardholder was deployed to Europe in August 2000 and his credit limit was increased to $20,000. Within a month of his deployment, the cardholder had charged $10,700 to the card, including $8,500 in ATM withdrawals. Although the cardholder received reimbursements for his travel expenses, he failed to settle his account in full. In December 2000, the cardholder informed the APC that his account was 30 days past due and promised to pay the full outstanding balance. He again failed to do so and his account balance of $11,467 went delinquent in January 2001. The APC did not deactivate the travel card account but put the cardholder in “mission critical” status as his tour in Europe was coming to a close. The cardholder’s credit limit was then raised to $25,000 to enable the cardholder to return to the United States. Consequently, when the account was closed on February 8, 2001, the outstanding balance had increased to $19,971. The APC admitted to us that he failed to carefully monitor this account. No disciplinary action was taken against the cardholder, who had returned to civilian life; however, judicial action against the cardholder is pending. We have referred this matter to DOD’s Office of Inspector General for appropriate action. In addition, our review indicated that the cardholder might have filed a fraudulent travel voucher in January 2001. 
This travel voucher claimed reimbursement for expenses in Germany over the holiday period from late December 2000 to early January 2001, allegedly for official purposes. However, Bank of America data showed that the government travel card belonging to this cardholder was used to make transactions in the vicinity of the traveler’s hometown during this holiday period. It appeared that the cardholder might have returned to the United States for the holiday, yet continued to claim expenses as if he was still in Germany, a potentially fraudulent act. Cardholder #3 was a petty officer third class (E-4) assigned to the LeMoore Naval Air Station in California. Our review indicated that the cardholder had numerous unpaid cable, medical, and communication accounts and serious delinquency of more than $5,000 on his personal credit card account prior to receiving the travel card. The unit to which the cardholder was assigned had a policy of activating the government travel card only when a cardholder travels. However, from February through April 2001, while not on travel, the cardholder purchased over $6,250 worth of electronic and computer equipment from Best Buy and various Web sites using the government travel card. The cardholder did not pay his balance and thus came to the attention of the APC when his name appeared in the delinquency report. Upon determining that the cardholder was able to use the card when not on travel, the APC contacted Bank of America, which was unable to tell the APC who had activated the account. The cardholder’s balance of more than $8,000 was charged off, and he was granted an administrative separation in lieu of a court-martial for offenses unrelated to the travel card misuse, including absence without leave, making false statements, and stealing government property of less than $100. Cardholder #4 was a commander (O-5) with the Naval Reserves assigned to the Naval and Marine Corps Reserve Center in Washington, D.C. 
Our review showed that Bank of America issued the cardholder a standard card in May 2000, although the cardholder’s credit history indicated serious financial problems before and at the time of card issuance. For example, in October 1998, the cardholder filed for Chapter 7 bankruptcy with only $37,169 in assets against $542,063 in liabilities. Further, in January 2000, right before the Bank of America card was issued, an account with a balance of more than $30,000 was charged off. This Navy commander continued, after the issuance of the government travel card, a pattern of delinquencies on numerous accounts, and in one instance had merchandise repossessed for nonpayment. During fiscal year 2001 and the first 3 months of fiscal year 2002, the cardholder used the government travel card to make numerous personal transactions. Transactions included more than $1,400 to D.B. Entertainment, which owns Baby Dolls Saloon, a gentlemen’s club in Dallas, and more than $700 to Wearever cookware and Partylite Gifts, a manufacturer of candles and candle accessories. A delinquency letter was sent to the cardholder on August 9, 2002, when the account was 120 days past due; however, no documentation existed to indicate that any action was taken prior to this date. Although the cardholder had been placed in salary offset, no other disciplinary action had been taken against the cardholder. As discussed above, some individuals who used the card for improper purposes paid their travel card bills when they became due. We considered these occurrences to be abusive travel card activity because these cardholders benefited by, in effect, getting interest-free loans. Personal use of the card increases the risk of charge-offs, which are costly to the government and the taxpayer. In addition, the high rate of personal use is indicative of the weak internal control environment and the failure of APCs to monitor credit card activities, as discussed later in this report. 
Table 6 provides examples of the types of abusive charges we found during our review. As shown in table 6, cardholders used their travel cards for a wide variety of personal goods or services. Some transactions were similar to the services procured in table 4. The cards were also used to purchase home electronics, women’s lingerie, tax services, and in one instance, to make bogus charges to the cardholder’s own business. In this instance, an E-5 second class petty officer reservist, whose civilian job is with the U.S. Postal Service, admitted making phony charges of over $7,200 to operate his own limousine service. In these transactions, the reservist used the travel card to pay for bogus services from his own limousine company during the first few days of the card statement cycle. By the second day after the charges were posted, Bank of America would have deposited funds—available for the business’ immediate use—into the limousine business’ bank account. Then, just before the travel card bill became due, the limousine business credited the charge back to the reservist’s government travel card and repaid the funds to Bank of America. This series of transactions had no impact on the travel card balance, yet allowed the business to have an interest-free loan for a period. This pattern was continued over several account cycles. Navy officials were unaware of these transactions until we brought them to their attention and are currently considering what, if any, action should be taken against the cardholder. It is critical that cardholders who misuse their travel cards are identified and held accountable for their actions. 
The DOD Financial Management Regulation (FMR) states that “commanders or supervisors shall not tolerate misuse of the DOD travel cards and cardholders who do misuse their cards shall be subject to appropriate disciplinary action.” However, DOD and Navy policies and procedures do not define appropriate disciplinary action to help ensure that consistent punitive actions are taken against cardholders who abuse their travel cards. Lacking such guidance, disciplinary actions are left solely to the discretion of commanders and supervisors. As a result, we did not find documentation indicating that commanders and supervisors took any disciplinary actions against almost two-thirds of the individuals we reviewed who abused or misused their cards during fiscal year 2001 and the first 6 months of fiscal year 2002. Failure to identify and discipline abusive cardholders will likely result in the Navy continuing to experience the types of potentially fraudulent and abusive activity identified in our work. For many cardholders we inquired about, the misuse or abuse of the travel card led Navy officials to counsel cardholders on proper use of the card and the cardholders’ responsibility for timely payment of travel card bills. We found only a few cases where the Navy court-martialed or issued administrative warnings to individuals solely because of card misuse. More often than not, severe disciplinary actions were taken in response to travel card abuse in conjunction with other more serious offenses—such as failing to obey orders or unauthorized absences. In these instances, documented disciplinary actions included dismissal from the Navy. At the sites we audited, the Navy could not provide documentation of disciplinary actions taken against cardholders in 37 of the 57 NSF check cases and charged-off or salary offset accounts we reviewed. For example, cardholder #9 in table 3, whose account was charged off for more than $4,900, did not receive any disciplinary action.
Cardholder #5 in table 3 was promoted after his unpaid account balance of almost $4,600 was charged off. Also, we found little evidence that cardholders faced adverse consequences for personal use of the card as long as they paid their travel card bills. Of the 10 cases detailed in table 6, only 1 had evidence of disciplinary action. We saw few indications that supervisors were aware that these abusive transactions occurred. To the extent we found that APCs or supervisors were aware of such travel card abuse, we saw little evidence of disciplinary actions. Further, we found that some individuals who abused their travel card privileges held high-level positions, where they may have been responsible for taking appropriate disciplinary action in response to travel card abuse by personnel within their commands. In instances where these individuals abused the card, they rarely received disciplinary action. For example, a commander became severely delinquent in January 2002 after making more than $2,000 in purchases of inappropriate items such as cookware and adult entertainment. However, there was no indication that this officer’s superior was informed of his delinquency or misuse of the travel card until the account was at least 120 days past due. Consequently, although the cardholder’s account was placed in salary offset, the cardholder was not disciplined. We have reported similar problems with the Army travel card program and in our testimony on the Navy travel card program. As a result, the fiscal year 2003 Department of Defense Appropriations Act, Public Law 107-248, contains provisions that address this problem. Specifically, the Act requires the Secretary of Defense to establish guidelines and procedures for disciplinary actions to be taken against cardholders for improper, fraudulent, or abusive use of the government travel card. 
We found that many cardholders who had abused the travel card or been involved in potentially fraudulent activities continued to have active security clearances. Both DOD and Navy rules provide that an individual’s finances are one of the factors to be considered in determining whether an individual should be entrusted with a security clearance. The U.S. Department of the Navy Central Adjudication Facility (commonly referred to as DON CAF) is responsible for issuing and updating security clearances for Navy personnel. Secret clearances are updated every 10 years and top-secret clearances are updated every 5 years. During the interim periods, Navy instructions require commanders of personnel with clearances, such as secret or top secret, to submit to DON CAF any evidence of financial irresponsibility on the part of an individual that would affect his or her clearance. Such evidence would include information on financial impropriety, such as excessive indebtedness. DON CAF is to evaluate this information and determine whether to revoke or downgrade the clearance. We found that commanders responsible for referring evidence of financial irresponsibility to DON CAF were sometimes not aware of their subordinates’ financial problems. Consequently, Navy security officials might not be in possession of all information necessary to assess an individual’s security clearance. Our audit found that 27 of 57 travel cardholders we examined whose accounts were charged off or placed in salary offset as of March 2002 still had active secret or top-secret security clearances in August 2002. These financially troubled individuals may present security risks to the Navy. We provided the information we collected on individuals with charged-off accounts to DON CAF for its consideration in determining whether to revoke, change, or renew the individuals’ security clearances. Further guidance for this procedure is also contained in the fiscal year 2003 Defense Appropriations Act.
In addition to requiring the Secretary of Defense to establish guidance and procedures for disciplinary actions, the act states that such actions may include (1) review of the security clearance of the cardholders in cases of misuse of the government travel card and (2) modification or revocation of the security clearance in light of such review. A weak overall control environment and ineffective internal controls over the travel card program contributed to the potentially fraudulent and abusive travel card activity and the Navy’s high rates of delinquency and charge-offs. The foundation of all other controls, a strong control environment provides discipline and structure as well as the climate that positively influences the quality of internal controls. Although we observed improvements in the first half of fiscal year 2002, we identified several factors that contributed to a weak overall control environment for fiscal year 2001, including, as discussed previously, few documented disciplinary actions taken against cardholders who abused their travel cards and a lack of management attention and focus on establishing and maintaining the organizational structure and human capital needed to support an effective Navy travel card management program. We found that this overall weak control environment contributed to design flaws and weaknesses in six management control areas needed for an effective travel card program. Specifically, we identified weaknesses in the Navy travel program controls related to (1) travel card issuance, (2) cardholders’ training, (3) APCs’ capacity to carry out assigned duties, (4) procedures for limiting card activation to meet travel needs, (5) procedures for terminating accounts when cardholders leave military service, and (6) access controls over Bank of America’s travel card database. 
All six of these areas related to two key overall management weaknesses: (1) lack of clear, sufficiently detailed Navy policies and procedures and (2) limited travel card audit and program oversight. First, during fiscal year 2001, the sites we audited used DOD’s travel management regulations (DOD FMR, Vol. 9, Ch. 3) as the primary source of policy guidance for management of Navy’s travel card program. However, in many areas, the existing guidance was not sufficiently detailed to provide clear, consistent travel management procedures to be followed across all Navy units. Second, as recognized in the DOD Inspector General’s March 2002 summary report on the DOD travel card program, “[B]ecause of its dollar magnitude and mandated use, the DOD travel card program requires continued management emphasis, oversight, and improvement by the DOD. Independent internal audits should continue to be an integral component of management controls.” However, the DOD Inspector General report noted that no internal review reports were issued from fiscal year 1999 through fiscal year 2001 concerning the Navy’s travel card program. According to the NAS, no internal review report related to Navy’s travel card had been issued since then. The Navy’s ability to prevent potentially fraudulent and abusive transactions that can eventually lead to additional delinquencies and charge-offs is significantly weakened if individuals with histories of financial irresponsibility are permitted to receive travel cards. Similar to what we found at the Army, the Navy’s practice is to facilitate the issuance of travel cards—with few credit restrictions—to all applicants regardless of whether they have histories of credit problems. Although the DOD FMR provides that all DOD personnel are to use the travel card to pay for official business travel, the policy also provides that exemptions may be granted under a number of circumstances, including for personnel who are denied travel cards for financial irresponsibility.
However, DOD’s policy is not clear as to what level of financial irresponsibility by a travel card applicant would constitute a basis for such an exemption. We found no evidence that the Navy exempted any individuals or groups from required acceptance and use of travel cards, even those with histories of severe credit problems. The DOD FMR provides that credit checks be performed on all travel card applicants, unless an applicant declines the conduct of a credit check. In July 1999, Bank of America began conducting credit checks on DOD travel card applicants and used the resulting information as a basis for determining the type of account—restricted or standard—it would provide to new DOD travel applicants. While, as mentioned above, DOD FMR would allow the Navy to exempt individuals with financial irresponsibility from the use of the government travel card, in practice any applicant who does not authorize a credit check, has no credit history, or has a history of credit problems, is issued a restricted travel card with a $2,500 credit limit. All other applicants are issued standard travel cards with a $10,000 credit limit. In January 2002, the Navy further reduced the credit limit on a restricted travel card to $2,000 and the limit on a standard card to $5,000. However, DOD and Navy policy also permit APCs to raise the credit and ATM limits of all cards after they have been issued to meet travel and mission requirements. As discussed previously, many of the Navy travel cardholders that we audited who wrote numerous NSF checks, were severely delinquent, or had their accounts charged off had histories of delinquencies and charge-offs relating to other credit cards, accounts in collection, and numerous bankruptcies. Our analysis of credit application scoring models and credit risk scores used by major credit bureaus confirmed that applicants with low credit scores due to histories of late payments are poor credit risks. 
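The tiering rule described above—an applicant who declines a credit check, has no credit history, or has a history of credit problems receives a restricted card, while all other applicants receive a standard card—can be sketched as a simple decision function. This is an illustrative reconstruction only, not Bank of America's actual underwriting logic; the function and parameter names are hypothetical, and the credit limits shown are the post-January 2002 Navy limits cited in the text.

```python
# Hypothetical sketch of the card-tiering rule described in the report.
# Function and parameter names are invented; limits are the post-January
# 2002 Navy limits ($2,000 restricted, $5,000 standard) cited in the text.

RESTRICTED_LIMIT = 2_000
STANDARD_LIMIT = 5_000

def card_tier(authorized_credit_check: bool,
              has_credit_history: bool,
              has_credit_problems: bool) -> tuple[str, int]:
    """Return (card type, credit limit) for a travel card applicant."""
    if (not authorized_credit_check
            or not has_credit_history
            or has_credit_problems):
        return ("restricted", RESTRICTED_LIMIT)
    return ("standard", STANDARD_LIMIT)

# An applicant with a history of charge-offs gets a restricted card:
print(card_tier(True, True, True))   # ('restricted', 2000)
# A clean credit history yields a standard card:
print(card_tier(True, True, False))  # ('standard', 5000)
```

Note that under this rule no applicant is ever denied a card outright—which is precisely the practice the report criticizes, since APCs could later raise either tier's limit to meet mission requirements.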
Credit bureau officials told us that if their credit rating guidelines for decisions on commercial credit card application approvals were used to make decisions on travel card applicants, a significant number of low- and midlevel enlisted Navy cardholders would not even qualify for the restricted limit cards. A credit history showing accounts with collection agency action or charge-offs poses an even higher credit risk. Any of these problems can be a reason for denying credit in the private sector. However, individuals with no credit history, or little credit history, are generally issued cards with lower credit limits, as reflected by current DOD policy. By issuing cards to all applicants regardless of their past credit history, the Navy has exposed the government to increased losses from the fees and lost rebates associated with these individuals. Credit industry research and the results of our work demonstrate that individuals with previous late payments are much more likely to have payment problems in the future. Further, as a result of our audit findings and an amendment proposed by Senators Byrd and Grassley, the fiscal year 2003 Department of Defense Appropriations Act requires that the Secretary of Defense evaluate whether an individual is creditworthy before authorizing the issuance of any government travel charge card. An individual found not to be creditworthy may not be issued a government travel charge card. Implementing procedures to assess the creditworthiness of an individual prior to issuing a credit card, and denying a credit card to anyone found not creditworthy as required by the fiscal year 2003 Department of Defense Appropriations Act, should improve delinquency rates and reduce fraud and abuse. The DOD FMR requires that APCs provide training to cardholders on the proper use of the government travel card prior to card issuance.
The FMR also requires DOD components to ensure that current cardholders are informed of policy and procedure changes to the travel card program. However, we found that the three case study sites we visited did not provide consistent and periodic training to cardholders. The APCs we interviewed generally informed us that they viewed the signature on a travel card application as indication that the cardholder had read, and understood, the regulations governing the use of the government travel card. In addition, the APCs stated that the cardholders also received a statement of understanding when they were issued a travel card. Only one APC informed us that she discussed travel card restrictions with employees at the time they submitted the travel card applications, and that the fleet support group periodically provided individuals with briefings on proper travel card use. The failure to provide standardized, consistent, and periodic training on travel card procedures might have contributed, in part, to high incidences of misuse because individuals did not fully understand the rules governing travel card usage. DOD policy provides that APCs are the primary focal points for day-to-day management of the travel card program. However, at units with low- and midlevel military personnel who are often deployed, APC duties are generally “other duties as assigned.” This exacerbated the existing disposition toward delinquency among these low- and midlevel personnel, as discussed above. Further, the sheer number of responsibilities assigned to APCs, coupled with issues concerning APC span of control and training, greatly affected the APCs’ abilities to carry out their critical duties effectively. Consequently, we found that APCs were generally ineffective in performing their key travel card program management oversight duties. However, the proactive measures by a full-time APC contributed to a low delinquency rate at one installation we audited.
As prescribed by the DOD FMR, APCs “are responsible for the day-to-day operations of the DOD Travel Card Program.” DOD FMR volume 9, chapter 3, provides that APCs are responsible for a variety of key duties, including establishing and canceling cardholder accounts, tracking cardholder transfers and terminations, monitoring and taking appropriate actions with respect to account delinquencies, interacting with the bank, and fielding questions about the program from both cardholders and supervisors. APCs are also required to notify commanders and supervisors of all travel card misuse so they can take appropriate actions. We found distinct differences in how APC duties were assigned at the three case study sites. At Camp Lejeune, a military installation, the six APCs that we interviewed were primarily responsible for other duties. For example, some were assigned duties as personnel officers in units providing specialized training for infantry and engineering. These individuals’ APC responsibilities were “other duty as assigned,” and most spent less than 20 percent of their time carrying out these duties. Additionally, one APC indicated to us that it was a challenge to keep up with his APC responsibilities, mainly because he was expected first and foremost to perform his primary duties. In contrast, at Patuxent River and Puget Sound Naval Shipyard, two installations with mainly civilian cardholders, the APC role is a full-time post and therefore the APCs spend all of their time carrying out APC responsibilities. Most of the APCs at the case study sites focused monitoring efforts on delinquencies, and rarely conducted detailed review of charge card transactions. All APCs have access to account transaction activity reports and declination reports, which detail activities that were rejected by Bank of America and thus would be useful in identifying individuals who might have attempted to misuse the card. 
One APC interviewed told us that detailed transaction reviews were too time-consuming. If she reviewed account activities at all, it was only in conjunction with, and after, identifying delinquent accounts. Failure to systematically and regularly review transaction activities meant that most APCs were not able to promptly detect, and therefore take further actions to prevent, abusive travel card activity. This is illustrated by the fact that personal use of the card was estimated to be 27 percent at one site we audited. In contrast, the APC at another case study site informed us that she reviewed delinquency reports several times a month to identify and promptly notify supervisors about the status of delinquent accounts. She also told us that she monitored transactions in the Bank of America database monthly for improper and abusive uses of the card, and sent out notices to the cardholders and their supervisors if such transactions were identified. We believe these proactive actions contributed to that site’s low delinquency rate and fewer incidences of personal use. Failure to review cardholder transactions and take action to address inappropriate card usage can lead to delinquencies and account charge-offs. For example, one APC was not aware that a cardholder within her sphere of responsibility made 17 personal use transactions to Herbalife International, as shown in table 6, from January 2001 to May 2001 until the cardholder became delinquent in August 2001. By that time, the cardholder had charged over $6,750 to the vitamin company. In another example, an APC did not detect that a cardholder had misused his card to purchase over $6,250 in electronic and computer equipment until he appeared in the delinquency report. His account balance of more than $8,000 was subsequently charged off.
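The kind of proactive transaction review that most APCs did not perform—checking whether charges were posted while a cardholder was not on authorized travel—amounts to a simple comparison of transaction dates against travel authorization periods. The following is a minimal hypothetical sketch; the record layouts and field names are invented for illustration and do not reflect the actual EAGLS data format or any tool available to APCs.

```python
from datetime import date

# Hypothetical sketch of a transaction review an APC could perform:
# flag any charge posted while the cardholder had no authorized travel.
# Record layouts and field names are invented for illustration.

def flag_non_travel_charges(transactions, travel_periods):
    """Return transactions posted outside any authorized travel period.

    transactions   -- list of (cardholder_id, post_date, amount, merchant)
    travel_periods -- dict: cardholder_id -> list of (start_date, end_date)
    """
    flagged = []
    for cardholder, post_date, amount, merchant in transactions:
        periods = travel_periods.get(cardholder, [])
        on_travel = any(start <= post_date <= end for start, end in periods)
        if not on_travel:
            flagged.append((cardholder, post_date, amount, merchant))
    return flagged

# Example: a retail charge in March, with travel authorized only in June.
txns = [
    ("E4-001", date(2001, 3, 5), 899.99, "Best Buy"),
    ("E4-001", date(2001, 6, 12), 120.00, "Holiday Inn"),
]
travel = {"E4-001": [(date(2001, 6, 10), date(2001, 6, 15))]}
print(flag_non_travel_charges(txns, travel))
# -> [('E4-001', datetime.date(2001, 3, 5), 899.99, 'Best Buy')]
```

Run monthly against the bank's transaction activity reports, a check of this kind would have surfaced the Herbalife and Best Buy purchases described above months before the accounts appeared in delinquency reports.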
The DOD’s FMR guidance does not address the appropriate span of control for an APC—the number of cardholders that an APC should be responsible for managing and overseeing. A reasonable span of control is critical for effective management and proper travel program oversight. In addition, because APC duties often are assigned as collateral duties, the span of control should be commensurate with the time available to carry out APC responsibilities effectively. As shown in table 7, at the three sites we audited, the average ratio of cardholders to APCs ranged from 214 to 1 to 5,984 to 1. While table 7 shows the average span of control, the actual span of control for the APCs at the three sites we audited ranged from a low of 25 to about 6,000 cardholders. Bank of America guidance provides that an optimal span of control is 100 cardholders per APC. While we did not evaluate the guidance provided by Bank of America, we believe that one APC cannot effectively carry out all management and oversight responsibilities discussed previously if he or she, even working full time, has responsibility for hundreds or thousands of cardholders. In fact, the supervisor of one APC with about 6,000 cardholders informed us that the APC simply did not have time to systematically perform other types of monitoring beyond identifying and notifying supervisors and commanders of delinquent accounts. Decisions on the optimal span of control must take into account not only the number of accounts for which the APC has direct responsibility, but also the number of accounts for which a lower-level APC has direct responsibility. For example, an APC at Patuxent River had direct responsibility for 2,244 cardholders and oversight responsibility for another 5,560 cardholders. Our internal control standards state that management’s commitment to competence and good human capital practices are key factors in establishing and maintaining a strong internal control environment. 
Specifically, our standards provide that management identify appropriate knowledge and skills required for various jobs and provide needed training. They also state that establishing appropriate human capital practices, including hiring, training, evaluating, counseling, and disciplining personnel, is another critical environmental factor. DOD policy provides that travel card training materials are to be distributed throughout the department and that APCs are to be informed of policy and procedural changes relating to the travel card program. However, neither DOD nor Navy procedures detail requirements for the extent, timing, and documentation of travel program training for APCs. APCs are not required to receive training on the duties of the position or on how to use available Web-based tools and reports from Bank of America before they assume their APC duties. We found that APC training had not been considered a priority. Of the nine APCs we spoke to, only one had received official APC training. The other eight told us they relied heavily upon on-the-job learning, trial and error, or other program coordinators for advice on how to carry out their duties when they assumed their APC responsibilities. One full-time APC had been in her position for more than 2 years but had not attended formal Bank of America training, even though training seminars are offered annually. Some APCs we interviewed indicated that they were not proficient in using the tools available through the Bank of America Web-based system containing travel card transaction data—Electronic Account Government Ledger System (EAGLS)—to monitor cardholders’ travel activities. The lack of emphasis on training could negatively affect APCs’ ability to monitor delinquencies and promptly detect and prevent potentially fraudulent and abusive activities. According to data provided by Bank of America, as of May 2002 about 23 percent of the Navy’s APCs had never logged on to EAGLS. 
Allowing Navy travel cardholders to maintain accounts in an active status when not needed for government travel unnecessarily increases the risk of misuse—through cardholders either mistakenly or intentionally using the card for personal purposes. DOD’s FMR provides that restricted cards are issued to cardholders in an “inactive” status and initially activated only when the cardholders have authorized government travel needs. Standard cards, however, are “active” when they are issued to cardholders. DOD policy guidance does not address deactivating restricted and standard travel cards when not needed for official purposes. Lacking overall policy and procedural guidance in this area, we found instances in which individual commands or sites established their own practices for deactivating restricted cards when individuals were not on travel. In fact, APCs at the case study sites we audited informed us that they generally deactivated restricted cards when individuals were not on travel. In contrast, during fiscal year 2001 and most of fiscal year 2002, the standard cards were issued in an “active” status, and remained active when individuals were not traveling. Leaving cards in active status increased the risk of misuse, as supported by our statistical sampling work, which showed that most improper use occurred while the individuals were not on official travel. Recognizing this internal control weakness, the Navy issued a directive in April 2002 requiring that the U.S. Marine Corps, which continued to have a high delinquency rate, deactivate cards for all personnel not scheduled for official travel. The directive also required that, once activated for official travel, the cards be deactivated immediately upon the conclusion of official travel. We found that the Navy lacks clear, sufficiently detailed procedures that would ensure that travel cards are deactivated or terminated when cardholders leave the Navy. 
DOD’s FMR provides that APCs are responsible for terminating travel cards when cardholders retire, separate, or are dismissed from DOD. Operating procedures established by individual Navy commands and installations to notify APCs in the case of retirement or separation of employees were neither consistent nor effective. Controls were also ineffective in ensuring that prompt actions were taken to deactivate or terminate cards even when the APC is notified. Consequently, some cardholders’ accounts remained active, creating an opportunity for abuse. In general, the three case study sites had standard exit procedures, which required a signature from the APC, or the unit where the APC worked, before individuals could complete outprocessing. The purpose of such procedures is to ensure that travel cards are promptly deactivated or closed. However, our work found that these procedures were not always followed. For example, at one case study site, the APC is a checkpoint on the checkout list, and cardholders are expected to obtain the APC’s signature before completing outprocessing. However, there was no control at the unit where the cardholder turned in the checkout list to ensure that the list was complete. Consequently, the APC informed us that exit procedures were not effective. We also found that the Navy did not have procedures requiring periodic comparisons between active travel card accounts and their employees to ensure that accounts of separated or retired employees were closed. All three case study sites we visited maintained databases of their active employees. However, the APCs at these locations generally did not compare these records against the list of active travel card accounts to identify accounts that should have been deactivated and/or closed but remained open. Periodic reconciliation of the two lists would have enabled these units to identify separated cardholders with active accounts so that appropriate, timely actions could be taken. 
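The periodic reconciliation described above—comparing active travel card accounts against the roster of current employees to find accounts that should have been closed—is, in essence, a set-difference operation. The sketch below is purely illustrative; the data shapes and identifiers are hypothetical, not drawn from any actual Navy or Bank of America system.

```python
# Illustrative sketch of the periodic reconciliation the report recommends:
# any cardholder ID with an active account but no entry on the current
# employee roster should be reviewed for deactivation or closure.
# Identifiers and data shapes are hypothetical.

def accounts_to_close(active_card_holders, current_employees):
    """Return cardholder IDs with open accounts but no current employment."""
    return sorted(set(active_card_holders) - set(current_employees))

active_accounts = {"A100", "A101", "A102", "A103"}
roster = {"A100", "A102"}          # A101 separated, A103 retired
print(accounts_to_close(active_accounts, roster))  # ['A101', 'A103']
```

Because both lists already existed at all three case study sites, a reconciliation of this kind would have required no new data collection, only a routine comparison of records the units already maintained.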
Ineffective exit procedures and the inability to effectively identify and terminate travel cards of individuals no longer in the Navy led to numerous travel card abuses and charge-offs. These separated Navy employees benefited by using the travel cards to purchase a variety of goods and services, possibly at discounted government rates. Some did not pay their monthly bills, thereby essentially obtaining the personal items for no cost. The following cases are examples of what can happen when travel cards are not effectively deactivated or closed upon separation. In one Navy unit, a cardholder died in October 1999. However, ineffective controls over the notification process resulted in the APC not being aware that this had occurred. Therefore, the APC did not take actions to close this individual’s government travel card account. Consequently, in October 2000, when the old card was about to expire, Bank of America mailed a new card to the address of record. When the card was returned with a forwarding address, the bank remailed the card and the personal identification number, which is used to activate the card, to the new address without performing other verification procedures. The card was activated in mid-December 2000, and within a month, 81 fraudulent transactions for hotel, food, and gas totaling about $3,600 were charged to the card. In January 2001, in the course of her monthly travel card monitoring, the APC noticed suspicious charges in the vicinity of the cardholder’s previous post-of-duty. The APC took immediate action to deactivate the card, thus preventing additional charges from occurring. Upon finally learning of the cardholder’s death from the cardholder’s unit, the APC immediately reported the case to a Bank of America fraud investigator. Investigations indicated that a family member of the cardholder might have made these charges. No payment was ever made on this account, and the entire amount was subsequently charged off. 
We referred this case to the U.S. Secret Service Credit Card Task Force for further investigation and potential prosecution. A chief warrant officer (W-3) at Naval Air Force, U.S. Atlantic Fleet, repeatedly used his travel card after his retirement on December 1, 2000. The cardholder currently works for a private company. Since his retirement, he has used the government travel card to make charges totaling more than $41,000 for hotels, car rentals, restaurants, and airline tickets for personal and business purposes. In a number of instances, the cardholder was able to obtain the government rate—which can be substantially lower than the commercial rate—for lodging in San Diego, Philadelphia, and Cincinnati. Because the Navy does not routinely monitor cardholder transactions for abusive activity and because this particular account was always paid in full, the abusive activity was not detected. Bank of America data showed that the cardholder’s account was still open in early September 2002 and thus available for further charges. In another instance, a mechanic trainee at the Puget Sound Naval Shipyard was convicted of a felony for illegal possession of a firearm in October 2000 and placed on indefinite suspension by his employer in November 2000. However, neither the security office, which took action against the employee, nor the office where the individual worked notified the APC to cancel or deactivate the cardholder’s government travel card account. Following his suspension, the cardholder used the government travel card to make numerous cash withdrawals and purchases totaling almost $4,700. The APC was not aware of these abusive charges until the monthly delinquency review identified the account as delinquent. The account balance of $1,600 was subsequently charged off in January 2002.
Although security officers at the Puget Sound Naval Shipyard referred the case to DON CAF in October 2000, our work indicated that the employee, who was still in suspended status as of August 2002, continued to maintain a secret clearance, despite the travel card charge-off and felony conviction. We also found instances where the APC did not promptly deactivate or terminate the travel card upon being notified of an employee’s death, retirement, dismissal, or separation from the Navy. At one case study site, we audited 10 accounts of employees who died, retired, separated, or were otherwise removed since November 2000. Of the 10, 4 cardholders obtained signatures from the travel branch, where the APC works, upon leaving the unit. However, 3 of these 4 accounts were not deactivated or terminated in a timely manner. In one case, a cardholder continued to use the card to make numerous charges totaling $4,900 for more than 9 months following separation. The cardholder failed to make timely payments on her account and became delinquent in September 2001. The APC did not report this cardholder’s delinquent status to the appropriate unit supervisor until the account was 90 days past due. The supervisor stated that she took actions to have the card deactivated immediately upon learning of the delinquency. The individual’s account was charged off on November 27, 2001, and as of July 13, 2002, had a remaining balance of $4,800. Available data also indicated that another cardholder who retired in August 2001 continued to maintain possession of an active card until September 2002, although he did not use the card. Failure to promptly deactivate or terminate travel card accounts of individuals no longer with the Navy increases the risk of delinquencies and charge-offs and can lead to increased cost to the Navy. Thousands of Bank of America and DOD employees have access to Bank of America’s travel card transaction data system, known as EAGLS. 
Computer system access controls are intended to permit authorized users to access the system to perform their assigned duties and preclude unauthorized persons from gaining access to sensitive information. Access to EAGLS is intended to be limited to authorized users to meet their information needs and organizational responsibilities. Authorized EAGLS user access levels include customer-level access (APCs requiring access to travel data for cardholders under their purview and individual travelers requiring access to their own travel transaction histories) and bank employee-level access (Bank of America employees may be granted one of five different levels of access depending on their assigned duties). The highest level of Bank of America employee access to EAGLS is the “super user” level. According to Bank of America security officials, this level of access—which provides users the ability to add, delete, or modify anything in the system, including creating accounts and editing transaction data—should be granted to as few individuals as possible. We found that 1,127 Bank of America employees had some level of access to the EAGLS system, including 285 with super-user-level access. After we brought this matter to the attention of Bank of America security officials, they reviewed employee access and deactivated access for 655 employees whom they determined should not have had any level of access. This included 22 employees with super-user access. Further, Bank of America has since initiated periodic reviews to ensure that it maintains appropriate levels of employee access. In addition, DOD employees retained APC access to EAGLS after relinquishing their APC duties or after they may have been transferred or terminated. In a 2000 survey of 4,952 individuals with APC-level access to EAGLS, DOD found that approximately 10 percent could not be located and may have been transferred or terminated or no longer had APC responsibilities.
Because of concern that many of these accounts should be deactivated, Bank of America has begun a review to determine if DOD employees with APC-level access no longer have APC responsibilities or have left the service. With the weak control environment and related program control weaknesses we identified, it is not surprising that we found weaknesses in the implementation of selected key control activities we statistically tested at the three Navy sites we audited. We selected four key control activities to test related to basic travel transaction and voucher processing. As discussed previously, for the three locations, we estimate that the percentage of transactions during fiscal year 2001 that represented personal use varied from 7 percent at one location to 27 percent at another location. We tested the implementation of the following internal control activities for a statistically valid sample of travel card transactions. Was there a travel order associated with the transaction that was approved prior to the start of travel? Was there a travel voucher associated with the transaction that was properly reviewed to ensure that payment was accurate and properly supported? Did the traveler submit a travel voucher associated with the transaction to the installation travel office for processing within 5 days of completion of travel, as required by government travel regulations? In accordance with TTRA and the DOD FMR, was the traveler paid within 30 days of the date a properly approved travel voucher associated with the transaction was submitted for payment? Table 8 shows the results of our statistical samples. Appendix II includes the specific criteria we used to assess the effectiveness of these controls. Timely approval of the travel orders is the first step in ensuring that travel is authorized. At one of the three installations we audited, Patuxent River, the controls over travel order approval were partially effective. 
In contrast, Puget Sound Naval Shipyard, which had a failure rate of 49 percent, had ineffective controls over travel order approval. At Puget Sound, the high failure rate was primarily attributable to travel personnel not consistently ensuring that all copies of the six-part travel orders used in fiscal year 2001 were signed before sending the originals to the travelers. Consequently, this unit was unable to provide us with signed copies of the travel orders. Puget Sound Naval Shipyard management informed us that it had recently instituted procedures requiring that signed copies of travel orders be maintained by the unit. Once travel is completed, the traveler is required to submit a voucher for all reimbursable expenses and must include receipts for certain claimed amounts. The voucher review process is intended to ensure that only authorized, properly supported travel charges are reimbursed and that the amounts are accurately calculated. All three case study sites we audited had ineffective controls to ensure that travel vouchers were properly reviewed for accuracy and support. The estimated failure rates during fiscal year 2001 for the three case study sites ranged from 33 to 40 percent. Travel voucher errors resulted in both over- and underpayments to the traveler and created an additional administrative burden for the Navy, which had to take additional actions to recover overpayments or make payments on previous underpayments. Travel voucher errors were attributed to ineffective review and audit of travel vouchers. At one case study site we audited, a communication breakdown had occurred between the office that helped travelers prepare vouchers and the office that entered voucher data into the automated system used to record relevant travel voucher data so that payment could be made by DFAS. At this site, each office thought that the other was responsible for reviewing the vouchers for accuracy.
As a result, the vouchers were not consistently reviewed to ensure that they were filed in accordance with travel regulations. In addition, we found that the voucher auditing process was not effective, resulting in payment errors that should have been detected. In our samples, we found that most errors were in the following categories. Missing or inconclusive receipts – We found instances in which voucher packages did not include all receipts required to support claims, as required by DOD and Navy regulations, yet payments were made. For example, a cardholder at Puget Sound Naval Shipyard who claimed cell phone charges totaling more than $1,000 on several partial vouchers did not submit a detailed breakdown of these phone charges. As a result, there was no indication that all of the charges were for official use. However, the voucher was processed and full payment was made to the traveler. Errors in calculating amounts paid – We found instances in which the voucher processing units paid for lodging expenses not incurred and made other errors in calculating incidental expenses, resulting in both over- and underpayments to the traveler. At Patuxent River, one traveler was reimbursed $395 in lodging expenses and $33 in lodging taxes; however, the hotel receipt for this travel claim indicated lodging expenses of $316 and lodging taxes of $24. Thus, the traveler was overpaid a total of $88. Other errors related to the reimbursement of telephone calls and car mileage, and the failure to pay excess baggage fees expressly authorized in the travel order. Other errors related to the transposition of numbers. Most of these errors were relatively small in terms of dollar amounts. However, we found errors that were significant in comparison to the travel voucher amount. For example, at one case study site a traveler claimed an ATM fee of $17.25 on a voucher totaling less than $1,000, but the amount was entered into the travel reimbursement system as $1,725. 
As a result, the cardholder was overpaid by more than $1,700. Although this voucher was audited by the voucher processing unit, the error was not detected. As a result of our audit, the Navy unit has taken actions to recover this and other overpayments. The intent of the travel card program was to improve convenience for the traveler and to reduce the government’s costs of administering travel. However, when the Navy implemented the travel card as part of its travel program, it did not provide the control infrastructure—primarily human capital—necessary to manage and oversee the use of government travel cards. Consequently, a weak internal control environment in the travel card program has resulted in a significant level of delinquencies and charge-offs of bad debts, as well as travel card fraud and abuse. This has resulted in millions of dollars of costs to the Navy, including higher fees, lost rebates, and substantial time pursuing and collecting delinquent travel card accounts. DOD and the Navy have taken positive steps to reduce the delinquencies and charge-offs, including establishing a system of wage and retirement payment offset for many employees, encouraging the use of split disbursements where travel reimbursements are sent directly to the bank rather than the employee, and making management of the travel program a priority for the Navy commands. These actions have resulted in significant collections of previously charged-off and delinquent accounts. DOD and the Navy have also proposed additional steps as reported in the June 27, 2002, DOD Charge Card Task Force report to improve the controls over the travel card program. However, these Navy and DOD actions have primarily addressed the symptoms rather than the underlying causes of the problems with the program. Specifically, actions to date have focused on dealing with accounts that are seriously delinquent, which are back-end or detective controls rather than preventive controls. 
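Both kinds of voucher errors described above, mismatches between amounts paid and supporting receipts and transposed data entry, are amenable to a simple arithmetic cross-check during voucher audit. The following is a minimal sketch using the dollar figures from the report's own examples; the function and tolerance are illustrative assumptions, not the Navy's actual audit logic:

```python
def audit_voucher_line(amount_paid, receipt_total, tolerance=0.005):
    """Compare the amount entered for payment against the receipt-supported
    total; return the overpayment (+) or underpayment (-), or 0.0 if the
    amounts agree within a small rounding tolerance."""
    difference = round(amount_paid - receipt_total, 2)
    return difference if abs(difference) > tolerance else 0.0

# Patuxent River lodging example: $395 + $33 paid vs. $316 + $24 receipted.
print(audit_voucher_line(395.00 + 33.00, 316.00 + 24.00))  # 88.0
# Transposed ATM fee: $17.25 keyed into the system as $1,725.00.
print(audit_voucher_line(1725.00, 17.25))  # 1707.75
```

A check of this kind, applied line by line before payment, would have flagged both the $88 lodging overpayment and the $1,700 transposition error before funds were disbursed.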
To effectively reform the travel program, DOD and the Navy will need to work to prevent potentially fraudulent and abusive activity and severe credit problems with the travel card. The fiscal year 2003 Department of Defense Appropriations Act requires the Secretary of Defense to establish guidelines and procedures for disciplinary actions to be taken against cardholders for improper, fraudulent, or abusive use of the government travel card and to deny issuance of the government travel card to individuals who are not creditworthy. Further, the Bob Stump National Defense Authorization Act for Fiscal Year 2003 provides authority for the Secretary of Defense to require (1) use of the split disbursement payment process, where any part of a DOD employee’s or service member’s travel reimbursement is paid directly to the travel card-issuing bank, and (2) deductions of prescribed amounts from salary and retirement pay of DOD employees or service members, including civilian and military retirees, who have delinquent travel card balances and payment of those amounts to the travel card-issuing bank. To strengthen the overall control environment and improve internal control for the Navy’s travel card program, we recommend that the Secretary of the Navy take the following actions. We also recommend that the Under Secretary of Defense (Comptroller) assess the following recommendations and, where applicable, incorporate them into or supplement the DOD Charge Card Task Force recommendations to improve travel card policies and procedures throughout DOD. We recommend that the Secretary of the Navy establish specific policies and procedures governing the issuance of individual travel cards to military and civilian employees, including the following: Provide individuals with no prior credit histories with “restricted” travel cards with low credit and ATM limits. Develop procedures to periodically evaluate frequency of card usage to identify accounts of infrequent travelers.
Cancel accounts for current infrequent travelers, as noted in the Charge Card Task Force report, in order to minimize exposure to fraud and abuse. Evaluate the feasibility of activating and deactivating all cards, regardless of whether they are standard or restricted cards, so that cards are available for use only during the periods authorized by the cardholders’ travel orders. At a minimum, this policy should focus on controlling travel card use by “high-risk” enlisted military personnel in the E-1 to E-6 grades. Develop comprehensive, consistent Navy-wide initial training and periodic refresher training for travel cardholders, focused on the purpose of the program and appropriate uses of the card. The training should emphasize the prohibitions on personal use of the card, including gambling, personal travel, and adult entertainment. Such training should also address the policies and procedures of the travel order, voucher, and payment processes. For entry-level personnel, the training should also include information on basic personal financial management techniques to help avoid financial problems that could affect an individual’s ability to pay his or her travel card bill. We recommend that the Secretary of the Navy establish the following specific policies and procedures to strengthen controls and disciplinary actions for improper use of the travel card: Establish guidance regarding the knowledge, skills, and abilities required to carry out APC responsibilities effectively. Establish guidance on APC span-of-control responsibilities so that such responsibilities are properly aligned with time available to ensure effective performance. Determine whether certain APC positions should be staffed on a full-time basis rather than as collateral duties. Establish Navy-wide procedures to provide assurance that APCs receive training on their APC responsibilities. 
The training should include how to use EAGLS transaction reports and other available data to monitor cardholder use of the travel card—for example, reviewing account transaction histories to ascertain whether transactions are incurred during periods of authorized travel and appear to be appropriate travel expenses and are from approved MCCs. Establish guidance requiring APCs to review EAGLS reports to identify cardholders who have written NSF checks for payment of their account balances, and refer these employees for counseling or disciplinary action. Investigate and, if warranted, take appropriate disciplinary actions against cardholders who wrote three or more NSF checks to Bank of America. Establish Navy procedures to develop a data mining program to further facilitate APCs’ ability to identify potentially inappropriate transactions for further review. Establish Navy-wide procedures requiring that supervisors and commanders notify APCs of actions taken with respect to delinquent cardholders. Establish a Navy requirement for cognizant APCs to retain records documenting cardholders’ fraudulent or abusive use of the travel card. Establish appropriate, consistent Navy-wide procedures as a guide for taking disciplinary actions with respect to fraudulent and abusive activity and delinquency related to the travel card. Review records of individuals whose accounts have been charged off or placed in salary offset to determine whether they have been referred to DON CAF for security reviews. Strengthen procedures used to process employees separating from the service to ensure that all accounts are deactivated or closed, and repayment of any outstanding debts is arranged. Perform periodic review of exit procedures to determine that accounts of separated cardholders are deactivated or closed in a timely manner. Develop procedures to identify active cards of separated cardholders, including comparing cardholder and payroll data. 
Review, in conjunction with Bank of America, individuals with APC- level access to EAGLS to limit such access to only those with current APC duties. Develop a management plan to ensure that audits of the Navy travel card program are conducted regularly, and the results are reported to senior management. To improve travel voucher accuracy, we recommend that commanders at each unit identify causes of the high error rates related to travel voucher review and provide refresher training to ensure that voucher examiners and auditors are informed and can accurately apply travel regulations and updates. To ensure that travel vouchers are consistently reviewed prior to processing, we recommend that the Commander of Puget Sound Naval Shipyard take the following actions: Issue procedures to clearly assign responsibilities for reviewing the accuracy of the travel vouchers. Conduct periodic review to assess the effectiveness of the new procedures in reducing the frequency and amount of voucher errors. In written comments on a draft of this report, which are reprinted in appendix V, DOD concurred with 21 of 23 recommendations and partially concurred with the remaining 2 recommendations. DOD partially concurred with our recommendations regarding (1) establishing Navy-wide procedures requiring that supervisors and commanding officers notify the APCs of actions taken with respect to delinquent cardholders and (2) having commanders at each unit identify causes of the high error rates related to travel voucher review and provide refresher training to voucher examiners and auditors. We believe that DOD’s planned actions for these two areas, if effectively implemented, will address the intent of our recommendations. Concerning our recommendation that APCs be notified of actions by supervisors with respect to delinquent cardholders, DOD responded that providing this type of sensitive information to APCs is not appropriate. 
DOD considers it to be more appropriate that actions taken with respect to delinquent cardholders be reported up the chain of command and that the department decide at what level and at what frequency this reporting occur. Our recommendation did not contemplate that APCs would necessarily need details of disciplinary action, only that the APCs be informed that actions have been taken and by whom. Often the actions taken include verbal counseling. The written documentation maintained by the APC, which should refer to the official from whom authorized personnel may obtain details of the disciplinary actions, will provide a record that actions were taken and be a source for new commanders/supervisors in identifying people with previous credit card problems. Regarding having commanders identify causes of the high error rate related to travel voucher review and provide refresher training, DOD has requested that NAS conduct a review of the department’s end-to-end travel process and make recommendations to improve accountability and efficiency. Upon completion of the NAS review, DOD said it will distribute the appropriate guidance to all major commands. We agree that it would be beneficial for NAS to perform a comprehensive review of the travel process. In addition, to ensure immediate results, we believe that commanders, who are ultimately responsible and are more involved in the day-to-day operations, should take proactive steps in reviewing and correcting the weaknesses identified in this report. 
In addition, although DOD concurred with our recommendations to establish policies and procedures governing the issuance of individual travel cards to military and civilian employees, its response regarding employees with no prior credit history indicated that some may be issued cards with “…higher than ‘restricted’ limits to accomplish their mission.” While this may be required on a case-by-case basis, we believe that additional preventive managerial oversight to monitor these accounts would be beneficial. Management should also consider lowering the limit to established restricted levels once the mission is completed. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute this report until 30 days from its date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Defense, the Under Secretary of Defense (Comptroller), the Secretary of the Navy, and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov or John J. Ryan at (202) 512-9587 or ryanj@gao.gov if you or your staffs have any questions concerning this report. In 1983, the General Services Administration (GSA) awarded a governmentwide master contract with a private company to provide government-sponsored, contractor-issued travel cards to federal employees to be used to pay for costs incurred on official business travel. The intent of the travel card program was to provide increased convenience to the traveler and lower the government’s cost of travel by reducing the need for cash advances to the traveler and the administrative workload associated with processing and reconciling travel advances. 
The travel card program includes both individually billed accounts—accounts held and paid by individual cardholders—and centrally billed accounts that are used to purchase transportation or for the travel expenses of a unit and are paid directly by the government. As of the end of fiscal year 2001, over 2.1 million individually billed travel cards were issued to federal government travelers. These travel cardholders charged $3.6 billion in the same fiscal year. Under the current GSA master contract, the Department of Defense entered into a tailored task order with Bank of America to provide travel card services to DOD and the military services, including the Navy. Table 9 provides the number of individually billed travel cards outstanding and related dollar amount of travel card charges by DOD and its components in relation to the total federal government. As shown in table 9, DOD accounts for about 1.4 million, or 66 percent, of the total number of the individually billed travel cards issued by the entire federal government and DOD’s cardholders charged about $2.1 billion, or about 59 percent of the federal government’s travel card charges during fiscal year 2001. Table 9 also shows that the Navy provided about 395,000 individually billed cards to its civilian and military employees as of September 2001. These cardholders charged an estimated $510 million to their travel cards during fiscal year 2001. The Travel and Transportation Reform Act of 1998 (Public Law 105-264) expanded the use of government travel cards by mandating the use of the cards for all official travel unless specifically exempted. The act is intended to reduce the overall cost of travel to the federal government through reduced administrative costs and by taking advantage of rebates from the travel card contractor. These rebates are based on the volume of transactions incurred on the card and cardholders paying their monthly travel card bills on time. 
To help ensure timely payments, the act requires that agencies reimburse cardholders for proper travel claims within 30 days of submission of approved travel vouchers by the cardholders. Further, the act allows, but does not require, agencies to offset a cardholder’s pay for amounts the cardholder owes to the travel card contractor as a result of travel card delinquencies not disputed by the cardholder. The act calls for GSA to issue regulations incorporating the requirements of the act. GSA incorporated the act’s requirements into the Federal Travel Regulation. The Federal Travel Regulation governs travel and transportation and relocation allowances for all federal government employees, including overall policies and procedures governing the use of government travel cards. Agencies are required to follow the requirements of GSA’s Federal Travel Regulation, but can augment these regulations with their own implementing regulations. DOD issued its Financial Management Regulations (FMR), Volume 9, Chapter 3, Travel Policies and Procedures to supplement GSA’s travel regulations. DOD’s Joint Travel Regulations, Volume 1 (for Uniformed Service Members), and Volume 2 (for Civilian Personnel) refer to the FMR as the controlling regulation for DOD’s travel cards. Further, in January 2002, the Navy eBusiness Operations Office issued Instruction 4650.1, Policies and Procedures for the Implementation and Use of the Government Travel Charge Card to supplement the FMR. In addition, some of the Navy’s individual commands and units have issued their own instructions supplementing GSA and DOD guidelines. As shown in figure 6, the Navy’s travel card management program for individually billed travel card accounts encompasses card issuance, travel authorization, cardholders charging goods and services on their travel cards, travel voucher processing and payment, and managing travel card usage and delinquencies.
When a Navy civilian or military employee or the employee’s supervisor determines that he or she will need a travel card, the employee contacts the unit’s travel card agency program coordinator (APC) to complete an individually billed card account application form. As shown in figure 7, the application requires the applicant to provide pertinent information, including full name and social security number, and indicate whether he or she is an active, reserve, or a civilian employee of the Navy. The applicant is also required to initial a statement on the application acknowledging that he or she has read and understands the terms of the travel card agreement and agrees to be bound by these terms, including a provision acknowledging that the card will be used only for official travel. The APC is required to complete the portion of the member’s application concerning who will be responsible for managing the use and delinquencies related to the card. Bank of America is required to issue a travel card to all applicants for whom it receives completed applications signed by the applicants, the applicants’ supervisors, and the APCs. Bank of America issues travel cards with either a standard or restricted credit limit. If an employee has little or no credit history or poor credit based on a credit check performed by Bank of America, it will suggest to the service that the applicant receive a restricted credit limit of $2,500 instead of the standard credit limit of $10,000. However, as shown in figure 7, the application allows the employee to withhold permission for Bank of America to obtain credit reports. If this option is selected, Bank of America automatically issues a restricted credit limit card to the applicant. When cardholders leave the Navy, they are required to contact their APCs and notify them of their planned departure. Based on this notification from the cardholders, the APCs are to deactivate or terminate the cardholders’ accounts. 
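The credit-limit rule described above can be summarized in a few lines. The sketch below simplifies one point: when an authorized credit check shows little or poor history, the report states that Bank of America only suggests the restricted limit to the service, whereas this sketch applies it automatically; the function name is illustrative:

```python
STANDARD_LIMIT = 10_000   # standard credit limit, in dollars
RESTRICTED_LIMIT = 2_500  # restricted credit limit, in dollars

def assign_credit_limit(credit_check_authorized, credit_history_ok=False):
    """Apply the issuance rule described above: an applicant who withholds
    permission for a credit check automatically receives the restricted
    limit; one whose check shows little or poor credit history is treated
    the same way here (a simplification, since the bank actually recommends
    rather than imposes the restricted limit in that case)."""
    if not credit_check_authorized:
        return RESTRICTED_LIMIT
    return STANDARD_LIMIT if credit_history_ok else RESTRICTED_LIMIT

print(assign_credit_limit(False))  # 2500
```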
When a cardholder is required to travel for official government purposes, he or she is issued a travel order authorizing travel. The travel order is required to specify the timing and purpose of the travel authorized. For example, the travel order is to authorize the mode of transportation, the duration and points of the travel, and the amounts of per diem and any cash advances. Further, the Navy can limit the amount of authorized reimbursement to military members based on the availability of lodging and dining facilities at military installations. For authorized travel, travelers must use their cards to pay for allowable expenses such as hotels and rental cars. The Navy generally uses a centrally billed transportation account to pay for air and rail transportation. Also, some units utilize unit cards, a form of centrally billed account, in lieu of travel charge cards for individually billed accounts for meals and lodging for group trips. When the travel card is submitted to a merchant, the merchant will process the charge through its banking institution, which in turn charges Bank of America. At the end of each banking cycle (once each month), Bank of America prepares a billing statement that is mailed to the cardholder for the amounts charged to the card. The statement also reflects all payments and credits made to the cardholder’s account. Bank of America requires that the cardholder make payment on the account in full within 30 days of the statement closing date. If the cardholder does not pay his or her monthly billing statement in full, and does not dispute the charges within 60 days of the statement closing date, the account is considered delinquent. Within 5 working days of return from travel, the cardholder is required to submit a travel voucher claiming legitimate and allowable expenses incurred while on travel. Further, the standard is for the cardholder to submit an interim voucher every 30 days for extended travel of more than 45 days. 
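The billing-cycle timing described above, payment due within 30 days of the statement closing date and delinquency at 60 days absent payment or dispute, can be expressed as a simple aging rule. A minimal sketch, with illustrative status labels:

```python
from datetime import date

def account_status(statement_closing, as_of, paid=False, disputed=False):
    """Age an account against the cycle described above: the balance is due
    30 days after the statement closing date, and an unpaid, undisputed
    balance becomes delinquent once more than 60 days have elapsed."""
    if paid or disputed:
        return "current"
    age_days = (as_of - statement_closing).days
    if age_days > 60:
        return "delinquent"
    if age_days > 30:
        return "past due"
    return "current"

print(account_status(date(2001, 1, 15), date(2001, 3, 31)))  # delinquent
```

An APC running this aging logic over EAGLS payment histories could identify accounts approaching delinquency before the bank's suspension and cancellation actions are triggered.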
The amount that cardholders are reimbursed for their meals and incidental expenses and hotels is limited by geographical rates established by GSA. Upon submission of a proper voucher by the cardholder, the Navy has 30 days in which to make reimbursement without incurring late payment fees. Cardholders are required to submit their travel vouchers to their supervisors or other designated approving officials who must review the vouchers and approve them for payment. If the review finds an omission or error in a voucher or its required supporting documentation, the approving official must inform the traveler of the error or omission. If the payment of the approved proper voucher takes longer than 30 days, the Navy is required to pay the cardholder a late payment fee plus an amount equal to the amount Bank of America would have been entitled to charge the cardholder had the cardholder not paid the bill by the due date. After the supervisor approves a cardholder’s travel voucher package for payment, it is processed by a voucher processing unit at the location to which the cardholder is assigned. The voucher processing unit enters travel information from the approved voucher into DOD’s Integrated Automated Travel System (IATS). IATS calculates the amount of per diem authorized in the travel order and voucher and the amount of mileage, if any, claimed by the cardholder. In addition, any other expenses claimed and approved are entered into IATS. Once the travel information from the voucher has been entered into IATS, the voucher may be selected for further review or “audit.” IATS selects 10 percent of vouchers under $2,500 and all vouchers $2,500 or greater for audits. If problems with the voucher are found during the initial entry of the information into IATS or during the audit of the information, the transaction can be rejected and returned to the cardholder for correction. 
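The IATS audit-selection rule just described (all vouchers of $2,500 or greater, plus 10 percent of smaller vouchers) can be illustrated with a short sketch. This is not IATS code: the function name, seed, and sample amounts are ours, and the sketch assumes the 10 percent sample is a simple random draw, which the report does not specify.

```python
import random

AUDIT_THRESHOLD = 2_500.00   # vouchers at or above this amount are always audited
SAMPLE_RATE = 0.10           # fraction of smaller vouchers selected at random

def select_for_audit(voucher_amount: float, rng: random.Random) -> bool:
    """Return True if a voucher should be routed for audit.

    Mirrors the rule described in the text: all vouchers of $2,500 or
    more are audited, plus a 10 percent random sample of the rest.
    """
    if voucher_amount >= AUDIT_THRESHOLD:
        return True
    return rng.random() < SAMPLE_RATE

rng = random.Random(42)  # fixed seed so the sketch is repeatable
vouchers = [180.00, 2_600.00, 950.00, 2_500.00, 47.25]
audited = [amt for amt in vouchers if select_for_audit(amt, rng)]
# Every voucher at or above $2,500 is always in the audited list;
# smaller vouchers appear only when the random draw selects them.
```

Under this rule, a rejected voucher would simply re-enter the same selection process after the cardholder corrects it.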
Once the vouchers are processed and audited, they are sent to DFAS for payment to the cardholder or to Bank of America and the cardholder, if the cardholder elected split disbursements whereby part of the DFAS reimbursement is sent to Bank of America. In addition to controlling the issuance and credit limits related to the travel card, APCs are also responsible for monitoring the use of and delinquencies related to travel card accounts for which they have been assigned management responsibility. Bank of America’s Web-based Electronic Account Government Ledger System (EAGLS) provides on-line tools that are intended to assist APCs in monitoring travel card activity and related delinquencies. Specifically, APCs can access EAGLS to monitor and extract reports on their cardholders’ travel card transaction activity and related payment histories. Both the Navy and Bank of America have a role in managing travel card delinquencies under GSA’s master contract. While APCs are responsible for monitoring cardholders’ accounts and for working with cardholders’ supervisors to address any travel card payment delinquencies, Bank of America is required to use EAGLS to notify the designated APCs if any of their cardholders’ accounts are in danger of suspension or cancellation. When Bank of America has not received a required payment on any travel cardholder’s account within 60 days of the billing statement closing date, it is considered delinquent. As summarized in figure 8, there are specific actions required by both DOD and Bank of America based on the number of days a cardholder’s account is past due. The following is a more detailed explanation of the required actions by DOD and/or Bank of America with respect to delinquent travel card accounts. 45 days past due—Bank of America is to send a letter to the cardholder requesting payment. 
Bank of America has the option to call the cardholder with a reminder that payment is past due and to advise the cardholder that the account will be suspended if it becomes 60 days past due. 55 days past due—Bank of America is to send the cardholder a presuspension letter warning that Bank of America will suspend the account if it is not paid. If Bank of America suspends an account, the card cannot be used until the account is paid. 60 days past due—The APC is to issue a 60-day delinquency notification memorandum to the cardholder and to the cardholder’s immediate supervisor, informing them that the cardholder’s account has been suspended due to nonpayment. The next day, a suspension letter is to be sent by Bank of America to the cardholder providing notice that the card has been suspended until payment is received. 75 days past due—Bank of America is to assess the account a late fee. The late fee charged by Bank of America was $20 through August 9, 2001. Effective August 10, 2001, Bank of America increased the late fee to $29 under the terms of the contract modification between Bank of America and DOD. Bank of America is allowed to assess an additional late fee every 30 days until the account is made current or charged off. 90 days past due—The APC is to issue a 90-day delinquency notification memorandum to the cardholder, the cardholder’s immediate supervisor, and the company commander (or unit director). The company commander is to initiate an investigation into the delinquency and take appropriate action, at the company commander’s discretion. At the same time, Bank of America is to send a “due process letter” to the cardholder providing notice that the account will be canceled if payment is not received within 30 days unless he or she enters into a payment plan, disputes the charge(s) in question, or declares bankruptcy. 120 days past due—The APC is to issue a 120-day delinquency notification memorandum to the cardholder’s commanding officer. 
At 126 days past due, the account is to be canceled by Bank of America. Beginning in October 2001, once accounts were 120 days past due, Bank of America began sending files to DFAS listing these accounts for salary offset. 180 days past due—Bank of America is to send “precharge-off” or last call letters to cardholders whose accounts were not put in salary offset informing them that Bank of America will charge off their accounts and report them to a credit bureau if payment is not received. A credit bureau is a service that reports the credit history of an individual. Banks and other businesses assess the creditworthiness of an individual using credit bureau reports. 210 days past due—Bank of America is to charge off any delinquent account that it was unable to put in the offset program and, if the balance is $50 or greater, report it to a credit bureau, unless another form of payment was forthcoming. Some accounts are pursued for collection by Bank of America’s recovery department, while others are sent to attorneys or collection agencies for recovery. The delinquency management process can be suspended when a cardholder’s APC informs Bank of America that the cardholder is on official travel but, through no fault of his or her own, is unable to submit vouchers and make timely payments on his or her account. Under such circumstances, the APC is to notify Bank of America that the cardholder is in “mission-critical” status. By activating this status, Bank of America is precluded from identifying the cardholder’s account as delinquent until 45 days after such time as the APC determines the cardholder is to be removed from mission-critical status. According to Bank of America, approximately 800 to 1,000 cardholders throughout DOD were in this status at any given time throughout fiscal year 2001. 
Pursuant to a joint request by the Chairman and Ranking Minority Member of the Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, House Committee on Government Reform, and the Ranking Minority Member of the Senate Committee on Finance, we audited the controls over the issuance, use, and monitoring of individually billed travel cards and associated travel processing and management for the Department of the Navy. Our assessment covered the reported magnitude and impact of delinquent and charged-off Navy travel card accounts for fiscal year 2001 and the first 6 months of fiscal year 2002, along with an analysis of causes and related corrective actions; an analysis of the universe of Navy travel card transactions during fiscal year 2001 and the first 6 months of fiscal year 2002 to identify potentially fraudulent and abusive activity related to the travel card; the Navy’s overall management control environment and the design of selected Navy travel program management controls, including controls over (1) travel card issuance, (2) APCs’ capacity to carry out assigned duties, (3) limiting card activation to meet travel needs, (4) transferred and “orphan” accounts, (5) procedures for terminating accounts when cardholders leave military service, and (6) access for Bank of America’s travel card database; and tests of statistical samples of transactions to assess the implementation of key management controls and processes for three Navy units’ travel activity including (1) travel order approval, (2) accuracy of travel voucher payments, (3) the timely submission of travel vouchers by travelers to the approving officials, and (4) the timely processing and reimbursement of travel vouchers by the Navy and DOD. 
We used as our primary criteria applicable laws and regulations, including the Travel and Transportation Reform Act of 1998 (Public Law 105-264), the General Services Administration’s (GSA) Federal Travel Regulation, and the Department of Defense Financial Management Regulations, Volume 9, Travel Policies and Procedures. We also used as criteria our Standards for Internal Control in the Federal Government and our Guide to Evaluating and Testing Controls Over Sensitive Payments. To assess the management control environment, we applied the fundamental concepts and standards in our internal control standards to the practices followed by management in the six areas reviewed. To assess the magnitude and impact of delinquent and charged-off accounts, we compared the Navy’s delinquency and charge-off rates to those of the other DOD services and federal agencies. We did not verify the accuracy of the data provided to us by Bank of America and GSA. We also analyzed the trends in the delinquency and charge-off data from fiscal year 2000 through the first half of fiscal year 2002. We also used data mining to identify Navy travel card transactions for individually billed accounts for audit. Our data mining procedures covered the universe of individually billed Navy travel card activity during fiscal year 2001 and the first 6 months of fiscal year 2002 and identified transactions that we believed were potentially fraudulent or abusive based upon the nature, amount, merchant, and other identifying characteristics of the transaction. However, our work was not designed to identify, and we did not determine, the extent of any potentially fraudulent or abusive activity related to the travel card. 
To assess the overall control environment for the travel card program at the Department of the Navy, we obtained an understanding of the travel process, including travel card management and oversight, by interviewing officials from the Office of the Undersecretary of Defense, Comptroller; Department of the Navy; Defense Finance and Accounting Service (DFAS); Bank of America; and GSA. We reviewed applicable policies and procedures and program guidance they provided. We visited three Navy units to “walk through” the travel process including the management of travel card usage and delinquency. Further, we contacted one of the three largest U.S. credit bureaus to obtain credit history data and information on how credit scoring models are developed and used by the credit industry for credit reporting. At each of the Navy locations we audited, we also used our review of policies and procedures and the results of our “walk-throughs” of travel processes and other observations to assess the effectiveness of controls over segregation of duties among persons responsible for issuing travel orders, preparing travel vouchers, processing and approving travel vouchers, and certifying travel voucher payments. We also reviewed computer system access controls for Electronic Account Government Ledger System (EAGLS)—the system used by Bank of America to maintain DOD travel card data. To determine whether these controls over EAGLS were effective, we interviewed Bank of America officials and observed EAGLS functions and capabilities. To test the implementation of key controls over individually billed Navy travel card transactions processed through the travel system—including the travel order, travel voucher, and payment processes—we obtained and used the database of fiscal year 2001 Navy travel card transactions to review random samples of transactions at three Navy locations. 
Because our objective was to test controls over travel card expenses, we excluded credits and miscellaneous debits (such as fees) from the population of transactions used to select random samples of travel card transactions to review at each of the three Navy units we audited. Each sampled transaction was subsequently weighted in the analysis to account statistically for all charged transactions at each of the three units, including those transactions that were not selected. We selected three Navy locations for testing controls over travel card activity based on the relative size of travel card activity at the 27 Navy commands and of the units under these commands, the number and percentage of delinquent accounts, and the number and percentage of accounts written off. We selected one unit from the Naval Sea Systems Command because that command represented 19 percent of the total travel card activity, 9 percent of past due accounts, and 7 percent of accounts charged off during fiscal year 2001. We also selected one unit from Naval Air Systems Command because that command represented approximately 12 percent of travel card activity, 4 percent of past due accounts, and 4 percent of accounts charged off during fiscal year 2001 across the Navy. We also selected U.S. Marine Corps Forces Atlantic because this command represented about 24 percent of Corps charge card activity, 23 percent of accounts past due, and 26 percent of accounts charged off. Each of the units within the commands was selected because of the relative size of the unit within the respective command. Table 10 presents the sites selected and the number of fiscal year 2001 transactions at each location. 
We performed tests on statistical samples of travel card transactions at each of the three case study sites to assess whether the system of internal controls over the transactions was effective, as well as to provide an estimate of the percentage of transactions by unit that were not for official government travel. For each transaction in our statistical sample, we assessed whether (1) there was an approved travel order prior to the trip, (2) the travel voucher payment was accurate, (3) the travel voucher was submitted within 5 days of the completion of travel, and (4) the traveler was paid within 30 days of the submission of an approved travel voucher. We considered transactions not related to authorized travel to be abuse and incurred for personal purposes. The results of the samples of these control attributes, as well as the estimate for personal use—or abuse—related to travel card activity, can be projected to the population of transactions at the respective case study site only, not to the population of travel card transactions for all Navy cardholders. We concluded that a control was effective if both the projected point estimate of the failure rate and the upper bound of a one-sided 95 percent confidence interval associated with the estimate were no more than 5 percent. We concluded that a control was ineffective if both the point estimate of the failure rate and the lower bound of a one-sided 95 percent confidence interval associated with the estimate were greater than 10 percent. Otherwise, we concluded that the control was partially effective. Tables 11 through 13 show (1) the results of our tests of key attributes, (2) the point estimates of the failure rates for the attributes, and (3) the two-sided 95 percent confidence intervals for the failure rates for each attribute. Table 11 shows the results of our test of the key control related to the authorization of travel (approved travel orders were prepared prior to the dates of travel). 
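The effective/ineffective/partially effective decision rule just described can be sketched in a few lines. The report does not state which interval method was used, so the sketch below uses a normal-approximation one-sided interval purely for illustration; the function name and its inputs are ours.

```python
import math

Z_95 = 1.645  # one-sided 95 percent critical value (normal approximation)

def classify_control(failures: int, sample_size: int) -> str:
    """Apply the decision rule from the text to one tested attribute.

    Effective:   point estimate and upper one-sided 95% bound <= 5%.
    Ineffective: point estimate and lower one-sided 95% bound  > 10%.
    Otherwise:   partially effective.
    """
    p = failures / sample_size
    half_width = Z_95 * math.sqrt(p * (1 - p) / sample_size)
    upper, lower = p + half_width, p - half_width
    if p <= 0.05 and upper <= 0.05:
        return "effective"
    if p > 0.10 and lower > 0.10:
        return "ineffective"
    return "partially effective"
```

For instance, 2 failures in a sample of 100 (2 percent, upper bound about 4.3 percent) would classify as effective under this rule, while 30 failures in 100 would classify as ineffective.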
Table 12 shows the results of our test for effectiveness of controls in place over the accuracy of travel voucher payments. Table 13 shows the results of our tests of two key controls related to timely processing of claims for reimbursement of expenses related to government travel—timely submission of the travel voucher by the employee and timely approval and payment processing. To determine if cardholders were reimbursed within 30 days, we used payment dates provided by DFAS. We did not independently validate the accuracy of these reported payment dates. We briefed Navy managers, including Assistant Secretary of the Navy (Financial Management and Comptroller) officials; and unit commanders and APCs of the details of our audit, including our findings and their implications. We incorporated their comments where appropriate. We conducted our audit work from December 2001 through October 2002 in accordance with generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. We received DOD comments on a draft of this report from the Under Secretary of Defense (Comptroller) dated December 5, 2002, and have reprinted those comments in appendix V. Table 14 shows the travel card delinquency rates for Navy’s major commands (and other Navy organizational units at a comparable level) that had outstanding balances over $1 million as of March 31, 2002. Commands with a March 31, 2002, balance outstanding under $1 million have been combined into "other." The Navy’s commands and other units are listed in descending order based on their respective delinquency rates as of March 31, 2002. The delinquency rates shown represent the total amount delinquent (amounts not paid within 61 days of the travel card monthly statement closing date) as a percentage of total amount owed by the command’s travel cardholders at the end of each quarter. 
Tables 15, 16, and 17 show the grade, rank (where relevant), and the associated basic pay rates for 2001 for Navy’s and Marine Corps’ military personnel and civilians. The basic 2001 pay rates shown exclude other considerations such as locality pay and any allowances for housing or cost of living. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
Poor oversight and management of DOD’s travel card program have led to high delinquency rates, costing DOD millions in lost rebates and increased ATM fees. As a result, the Congress asked GAO to report on (1) the magnitude, impact, and cause of delinquencies, (2) the types of fraudulent and abusive uses of the travel card, and (3) the effectiveness of internal controls over DOD’s travel card program. GAO previously reported on travel card management at the Army. This report focuses on travel card management at the Navy, including the Marine Corps. As of March 31, 2002, over 8,000 Navy cardholders had $6 million in delinquent debt. For the prior 2 years, the Navy’s average delinquency rate of 12 percent was nearly identical to that of the Army, which had the highest federal agency delinquency rate. Since November 1998, Bank of America had charged off nearly 14,000 Navy accounts totaling almost $17 million, and placed many more in a salary offset program similar to garnishment. During the period covered under this review, over 250 Navy personnel might have committed bank fraud by writing three or more nonsufficient fund (NSF) checks to Bank of America. In addition, as shown in the table, many cardholders abusively used the card for inappropriate purchases, including prostitution and gambling, without Navy management being aware of it. Many of these purchases were made when individuals were not on travel. The Navy’s overall delinquency and charge-off problems were primarily associated with lower-paid, low- to mid-level enlisted military personnel. A significant relationship also existed between travel card fraud, abuse, and delinquencies and individuals with substantial credit history problems. For example, some cardholders had accounts placed in collections while others had filed bankruptcies prior to receiving the card. The Navy’s practice of authorizing issuance of the travel card to virtually anyone who asked for it compounded these problems. 
We also found inconsistent documentation of disciplinary actions against cardholders who wrote NSF checks or had their accounts charged off or put in salary offset. Further, almost one-half of these cardholders still had, as of August 2002, active secret or top-secret clearances. Other control breakdowns included the Navy’s failure to provide the necessary staffing and training for effective oversight and its infrequent, or nonexistent, monitoring of travel card activities. As a result of these and similar findings in the Army travel card program, the recently enacted fiscal year 2003 Defense Appropriations Act included provisions requiring the Secretary of Defense to establish guidelines and procedures for disciplinary actions and to deny issuance of the travel card to individuals who are not creditworthy.
In March 2008, then Deputy Attorney General Craig Morford issued a memorandum—also known as the “Morford Memo”—to help ensure that the monitor selection process is collaborative, results in the selection of a highly qualified monitor suitable for the assignment, avoids potential conflicts of interest, and is carried out in a manner that instills public confidence. The Morford Memo requires USAOs and other DOJ litigating divisions to establish ad hoc or standing committees consisting of the office’s ethics advisor, criminal or section chief, and at least one other experienced prosecutor to consider the candidates—which may be proposed by either prosecutors, companies, or both—for each monitorship. DOJ components are also reminded to follow specified federal conflict of interest guidelines and to check monitor candidates for potential conflicts of interest with the company. In addition, the names of all selected monitors for DPAs and NPAs must be submitted to ODAG for final approval. Following issuance of the Morford Memo, DOJ entered into 35 DPAs and NPAs, 6 of which required the company to hire an individual to oversee the company’s compliance with the terms of the DPA. As of November 2009, DOJ had selected monitors for 4 of the 6 agreements. Based on our discussions with prosecutors and documentation from DOJ, we determined that for these 4 agreements, DOJ made the selections in accordance with Morford Memo guidelines. Further, while the Morford Memo does not specify a selection process that must be used in all cases, it suggests that in some cases it may be appropriate for the company to select the monitor or propose a pool of qualified candidates from which DOJ will select the monitor. In all 4 of these cases, the company either selected the monitor, subject to DOJ’s approval, or provided DOJ with proposed monitor candidates from among which DOJ selected the monitor. 
However, while we were able to determine that the prosecutors complied with the Morford Memo based on information obtained through our interviews, DOJ did not fully document the selection and approval process for 2 of the 4 monitor selections. The lack of such documentation will make it difficult for DOJ to validate to an independent third-party reviewer, as well as to Congress and the public, that prosecutors across DOJ offices followed Morford Memo guidelines and that monitors were selected in a way that was fair and merit based. For example, for 1 of these 2 agreements, DOJ did not document who in the U.S. Attorney’s Office was involved in reviewing the monitor candidates, which is important because the Morford Memo requires that certain individuals in the office be part of the committee to consider the selection or veto of monitor candidates in order to ensure monitors are not selected unilaterally. For the second agreement, the Deputy Attorney General’s approval of the selected monitor was relayed via telephone and not documented. As a result, in order to respond to our inquiries, DOJ officials had to reach out to individuals who were involved in the telephone call, one of whom was no longer a DOJ employee, to obtain information regarding the monitor’s approval. Documenting the reasons for selecting a particular monitor helps avoid the appearance of favoritism and verifies that Morford Memo processes and practices—which are intended to instill public confidence in the monitor selection process—were followed. Therefore, in our June 25, 2009, testimony, we recommended that the Deputy Attorney General adopt internal procedures to document both the process used and reasons for monitor selection decisions. DOJ agreed with our recommendation and, in August 2009, instituted such procedures. 
Specifically, DOJ requires ODAG to complete a checklist confirming receipt of the monitor selection submission—including the process used and reasons for selecting the monitor—from the DOJ component; ODAG’s review, recommendation, and decision to either approve or reject the proposed monitor; the DOJ component’s notification of ODAG’s decision; and ODAG’s documentation of these steps. For the two monitors selected during or after August 2009, DOJ provided us with completed checklists to confirm that ODAG had followed the new procedures. While DOJ selected monitors in accordance with the Morford Memo, monitor selections have been delayed for three agreements entered into after the Morford Memo was issued. The selection of one monitor took 15 months from the time the agreement was signed, and the selection of two monitors, as discussed above, has been delayed for more than 17 months from the time the agreement was signed. According to DOJ, the delays in selecting these three monitors have been due to challenges in identifying candidates with proper experience and resources who also do not have potential conflicts of interest with the company. Further, DOJ’s selection of monitors in these three cases took more time than its selection of monitors both prior to and since the issuance of the Morford Memo—which on average was about 2 months from the time the NPA or DPA was signed or filed. According to the Senior Counsel to the Assistant Attorney General for the Criminal Division, for these three agreements, the prosecutors overseeing the cases have communicated with the companies to ensure that they are complying with the agreements. Further, DOJ reported that the prosecutors are working with each of the companies to extend the duration of the DPAs to ensure that the duties and goals of each monitorship are fulfilled and, as of October 2009, an agreement to extend the monitorship had been signed for one of the DPAs. 
Such action by DOJ will better position it to ensure that the companies are in compliance with the agreements while awaiting the selections of the monitors. For the 48 DPAs and NPAs where DOJ required independent monitors, companies have hired a total of 42 different monitors, more than half of whom were former DOJ employees. Specifically, of these 42 monitors, 23 previously worked at DOJ, while 13 did not. The 23 monitors held various DOJ positions, including Assistant U.S. Attorney, Section Chief or Division Chief in a litigating component, U.S. Attorney, Assistant Attorney General, and Attorney General. The length of time between the monitor’s separation from DOJ and selection as monitor ranged from 1 year to more than 30 years, with an average of 13 years. Five individuals were selected to serve as monitors within 3 years or less of being employed at DOJ. In addition, 8 of these 23 monitors had previously worked in the USAO or DOJ litigating component that oversaw the DPA or NPA for which they were the monitor. In these 8 cases, the length of time between the monitor’s separation from DOJ and selection as monitor ranged from 3 years to 34 years, with an average of almost 15 years. Of the remaining 13 monitors with no previous DOJ experience, 6 had previous experience at a state or local government agency, for example, as a prosecutor in a district attorney’s office; 3 had worked in federal agencies other than DOJ, including the Securities and Exchange Commission and the Office of Management and Budget; 2 were former judges; 2 were attorneys in the military; 3 had worked solely in private practice in a law firm; and 1 had worked as a full-time professor. 
Of the 13 companies we spoke with that were required to hire independent monitors, in providing perspectives on monitors’ previous experience, representatives from 5 stated that prior employment at DOJ or an association with a DOJ employee could impede the monitor’s independence and impartiality, whereas representatives from the other 8 companies disagreed. Specific concerns raised by the 5 companies—2 of which had monitors with prior DOJ experience—included the possibility that the monitor would favor DOJ and have a negative predisposition toward the company or, if the monitor recently left DOJ, the monitor may not be considered independent; however, none of the companies identified specific instances with their monitors where this had occurred. Of the remaining 8 company representatives who did not identify concerns, 6 of them worked with monitors who were former DOJ employees, and some of these officials commented on their monitors’ fairness and breadth of experience. In addition, 5 company representatives we spoke with who were involved in the monitor selection process said that they were specifically looking for monitors with DOJ experience and knowledge of the specific area of law that the company violated. Officials from 8 of the 13 companies with whom we spoke raised concerns about their monitors, which were either related to how monitors were carrying out their responsibilities or issues regarding the overall cost of the monitorship. However, these companies said that it was unclear to what extent DOJ could help to address these concerns. Seven of the 13 companies identified concerns about the scope of the monitor’s responsibilities or the amount of work the monitor completed. 
For example, 1 company said that the monitor had a large number of staff assisting him on the engagement, and he and his staff attended more meetings than the company felt were necessary, some of which were unrelated to the monitor responsibilities delineated in the agreement, such as a community service organization meeting held at the company when the DPA was related to securities fraud. As a result, the company believes that the overall cost of the monitorship—with 20 to 30 lawyers billing the company each day—was higher than necessary. Another company stated that its monitor did not complete the work required in the agreement in the first phase of the monitorship—including failing to submit semi-annual reports on the company’s compliance with the agreement to DOJ during the first 2 years of the monitorship—resulting in the monitor having to complete more work than the company anticipated in the final phase of the monitorship. According to the company, this led to unexpectedly high costs in proportion to the company’s revenue in the final phase, which was significant because the company is small. Further, according to a company official, the monitor’s first report contained numerous errors that the company did not have sufficient time to correct before the report was submitted to DOJ and, thus, DOJ received a report containing errors. While 6 of the 13 companies we interviewed did not express concerns about the monitor’s rates, 3 companies expressed concern that the monitor’s rate (which ranged from $290 per hour to a rate of $695 to $895 per hour among the companies that responded to our survey) was high. Further, while 9 of the 13 companies that responded to our survey believed that the total compensation received by the monitor or monitoring firm was reasonable for the type and amount of work performed (which, according to the companies that responded to our survey, ranged from $8,000 to $2.1 million per month), 3 companies did not believe it was reasonable. 
When asked how they worked to resolve these issues with the monitor, companies reported that they were unaware of any mechanisms available to resolve the issues—including DOJ involvement—or, if they were aware that DOJ could get involved, they were reluctant to seek DOJ’s assistance. Specifically, 3 of the 8 companies that identified concerns with their monitor were not aware of any mechanism in place to raise these concerns with DOJ. Four companies were aware that they could raise these concerns with DOJ, but 3 of these companies said that they would be reluctant to raise these issues with DOJ for fear of repercussions. Another company did not believe that DOJ had the authority to address its concerns because they were related to staffing costs, which were delineated in the contract negotiated between the company and the monitor, not the DPA. However, DOJ had a different perspective than the company officials on its involvement in resolving disputes between companies and monitors. According to the Senior Counsel to the ODAG, while DOJ has not established a mechanism through which companies can raise concerns with their monitors to DOJ or clearly communicated to companies how they should do so, companies are aware that they can raise monitor-related concerns to DOJ if needed. Further, it was the Senior Counsel’s understanding that companies frequently raise issues regarding DPAs and NPAs to DOJ without concerns about retribution, although to his knowledge, no companies had ever raised monitor-related concerns to ODAG. The Senior Counsel acknowledged, however, that even if companies did raise concerns to DOJ regarding their monitors, the point in the DPA process at which they did so may determine the extent of DOJ’s involvement. 
Specifically, according to this official, while he believed that DOJ may be able to help resolve a dispute after the company and monitor enter into a contract, he stated that, because DOJ is not a party to the contract, if a conflict were to arise over, for instance, the monitor’s failure to complete periodic reports, DOJ could not compel the monitor to complete the reports, even if the requirement to submit periodic reports was established in the DPA or NPA. In contrast, the Senior Counsel said that if issues between monitors and companies arise prior to the two parties entering into a contract, such as during the fee negotiation phase, DOJ may be able to play a greater role in resolving the conflict. However, the mechanisms that DOJ could use to resolve such issues with the monitor are uncertain: while the monitor’s role is delineated in the DPA, there is no contractual agreement between DOJ and the monitor. DOJ is not a party to the monitoring contract signed by the company and the monitor, and the monitor is not a party to the DPA signed by DOJ and the company. We are aware of at least one case in which the company sought DOJ’s assistance in addressing a conflict with the monitor regarding fees, prior to the monitor and company signing their contract. Specifically, one company raised concerns about the monitor to the U.S. Attorney handling the case, stating that, among other things, the company believed the monitor’s fee arrangement was unreasonably high and the monitor’s proposed billing arrangements were not transparent. The U.S. Attorney declined to intervene in the dispute, stating that it was still at a point at which the company and the monitor could resolve it. The U.S. Attorney instructed the company to quickly resolve the dispute directly with the monitor—noting that otherwise, the dispute might distract the company and the monitor from resolving the criminal matters that were the focus of the DPA. The U.S. 
Attorney also asked the company to provide an update on its progress in resolving the conflict the following week. A legal representative of the company stated that he did not believe he had any other avenue for addressing this dispute after the U.S. Attorney declined to intervene. As a result, although the company disagreed with the high fees, it signed the contract because it did not want to begin the monitorship with a poor relationship with the monitor resulting from a continued fee dispute. The Senior Counsel to the ODAG stated that because the company is signatory to both the DPA or NPA and the contract with the monitor, it is the company’s responsibility to ensure that the monitor is performing the duties described in the agreement. However, 5 of the 7 companies that had concerns about the scope of the monitor’s responsibilities or the amount of work the monitor completed did not feel as if they could adequately address their issues by discussing them with the monitors. Specifically, 2 companies said that they lacked leverage to address issues with monitors, and 2 feared repercussions if they raised issues with their monitors. The Senior Counsel stated that one way the company could hold the monitor accountable is by incorporating the monitor requirements listed in the DPA into the monitoring contract and additionally including a provision in the contract that the monitor can be terminated for not meeting these requirements. However, the companies that responded to our survey did not generally include monitor termination provisions in their contracts. Specifically, 7 of the 13 companies that responded to our survey reported that their monitoring contract contained no provisions regarding termination of the monitor, and another 3 companies reported that their contract contained a clause that actually prohibited the company from terminating the monitor. 
Only 1 company that responded to our survey reported that the contract allowed it to terminate the monitor with written notice at any time, once the company and DOJ agreed (and subject to the company’s obligation to pay the monitor). This contract also included a provision allowing for the use of arbitration to resolve disputes between the company and the monitor over, for instance, services rendered and fees. In order to more consistently include such termination clauses in the monitoring contracts, companies would need the monitor’s consent. Given that DOJ makes the final decision regarding the selection of a particular monitor—and that DOJ allows for, but does not require, company involvement in the monitor selection process—it is uncertain how much leverage the company would have to negotiate that such termination or dispute resolution terms be included in the contract with the monitor. Because monitors are one mechanism that DOJ uses to ensure that companies are reforming and meeting the goals of DPAs and NPAs, DOJ has an interest in monitors performing their duties properly. While over the course of our review, we discussed with DOJ officials various mechanisms by which conflicts between companies and monitors could be resolved, including when it would be appropriate for DOJ to be involved, DOJ officials acknowledged that prosecutors may not be having similar discussions with companies about resolving conflict. This could lead to differing perspectives between DOJ and companies on how such issues should be addressed. Internal control standards state that agency management should ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. According to DOJ officials, the Criminal Division Fraud Section has made some efforts to clarify what role it will play in resolving disputes between the company and the monitor. 
For example, 11 of 17 DPAs or NPAs entered into by the Fraud Section that required monitors allowed companies to bring to DOJ’s attention any disputes over implementing recommendations made by monitors during the course of their reviews of company compliance with DPAs and NPAs. In addition, 8 of these 11 agreements provide for DOJ to resolve disputes between the company and the monitor related to the work plan the monitor submitted to DOJ and the company before beginning its review of the company. Additionally, in 5 agreements entered into by one USAO, the agreement specified that the company could bring concerns about unreasonable costs of outside professionals—such as accountants or consultants—hired by the monitor to the USAO for dispute resolution. While the Criminal Division Fraud Section and one USAO have made efforts to articulate in the DPA or NPA the extent to which DOJ would be willing to be involved in resolving specific kinds of monitor issues for that particular case, other DOJ litigating divisions and USAOs that entered into DPAs and NPAs have not. Clearly communicating to companies and monitors in each DPA and NPA the role DOJ will play in addressing companies’ disputes with monitors would help better position DOJ to be notified of potential issues companies have identified related to monitor performance. According to DOJ, DPAs and NPAs can be invaluable tools for fighting corporate corruption and helping to rehabilitate a company, although use of these agreements has not been without controversy. DOJ has taken steps to address concerns that monitors are selected based on favoritism or bias by developing and subsequently adhering to the Morford Memo guidelines. However, once the monitors are selected and any issues—such as fee disputes or concerns with the amount of work the monitor is completing—arise between the monitor and the company, it is not always clear what role, if any, DOJ will play in helping to resolve these issues. 
Clearly communicating to companies and monitors the role DOJ will play in addressing companies’ disputes with monitors would help better position DOJ to be made aware of issues companies have identified related to monitor performance, which is of interest to DOJ since it relies on monitors to assess companies’ compliance with DPAs and NPAs. We are continuing to assess the potential need for additional guidance or other improvements in the use of DPAs and NPAs in our ongoing work. To provide clarity regarding DOJ’s role in resolving disputes between companies and monitors, the Attorney General should direct all litigating components and U.S. Attorneys Offices to explain in each corporate DPA or NPA what role DOJ could play in resolving such disputes, given the facts and circumstances of the case. We requested comments on a draft of this statement from DOJ. DOJ did not provide official written comments to include in the statement. However, in an email sent to us on November 17, 2009, DOJ provided technical comments, which we incorporated into the statement, as appropriate. For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or larencee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Kristy N. Brown, Jill Evancho, Tom Jessor, Sarah Kaczmarek, Danielle Pakdaman, and Janet Temko, as well as Katherine Davis and Amanda Miller. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Recent cases of corporate fraud and mismanagement heighten the Department of Justice's (DOJ) need to appropriately punish and deter corporate crime. Recently, DOJ has made more use of deferred prosecution and non-prosecution agreements (DPAs and NPAs), in which prosecutors may require company reform, among other things, in exchange for deferring prosecution, and may also require companies to hire an independent monitor to oversee compliance. This testimony addresses (1) the extent to which prosecutors adhered to DOJ's monitor selection guidelines, (2) the prior work experience of monitors and companies' opinions of this experience, and (3) the extent to which companies raised concerns about their monitors, and whether DOJ had defined its role in resolving these concerns. Among other steps, GAO reviewed DOJ guidance and examined the 152 agreements negotiated from 1993 (when the first 2 were signed) through September 2009. GAO also interviewed DOJ officials, obtained information on the prior work experience of monitors who had been selected, and interviewed representatives from 13 companies with agreements that required monitors. These results, while not generalizable, provide insights into monitor selection and oversight. Prosecutors adhered to DOJ guidance issued in March 2008 in selecting monitors required under agreements entered into since that time. Monitor selections in two cases have not yet been made due to challenges in identifying candidates with proper experience and resources and without potential conflicts of interests with the companies. DOJ issued guidance in March 2008 to help ensure that the monitor selection process is collaborative and based on merit; this guidance also requires prosecutors to obtain Deputy Attorney General approval for the monitor selection. 
For DPAs and NPAs requiring independent monitors, companies hired a total of 42 different individuals to oversee the agreements; 23 of the 42 monitors had previous experience working for DOJ--which some companies valued in a monitor choice--and those without prior DOJ experience had worked in other federal, state, or local government agencies, the private sector, or academia. The length of time between the monitor's leaving DOJ and selection as a monitor ranged from 1 year to over 30 years, with an average of 13 years. While most of the companies we interviewed did not express concerns about monitors having prior DOJ experience, some companies raised general concerns about potential impediments to independence or impartiality if the monitor had previously worked for DOJ or had associations with DOJ officials. Representatives for more than half of the 13 companies with whom GAO spoke raised concerns about the monitor's cost, scope, and amount of work completed--including the completion of compliance reports required in the DPA or NPA--and were unclear as to the extent DOJ could be involved in resolving such disputes, but DOJ has not clearly communicated to companies its role in resolving such concerns. Companies and DOJ have different perceptions about the extent to which DOJ can help to resolve monitor disputes. DOJ officials GAO interviewed said that companies should take responsibility for negotiating the monitor's contract and ensuring the monitor is performing its duties, but that DOJ is willing to become involved in monitor disputes. However, some company officials were unaware that they could raise monitor concerns to DOJ or were reluctant to do so. Internal control standards state that agency management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. While one of the DOJ litigating divisions and one U.S. 
Attorney's Office have made efforts to articulate in the DPAs and NPAs what role they could play in resolving monitor issues, other DOJ litigation divisions and U.S. Attorney's Offices have not done so. Clearly communicating to companies the role DOJ will play in addressing companies' disputes with monitors would help increase awareness among companies and better position DOJ to be notified of potential issues related to monitor performance.
Federal statutes provide for civil and criminal penalties for the production, advertising, possession, receipt, distribution, and sale of child pornography. Of particular relevance to this report, the child pornography statutes prohibit the use of any means of interstate or foreign commerce (which will typically include the use of an interactive computer service) to sell, advertise, distribute, receive, or possess child pornography. Additionally, federal obscenity statutes prohibit the use of any means of interstate or foreign commerce or an interactive computer service to import, transport, or distribute obscene material or to transfer obscene material to persons under the age of 16. Child pornography is defined by statute as the visual depiction of a minor—a person under 18 years of age—engaged in sexually explicit conduct. By contrast, whether material is defined as obscene depends on whether an average person, applying contemporary community standards, would interpret the work—including images—to appeal to the prurient interest and to be patently offensive, and whether a reasonable person would find that the material lacks serious literary, artistic, political, or scientific value. In addition to making it a crime to transport, receive, sell, distribute, advertise, or possess child pornography in interstate or foreign commerce, federal child pornography statutes prohibit, among other things, the use of a minor in producing pornography, and they provide for criminal and civil forfeiture of real and personal property used in making child pornography and of the profits of child pornography. Child pornography, which is intrinsically related to the sexual abuse of children, is unprotected by the First Amendment. Nor does the First Amendment protect the production, distribution, or transfer of obscene material. 
In enacting the Child Pornography Prevention Act of 1996, Congress sought to expand the federal prohibition against child pornography from images that involve actual children to sexually explicit images that only appear to depict minors but were produced without using any real children. The act defines child pornography as “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture” that “is, or appears to be, of a minor engaging in sexually explicit conduct” or is “advertised, promoted, presented, described, or distributed in such a manner that conveys the impression that the material is or contains a visual depiction of a minor engaging in sexually explicit conduct.” Last year, the Supreme Court struck down this legislative attempt to ban “virtual” child pornography in Ashcroft v. The Free Speech Coalition, ruling that the expansion of the act to material that did not involve and thus harm actual children in its creation is an unconstitutional violation of free speech rights. According to government officials, this ruling may increase the difficulty faced by law enforcement agencies in prosecuting those who produce and possess child pornography. Since the government must establish that the digital images of children engaged in sexual acts are those of real children, it may be difficult to prosecute cases in which the defendants claim that the images in question are of “virtual” children. Historically, pornography, including child pornography, tended to be found mainly in photographs, magazines, and videos. The arrival and the rapid expansion of the Internet and its technologies, the increased availability of broadband Internet services, advances in digital imaging technologies, and the availability of powerful digital graphic programs have brought about major changes in both the volume and the nature of available child pornography. 
The proliferation of child pornography on the Internet is prompting wide concern. According to a recent survey, over 90 percent of Americans say they are concerned about child pornography on the Internet, and 50 percent of Americans cite child pornography as the single most heinous crime that takes place on line. According to experts, pornographers have traditionally exploited—and sometimes pioneered—emerging communication technologies—from the dial-in bulletin board systems of the 1970s to the World Wide Web—to access, trade, and distribute pornography, including child pornography. Today, child pornography is available through virtually every Internet technology (see table 1). Among the principal channels for the distribution of child pornography are commercial Web sites, Usenet newsgroups, and peer-to-peer networks. Web sites. According to recent estimates, there are about 400,000 commercial pornography Web sites worldwide, with some of the sites selling pornographic images of children. The profitability and the worldwide reach of the child pornography trade were recently demonstrated by an international child pornography ring that included a Texas-based firm providing credit card billing and password access services for one Russian and two Indonesian child pornography Web sites. According to the U.S. Postal Inspection Service, the ring grossed as much as $1.4 million in just 1 month selling child pornography to paying customers. Usenet. Usenet newsgroups are also providing access to pornography, with several of the image-oriented newsgroups being focused on child erotica and child pornography. These newsgroups are frequently used by commercial pornographers who post “free” images to advertise adult and child pornography available for a fee from their Web sites. 
The increase in the availability of child pornography in Usenet newsgroups represents a change from the mid-1990s, when a 1995–96 study of 9,800 randomly selected images taken from 32 Usenet newsgroups found that only a small fraction of posted images contained child pornography themes. Peer-to-peer networks. Although peer-to-peer file-sharing programs are largely known for the extensive sharing of copyrighted digital music, they are emerging as a conduit for the sharing of child pornography images and videos. A recent study by congressional staff found that one use of file-sharing programs is to exchange pornographic materials, such as adult videos. The study found that a single search for the term “porn” using a similar file-sharing program yielded over 25,000 files, more than 10,000 of which were video files appearing to contain pornographic images. In another study, focused on the availability of pornographic video files on peer-to-peer sharing networks, a sample of 507 pornographic video files retrieved with a file-sharing program included about 3.7 percent child pornography videos. Table 2 shows the key national organizations and agencies that are currently involved in efforts to combat child pornography on peer-to-peer networks. The National Center for Missing and Exploited Children (NCMEC), a federally funded nonprofit organization, serves as a national resource center for information related to crimes against children. Its mission is to find missing children and prevent child victimization. The center’s Exploited Child Unit operates the CyberTipline, which receives child pornography tips provided by the public; its CyberTipline II also receives tips from Internet service providers. The Exploited Child Unit investigates and processes tips to determine if the images in question constitute a violation of child pornography laws. The CyberTipline provides investigative leads to the Federal Bureau of Investigation (FBI), U.S. 
Customs, the Postal Inspection Service, and state and local law enforcement agencies. The FBI and the U.S. Customs also investigate leads from Internet service providers via the Exploited Child Unit’s CyberTipline II. The FBI, Customs Service, Postal Inspection Service, and Secret Service have staff assigned directly to NCMEC as analysts. Two organizations in the Department of Justice have responsibilities regarding child pornography: the FBI and the Justice Criminal Division’s Child Exploitation and Obscenity Section (CEOS). The FBI investigates various crimes against children, including federal child pornography crimes involving interstate or foreign commerce. It deals with violations of child pornography laws related to the production of child pornography; selling or buying children for use in child pornography; and the transportation, shipment, or distribution of child pornography by any means, including by computer. CEOS prosecutes child sex offenses and trafficking in women and children for sexual exploitation. Its mission includes prosecution of individuals who possess, manufacture, produce, or distribute child pornography; use the Internet to lure children to engage in prohibited sexual conduct; or traffic in women and children interstate or internationally to engage in sexually explicit conduct. Two organizations in the Department of the Treasury have responsibilities regarding child pornography: the Customs Service and the Secret Service. The Customs Service targets illegal importation and trafficking in child pornography and is the country’s front line of defense in combating child pornography distributed through various channels, including the Internet. Customs is involved in cases with international links, focusing on pornography that enters the United States from foreign countries. 
The Customs CyberSmuggling Center has the lead in the investigation of international and domestic criminal activities conducted on or facilitated by the Internet, including the sharing and distribution of child pornography on peer-to-peer networks. Customs maintains a reporting link with NCMEC, and it acts on tips received via the CyberTipline from callers reporting instances of child pornography on Web sites, Usenet newsgroups, chat rooms, or the computers of users of peer-to-peer networks. The center also investigates leads from Internet service providers via the Exploited Child Unit’s CyberTipline II. The U.S. Secret Service does not investigate child pornography cases on peer-to-peer networks; however, it does provide forensic and technical support to NCMEC, as well as to state and local agencies involved in cases of missing and exploited children. In November 2002, we reported that federal agencies are effectively coordinating their efforts to combat child pornography, and we recommended that the Attorney General designate the Postal Inspection Service and Secret Service as agencies that should receive reports and tips of child pornography under the Protection of Children from Sexual Predators Act of 1998 in addition to the FBI and Customs. The Department of Justice, while agreeing with our finding that federal agencies have mechanisms in place to coordinate their efforts, did not fully support our conclusion and recommendation that federal coordination efforts would be further enhanced if the Postal Inspection Service and the Secret Service were provided direct access to tips reported to NCMEC by remote computing service and electronic communication service providers. Justice said that the FBI and Customs, the agencies that currently have direct access, can and do share these tips with the Secret Service and the Postal Inspection Service, as appropriate, and Justice believes that this coordination has been effective. 
Justice questioned whether coordination would be further enhanced by having the Secret Service and the Postal Inspection Service designated to receive access to these tips directly from NCMEC; however, Justice said that it is studying this issue as it finalizes regulations implementing the statute. Child pornography is easily shared and accessed through peer-to-peer file-sharing programs. Our analysis of 1,286 titles and file names identified through KaZaA searches on 12 keywords showed that 543 (about 42 percent) of the files had titles and file names associated with child pornography images. Of the remaining files, 34 percent were classified as adult pornography, and 24 percent as nonpornographic (see fig. 1). No files were downloaded for this analysis. The ease of access to child pornography files was further documented by retrieval and analysis of image files, performed on our behalf by the Customs CyberSmuggling Center. Using 3 of the 12 keywords that we used to document the availability of child pornography files, a CyberSmuggling Center analyst used KaZaA to search, identify, and download 305 files, including files containing multiple images and duplicates. The analyst was able to download 341 images from the 305 files identified through the KaZaA search. The CyberSmuggling Center analysis of the 341 downloaded images showed that 149 (about 44 percent) of the downloaded images contained child pornography (see fig. 2). The center classified the remaining images as child erotica (13 percent), adult pornography (29 percent), or nonpornographic (14 percent). These results are consistent with the observations of NCMEC, which has stated that peer-to-peer technology is increasingly popular for the dissemination of child pornography. However, it is not the most prominent source for child pornography. As shown in table 3, since 1998, most of the child pornography referred by the public to the CyberTipline was found on Internet Web sites. 
Since 1998, the center has received over 76,000 reports of child pornography, of which 77 percent concerned Web sites, and only 1 percent concerned peer-to-peer networks. Web site referrals have grown from about 1,400 in 1998 to over 26,000 in 2002—or about a nineteenfold increase. NCMEC did not track peer-to-peer referrals until 2001. In 2002, peer-to-peer referrals increased more than fourfold, from 156 to 757, reflecting the increased popularity of file-sharing programs. Juvenile users of peer-to-peer networks face a significant risk of inadvertent exposure to pornography when searching and downloading images. In a search using innocuous keywords likely to be used by juveniles searching peer-to-peer networks (such as names of popular singers, actors, and cartoon characters), almost half of the images downloaded were classified as adult or cartoon pornography. Juvenile users may also be inadvertently exposed to child pornography through such searches, but the risk of such exposure is smaller than that of exposure to pornography in general. To document the risk of inadvertent exposure of juvenile users to pornography, the Customs CyberSmuggling Center performed KaZaA searches using innocuous keywords that would likely be used by juveniles. The center image searches used three keywords representing the names of a popular female singer, child actors, and a cartoon character. A center analyst performed the search, retrieval, and analysis of the images, each of which was classified into one of five categories: child pornography, child erotica, adult pornography, cartoon pornography, or nonpornographic. The searches produced 157 files, some of which were duplicates. The analyst was able to download 177 images from the 157 files identified through the search. 
As shown in figure 3, our analysis of the CyberSmuggling Center’s classification of the 177 downloaded images determined that 61 images contained adult pornography (34 percent), 24 images consisted of cartoon pornography (14 percent), 13 images contained child erotica (7 percent), and 2 images (1 percent) contained child pornography. The remaining 77 images were classified as nonpornographic. Because law enforcement agencies do not track the resources dedicated to specific technologies used to access and download child pornography on the Internet, we were unable to quantify the resources devoted to investigations concerning peer-to-peer networks. These agencies (including the FBI, CEOS, and Customs) do devote significant resources to combating child exploitation and child pornography in general. Law enforcement officials told us, however, that as tips concerning child pornography on peer-to-peer networks increase, they are beginning to focus more law enforcement resources on this issue. In fiscal year 2002, the key organizations involved in combating child pornography on peer-to-peer networks reported the following levels of funding: NCMEC received about $12 million for its congressionally mandated role as the national resource center and clearinghouse. NCMEC also received about $10 million for law enforcement training and about $3.3 million for the Exploited Child Unit and the promotion of its CyberTipline. From the appropriated amounts, NCMEC allocated $916,000 to combat child pornography and referred 913 tips concerning peer-to-peer networks to law enforcement agencies. The FBI allocated $38.2 million and 228 agents and support personnel to combat child pornography through its Innocent Images National Initiative. Since fiscal year 1996, the Innocent Images National Initiative has opened 7,067 cases, obtained 1,811 indictments, made 1,886 arrests, and secured 1,850 convictions or pretrial diversions in child pornography cases.
According to FBI officials, they are aware of the use of peer-to-peer networks to disseminate child pornography and have efforts under way to work with some of the peer-to-peer companies to solicit their cooperation in dealing with this issue. CEOS allocated $4.38 million and 28 personnel to combat child exploitation and obscenity offenses. It recently launched the High Tech Investigative Unit, which investigates any Internet medium used to distribute child pornography, including peer-to-peer networks. Customs allocated $15.6 million and over 144,000 hours to combating child exploitation and obscenity offenses. The CyberSmuggling Center is beginning to actively monitor the file sharing of child pornography on peer-to-peer networks and is devoting one half-time investigator to this effort. As of December 16, 2002, the center had sent 21 peer-to-peer investigative leads to the field offices for follow-up action. Four of these leads have search warrants pending, two have been referred to local law enforcement, and five have been referred to foreign law enforcement agencies. In addition, to facilitate the identification of the victims of child pornographers, the CyberSmuggling Center is devoting resources to the National Child Victim Identification Program, a consolidated information system containing seized images that is designed to allow law enforcement officials to quickly identify and combat the current abuse of children associated with the production of child pornography. The system’s database is being populated with all known and unique child pornographic images obtained from national and international law enforcement sources and from CyberTipline reports filed with NCMEC. It will initially hold over 100,000 images that have been collected by federal law enforcement agencies from various sources, including old child pornography magazines.
According to Customs officials, this information will help, among other things, to determine whether actual children were used to produce child pornography images by matching them with images of children from magazines published before modern imaging technology was invented. Such evidence can be used to counter the assertion that only virtual children appear in certain images. The system is housed at the Customs CyberSmuggling Center and is to be accessed remotely in “read only” format by the FBI, CEOS, the U.S. Postal Inspection Service, and NCMEC. An initial version of the system was deployed at the Customs CyberSmuggling Center in September 2002; the system became operational in January 2003. It is easy to access and download child pornography on peer-to-peer networks. Juvenile users of peer-to-peer networks also face a significant risk of inadvertent exposure to pornography, including child pornography. We were unable to determine the extent of federal law enforcement resources available for combating child pornography on peer-to-peer networks; the key law enforcement agencies devote resources to combating child exploitation and child pornography in general, but they do not track the resources dedicated to peer-to-peer technologies in particular. The Assistant Attorney General, Criminal Division, Department of Justice, provided written comments on a draft of this report, which are reprinted in appendix III. The Department of Justice agreed with the report’s findings, provided additional information on the mission and capabilities of the High Tech Investigative Unit (part of its Criminal Division’s Child Exploitation and Obscenity Section), and offered comments on the description and purpose of Customs’ National Child Victim Identification Program. In response, we have revised our report to add these clarifications. We also received written technical comments from the Department of Justice, which we have incorporated as appropriate. 
We received written technical comments from the Assistant Director, Office of Inspection, U.S. Secret Service, and from the Acting Director, Office of Planning, U.S. Customs Service. Their comments have been incorporated in the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have jurisdiction and oversight responsibility for the Departments of Justice and the Treasury. We will also send copies to the Attorney General and to the Secretary of the Treasury. Copies will be made available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-6240 or Mirko J. Dolak, Assistant Director, at (202) 512-6362. We can also be reached by e-mail at koontzl@gao.gov and dolakm@gao.gov, respectively. Key contributors to this report were Barbara S. Collier, James M. Lager, Neelaxi V. Lakhmani, James R. Sweetman, Jr., and Jessie Thomas. Our objectives were to determine the ease of access to child pornography on peer-to-peer networks; assess the risk of inadvertent exposure of juvenile users of peer-to-peer networks to pornography, including child pornography; and determine the extent of federal law enforcement resources available for combating child pornography on peer-to-peer networks. To determine the availability of child pornography on peer-to-peer networks, we used a popular peer-to-peer application—KaZaA—to search for and identify image files that appear to be child pornography. Our analysts used keywords provided by the Customs CyberSmuggling Center. These keywords were intended to identify pornographic images; examples of the keywords include preteen, underage, and incest.
Once the names and titles of image files were gathered, we classified and analyzed them based on file names and keywords. Each file was classified as child pornography, adult pornography, or nonpornographic. For a file to be considered possible child pornography, the title, file name, or both had to include at least one word with a sexual connotation and an age-related keyword indicating that the subject is a minor. Files depicting adult pornography included any file that had words of a sexual nature in the title or file name. No files were downloaded for this analysis. To determine the ease of access, we used three keywords from the initial list to perform another search. The resulting files were downloaded, saved, and analyzed by a Customs agent. Because child pornography cannot be accessed legally other than by law enforcement agencies, we relied on Customs to download and analyze files. Our own analyses were based on keywords and file names only. The Customs agent classified each of the downloaded files into one of four categories: child pornography, child erotica, adult pornography, or nonpornographic. The user with the largest number of shared files that appeared to be child pornography was also identified, and the shared folder was captured. The titles and names of files in the user’s shared directory were then analyzed and classified by a GAO analyst using the same classification criteria used in the original analysis. To assess the risk of inadvertent exposure of juvenile users of peer-to-peer networks to pornography, a CyberSmuggling Center analyst conducted another search using three keywords that are names of popular celebrities and a cartoon character. The Customs analyst performed the search, retrieval, and analysis of the images. Each of the images downloaded was classified into one of five categories: adult pornography, child pornography, child erotica, cartoon pornography, or nonpornographic.
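The two-part classification rule described above (a file is flagged as possible child pornography only if its title or file name contains both a word with a sexual connotation and an age-related keyword) can be sketched in a few lines. This is an illustrative sketch, not GAO's actual tooling: the age-related set reuses example keywords the report cites (preteen, underage), while the sexual-connotation set is a placeholder, since the report does not enumerate that list; treating incest as a sexual-connotation term and "xxx" as a stand-in entry are assumptions.

```python
# Sketch of the file-name classification rule described above.
# The keyword sets are illustrative placeholders, not GAO's actual lists.
AGE_TERMS = {"preteen", "underage"}   # age-related keywords (examples cited in the report)
SEXUAL_TERMS = {"incest", "xxx"}      # sexual-connotation terms (placeholders; "xxx" is invented)

def classify(file_name: str) -> str:
    # Normalize the title/file name into individual words.
    words = file_name.lower().replace("_", " ").split()
    has_age = any(w in AGE_TERMS for w in words)
    has_sexual = any(w in SEXUAL_TERMS for w in words)
    if has_sexual and has_age:
        return "possible child pornography"   # sexual term AND age-related term
    if has_sexual:
        return "adult pornography"            # words of a sexual nature only
    return "nonpornographic"                  # neither class of keyword present

print(classify("underage_incest_clip"))       # -> possible child pornography
print(classify("xxx_movie"))                  # -> adult pornography
print(classify("family_vacation_photo"))      # -> nonpornographic
```

Checking the two keyword classes separately mirrors the rule's structure: a sexual term alone yields the adult pornography category, and a file matching neither set is classified nonpornographic.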
To determine what federal law enforcement resources were allocated to combating child pornography on peer-to-peer networks, we obtained resource allocation data and interviewed officials at the U.S. Customs Service, the Department of Justice’s Child Exploitation and Obscenity Section, and the Federal Bureau of Investigation. We also received information about what resources were being allocated to combat child pornography from the U.S. Secret Service and the National Center for Missing and Exploited Children. We performed our work between July and October 2002 at the U.S. Secret Service in Baltimore, Maryland, and the U.S. Customs Service, Customs CyberSmuggling Center, in Fairfax, Virginia, under the Department of the Treasury; and at the Child Exploitation and Obscenity Section and the Federal Bureau of Investigation, under the Department of Justice, in Washington, D.C. We also worked with the National Center for Missing and Exploited Children in Alexandria, Virginia. Our work was conducted in accordance with generally accepted government auditing standards. Peer-to-peer file-sharing programs represent a major change in the way Internet users find and exchange information. Under the traditional Internet client/server model, access to information and services is accomplished through interaction between users (clients) and servers—usually Web sites or portals. A client is defined as a requester of services, and a server is defined as the provider of services. Unlike the traditional model, the peer-to-peer model enables consenting users—or peers—to directly interact and share information with each other without the intervention of a server. A common characteristic of peer-to-peer programs is that they build virtual networks with their own mechanisms for routing message traffic. The ability of peer-to-peer networks to provide services and connect users directly has resulted in a large number of powerful applications built around this model.
These range from the SETI@home network (where users share the computing power of their computers to search for extraterrestrial life) to the popular KaZaA file-sharing program (used to share music and other files). As shown in figure 4, there are two main models of peer-to-peer networks: (1) the centralized model, based on a central server or broker that directs traffic between individual registered users, and (2) the decentralized model, based on the Gnutella network, in which individuals find and interact directly with each other. As shown in figure 4, the centralized model relies on a central server/broker to maintain directories of shared files stored on the respective computers of the registered users of the peer-to-peer network. When Bob submits a request for a particular file, the server/broker creates a list of files matching the search request by checking the request with its database of files belonging to registered users currently connected to the network. The broker then displays that list to Bob, who can then select the desired file from the list and open a direct link with Alice’s computer, which currently has the file. The download of the actual file takes place directly from Alice to Bob. The broker model was used by Napster, the original peer-to-peer network, facilitating mass sharing of copyrighted material by combining the file names held by thousands of users into a searchable directory that enabled users to connect with each other and download MP3 encoded music files. The broker model made Napster vulnerable to legal challenges and eventually led to its demise in September 2002. Although Napster was litigated out of existence and its users fragmented among many alternative peer-to-peer services, most current-generation peer-to-peer networks are not dependent on the server/broker that was the central feature of the Napster service, so, according to Gartner, these networks are less vulnerable to litigation from copyright owners. 
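The broker model just described can be sketched in a few lines. Everything here (the class name, users, and file names) is illustrative rather than any particular service's protocol; the sketch shows only the division of labor: the broker answers searches from its directory of registered users' files, while the file transfer itself happens directly between peers.

```python
# Minimal sketch of the centralized (broker) peer-to-peer model: the broker
# keeps a directory of each registered user's shared files and answers search
# requests; the actual download happens directly between the two peers.
class Broker:
    def __init__(self):
        self.directory = {}               # user -> set of shared file names

    def register(self, user, files):      # a user connects and lists shared files
        self.directory[user] = set(files)

    def search(self, term):               # return (user, file) pairs matching the term
        return [(user, f)
                for user, files in self.directory.items()
                for f in files if term in f]

broker = Broker()
broker.register("Alice", ["song.mp3", "holiday.jpg"])
broker.register("Carol", ["song_remix.mp3"])

hits = broker.search("song")              # Bob asks the broker for matches...
print(sorted(hits))                       # ...then downloads directly from Alice or Carol
```

Because every search passes through the broker's directory, the broker is both a single point of failure and, as the Napster litigation showed, a single point of legal exposure.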
In the decentralized model, no brokers keep track of users and their files. To share files using the decentralized model, Ted starts with a networked computer equipped with a Gnutella file-sharing program, such as KaZaA or BearShare. Ted connects to Carol, Carol to Bob, Bob to Alice, and so on. Once Ted’s computer has announced that it is “alive” to the various members of the peer network, it can search the contents of the shared directories of the peer network members. The search request is sent to all members of the network, starting with Carol, each of whom will in turn send the request to the computers to which they are connected, and so forth. If one of the computers in the peer network (say, for example, Alice’s) has a file that matches the request, it transmits the file information (name, size, type, etc.) back through all the computers in the pathway toward Ted, and a list of files matching the search request appears on Ted’s computer in the file-sharing program. Ted will then be able to open a connection with Alice and download the file directly from Alice’s computer. One of the key features of Napster and the current generation of decentralized peer-to-peer technologies is their use of a virtual name space (VNS). A VNS dynamically associates user-created names with the Internet address of whatever Internet-connected computer users happen to be using when they log on. The VNS facilitates point-to-point interaction between individuals because it removes the need for users and their computers to know the addresses and locations of other users; the VNS can, to a certain extent, preserve users’ anonymity and provide information on whether a user is or is not connected to the Internet at a given moment. The file-sharing networks that result from the use of peer-to-peer technology are both extensive and complex. Figure 5 shows a map or topology of a Gnutella network whose connections were mapped by a network visualization tool.
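The decentralized search just described, in which each peer forwards the request to its own neighbors until matches are found, can be sketched as a flood with a hop limit; real Gnutella queries carry a time-to-live field for the same reason, so that a request does not circulate forever. The topology and shared files below are invented for illustration and follow the Ted-Carol-Bob-Alice chain from the text.

```python
# Sketch of decentralized (Gnutella-style) search: the query floods from peer
# to peer up to a hop limit; any peer holding a matching file is reported back
# to the requester, who then downloads directly. Topology is illustrative.
NEIGHBORS = {"Ted": ["Carol"], "Carol": ["Ted", "Bob"],
             "Bob": ["Carol", "Alice"], "Alice": ["Bob"]}
SHARED = {"Ted": [], "Carol": [], "Bob": ["notes.txt"], "Alice": ["song.mp3"]}

def flood_search(origin, term, ttl=4):
    """Forward the query to each peer's neighbors, decrementing a hop limit (TTL)."""
    hits, visited, frontier = [], {origin}, [(origin, ttl)]
    while frontier:
        peer, hops = frontier.pop(0)
        hits += [(peer, f) for f in SHARED[peer] if term in f]
        if hops > 0:                       # only forward while the TTL lasts
            for n in NEIGHBORS[peer]:
                if n not in visited:
                    visited.add(n)
                    frontier.append((n, hops - 1))
    return hits

print(flood_search("Ted", "song"))         # Ted learns that Alice has the file
```

Note that with a TTL of 1 the query would die at Carol and Ted would never learn of Alice's file: the hop limit trades search completeness for bounded network traffic.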
The map, created in December 2000, shows 1,026 nodes (computers connected to more than one computer) and 3,752 edges (computers on the edge of the network connected to a single computer). This map is a snapshot showing a network in existence at a given moment; these networks change constantly as users join and depart them.
Broadband. Operating at bandwidths markedly greater than that provided by telephone networks. Broadband networks can carry digital videos or a massive quantity of data simultaneously. In the on-line environment, the term is often used to refer to Internet connections provided through cable or DSL (digital subscriber line) modems.
BearShare. A file-sharing program for Gnutella networks. BearShare supports the trading of text, images, audio, video, and software files with any other user of the network.
Broker. In the peer-to-peer environment, an intermediary computer that coordinates and manages requests between client computers.
Cartoon pornography. Images of cartoon characters engaged in sexual activity.
Chat. Internet program enabling users to communicate through short written messages. Some of the most popular chat programs are America Online’s Instant Messenger and the Microsoft Network Messenger. See instant messaging.
Child erotica. Sexually arousing images of children that are not considered pornographic, obscene, or offensive.
Client/server model. A networking model in which a collection of nodes (client computers) request and obtain services from a server node (server computer).
Gnutella. Decentralized group membership and search protocol, typically used for file sharing; the name also refers to file-sharing programs based on the Gnutella protocol, which enable users to directly share files with one another. Unlike Napster, Gnutella-based programs do not rely on a central server to find files. Gnutella file-sharing programs build a virtual network of participating users.
HTML. The standard language (HyperText Markup Language) used to display information on the Web.
It uses tags embedded in text files to encode instructions for formatting and displaying the information.
Instant messaging (IM). A popular method of Internet communication that allows for an instantaneous transmission of messages to other users who are logged into the same instant messaging service. America Online’s Instant Messenger and the Microsoft Network Messenger are among the most popular instant messaging programs (see chat).
Internet Relay Chat (IRC). Internet chat application allowing real-time conversations to take place via software, text commands, and channels. Unlike Web-based IM, IRC requires special software and knowledge of technical commands (see chat).
IP address. Internet Protocol address; a number that uniquely identifies a computer connected to the Internet to other computers.
KaZaA. A file-sharing program using a proprietary peer-to-peer protocol to share files among users on the network. Through a distributed self-organizing network, KaZaA requires no broker or central server like Napster.
A file-sharing program running on Gnutella networks. It is open standard software running on an open protocol, free for the public to use.
A file-sharing application using the KaZaA peer-to-peer protocol to share files among users on the network.
Morphing. A process whereby one image is gradually transformed into a second image.
MP3 (MPEG-1 Audio Layer-3). A widely used standard from the Moving Pictures Experts Group (MPEG) for compressing and transmitting music in digital format across the Internet. MP3 can compress file sizes at a ratio of about 10:1 while preserving sound quality.
Newsgroups. Discussion groups on Usenet, varying in topic from technical to bizarre. There are over 80,000 newsgroups organized by major areas or domains.
The major domains are alt (any conceivable topic, including pornography); biz (business products and services); rec (games and hobbies); comp (computer hardware and software); sci (sciences); humanities (art and literature); soc (culture and social issues); misc (miscellaneous, including employment and health); and talk (debates on current issues). See Usenet.
Node. A computer or a device that is connected to a network. Every node has a unique network address.
Peer. A network node that may function as a client or a server. In the peer-to-peer environment, peer computers are also called servents, since they perform tasks associated with both servers and clients.
Server. A computer that interconnects client computers, providing them with services and information; a component of the client-server model. A Web server is one type of server.
SETI@home. Search for extraterrestrial intelligence at home. A distributed computing project, SETI@home uses data collected by the Arecibo Telescope in Puerto Rico. The project takes advantage of the unused computing capacity of personal computers. As of February 2000, the project encompassed 1.6 million participants in 224 countries.
Topology. The general structure—or map—of a network. It shows the computers and the links between them.
Usenet. A bulletin board system accessible through the Internet containing more than 80,000 newsgroups. Originally implemented in 1979, it is now probably the largest decentralized information utility in existence (see newsgroups).
Virtual. Having the properties of x while not being x. For example, “virtual reality” is an artificial or simulated environment that appears to be real to the casual observer.
Virtual name space (VNS). Internet addressing and naming system. In the peer-to-peer environment, VNS dynamically associates names created by users with the IP addresses assigned by their Internet service providers to their computers.
World Wide Web. A worldwide client-server system for searching and retrieving information across the Internet. Also known as WWW or the Web.
The availability of child pornography has dramatically increased in recent years as it has migrated from printed material to the World Wide Web, becoming accessible through Web sites, chat rooms, newsgroups, and now the increasingly popular peer-to-peer file-sharing programs. These programs enable direct communication between users, allowing them to access each other's files and share digital music, images, and video. GAO was requested to determine the ease of access to child pornography on peer-to-peer networks; the risk of inadvertent exposure of juvenile users of peer-to-peer networks to pornography, including child pornography; and the extent of federal law enforcement resources available for combating child pornography on peer-to-peer networks. Because child pornography cannot be accessed legally other than by law enforcement agencies, GAO worked with the Customs CyberSmuggling Center in performing searches: Customs downloaded and analyzed image files, and GAO performed analyses based on keywords and file names only. In commenting on a draft of this report, the Department of Justice agreed with the report's findings and provided additional information. Child pornography is easily found and downloaded from peer-to-peer networks. In one search using 12 keywords known to be associated with child pornography on the Internet, GAO identified 1,286 titles and file names, determining that 543 (about 42 percent) were associated with child pornography images. Of the remaining, 34 percent were classified as adult pornography and 24 percent as nonpornographic. In another search using three keywords, a Customs analyst downloaded 341 images, of which 149 (about 44 percent) contained child pornography. These results are in accord with increased reports of child pornography on peer-to-peer networks; since it began tracking these in 2001, the National Center for Missing and Exploited Children has seen a fourfold increase--from 156 in 2001 to 757 in 2002.
Although the numbers are as yet small by comparison to those for other sources (26,759 reports of child pornography on Web sites in 2002), the increase is significant. Juvenile users of peer-to-peer networks are at significant risk of inadvertent exposure to pornography, including child pornography. Searches on innocuous keywords likely to be used by juveniles (such as names of cartoon characters or celebrities) produced a high proportion of pornographic images: in GAO's searches, the retrieved images included adult pornography (34 percent), cartoon pornography (14 percent), child erotica (7 percent), and child pornography (1 percent). While federal law enforcement agencies--including the FBI, Justice's Child Exploitation and Obscenity Section, and Customs--are devoting resources to combating child exploitation and child pornography in general, these agencies do not track the resources dedicated to specific technologies used to access and download child pornography on the Internet. Therefore, GAO was unable to quantify the resources devoted to investigating cases on peer-to-peer networks. According to law enforcement officials, however, as tips concerning child pornography on peer-to-peer networks escalate, law enforcement resources are increasingly being focused on this area.
The NFIP provides property insurance for flood victims, maps the boundaries of the areas at highest risk of flooding, and offers incentives for communities to adopt and enforce floodplain management regulations and building standards to reduce future flood damage. The effective integration of all three of these elements is needed for the NFIP to achieve its goals. These include: providing property flood insurance coverage for the many property owners who would benefit from such coverage; reducing taxpayer-funded disaster assistance for property damage when flooding strikes; and reducing flood damage to properties through floodplain management that is based on accurate, useful flood maps and the enforcement of relevant building standards. Floods are the most common and destructive natural disaster in the United States. According to NFIP statistics, 90 percent of all natural disasters in the United States involve flooding. Our analysis of FEMA data found that over the past 25 years, about 97 percent of the U.S. population lived in a county that had at least one declared flood disaster, and 45 percent lived in a county that had six or more flood disaster declarations. However, flooding is generally excluded from homeowner insurance policies that typically cover damage from other losses, such as wind, fire, and theft. Because of the catastrophic nature of flooding and the difficulty of adequately predicting flood risks, as well as the fact that those who are most at risk are the most likely to buy coverage, private insurance companies have largely been unwilling to underwrite and bear the risk of flood insurance. The NFIP was established by the National Flood Insurance Act of 1968 to provide policyholders with some insurance coverage for flood damage, as an alternative to disaster assistance, and to try to reduce the escalating costs of repairing flood damage.
In creating the NFIP, Congress found that a flood insurance program with the “large-scale participation of the Federal Government and carried out to the maximum extent practicable by the private insurance industry is feasible and can be initiated.” In keeping with this purpose, 92 private insurance companies were participating in the WYO program as of September 2007. NFIP pays these insurers fees to sell and service policies and adjust and process claims. FEMA, which is within the Department of Homeland Security (DHS), is responsible for the oversight and management of the NFIP. We reported in September 2007 that about 68 FEMA employees, assisted by about 170 contract employees, manage and oversee the NFIP and the National Flood Insurance Fund, into which premiums are deposited and claims and expenses are paid. As of April 2007, the NFIP was estimated to have over 5.4 million policies in about 20,300 communities. To ensure that NFIP can cover claims after catastrophic events, FEMA has statutory authority to borrow funds from the Treasury to keep the program solvent. According to FEMA, an estimated $1.2 billion in flood losses are avoided annually because communities have implemented the NFIP’s floodplain management requirements. Flood maps identify the boundaries of the areas that are most at risk of flooding. Property owners whose properties are within special flood hazard areas and who have mortgages from a federally regulated lender are required to purchase flood insurance for the amount of their outstanding mortgage balance, up to the maximum policy limit of $250,000 for single-family homes. According to FEMA, Excess Flood Protection coverage above these amounts is available in the private insurance markets. Personal property coverage is available for contents, such as furniture and electronics, for an additional $100,000. Business owners may purchase up to $500,000 of coverage for buildings and $500,000 for contents. 
The owners of properties with no mortgages or properties with mortgages held by lenders who are not federally regulated are not required to buy flood insurance, even if the properties are in a special flood hazard area. Optional lower-cost coverage is available under the NFIP to protect homes in areas of low to moderate risk. To the extent possible, the NFIP is designed to pay operating expenses and flood insurance claims with premiums collected on flood insurance policies rather than with tax dollars. However, as we have reported, the program, by design, is not actuarially sound because Congress authorized subsidized insurance rates for policies covering some properties in order to encourage communities to join the program. As a result, the program does not collect sufficient premium income to build capital to cover long-term future flood losses. Moreover, the premiums collected are often not sufficient to pay for losses even in years without catastrophic flooding. This shortfall is exacerbated by repetitive loss properties that file repeated claims with NFIP. FEMA’s current debt to the Treasury—over $17.5 billion—is almost entirely for payment of claims from the 2005 hurricanes. Legislation increased FEMA’s borrowing authority from a total of $1.5 billion prior to Hurricane Katrina to $20.8 billion in March 2006. As we have testified previously, it is unlikely that FEMA will be able to repay a debt of this size and cover future claims, given that the program generates premium income of about $2 billion a year, which must first cover ongoing losses and expenses. To date, the program has gone through almost two full seasons without a major hurricane, and according to FEMA, about $524 million of premium income was used to pay interest on the debt owed to the Treasury in 2006.
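A back-of-envelope calculation using only the figures cited above shows why repayment is considered unlikely: the 2006 interest payment alone consumed roughly a quarter of a typical year's premium income, before any flood losses or operating expenses were paid.

```python
# Back-of-envelope check using the figures cited above: FEMA's debt to the
# Treasury, the NFIP's approximate annual premium income, and the portion of
# premium income used to pay interest in 2006.
debt = 17.5e9            # FEMA's debt to the Treasury, in dollars
premiums = 2.0e9         # approximate annual NFIP premium income
interest_2006 = 524e6    # premium income used for interest on the debt in 2006

share = interest_2006 / premiums
print(f"Interest consumed about {share:.0%} of a year's premium income")
# -> Interest consumed about 26% of a year's premium income
# Even with no catastrophic flooding, roughly a quarter of premium income goes
# to interest before any losses or expenses are paid, leaving little or
# nothing to retire the principal.
```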
FEMA officials also noted that because fiscal year 2007 had been a relatively low flood loss year, the agency should be able to pay its next scheduled interest payment from premium income and would not have to borrow additional funds from Treasury to pay interest on its outstanding debt. Attention has been focused on the extent of the federal government’s exposure for claims payments in future catastrophic loss years and on ways to improve the program’s financial solvency. For example, some in Congress have recommended phasing in actuarial rates for vacation homes and nonresidential properties. About 25 percent of NFIP’s over 5.4 million policies have premiums that are substantially less than the true risk premiums. Properties constructed before their communities joined the NFIP and were issued a Flood Insurance Rate Map (or FIRM), which shows the community’s flood risk, are eligible for subsidized rates. These policyholders typically pay premiums that represent about 35 to 40 percent of the true risk premium. In January 2006, FEMA estimated a shortfall in annual premium income because of policy subsidies at $750 million. In response to concerns about the historical basis for the subsidies and questions about the characteristics of the homes receiving subsidies, we were asked by the Ranking Member of this committee to collect certain demographic information about the portfolio of subsidized properties and property owners. This work will provide information on residential pre-FIRM subsidized properties in selected counties of the country. To the extent that reliable data is available, we plan to capture the variations that exist by type of flooding (e.g., coastal or riverine), fair market values for subsidized and nonsubsidized properties in each location, average income levels for each county, claims data for subsidized and nonsubsidized properties in each location, and the mitigation efforts being used. 
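As a rough consistency check, the subsidy figures cited above imply a forgone premium of roughly $550 per subsidized policy per year. The sketch below assumes the midpoint of the stated 35-to-40-percent range; the other numbers come from this statement, and the implied full-risk premium is illustrative only.

```python
# Rough consistency check of the subsidy figures cited above. The 37.5 percent
# paid share is an assumed midpoint of the stated 35-40 percent range; the
# policy count, subsidized share, and shortfall are from the statement.
policies = 5.4e6                      # total NFIP policies
subsidized = 0.25 * policies          # about 25 percent pay subsidized rates
paid_share = 0.375                    # assumed midpoint: share of true risk premium paid
shortfall = 750e6                     # FEMA's January 2006 annual shortfall estimate

gap_per_policy = shortfall / subsidized             # forgone premium per subsidized policy
true_premium = gap_per_policy / (1 - paid_share)    # implied average full-risk premium
print(f"~${gap_per_policy:,.0f} forgone per policy; "
      f"implied full-risk premium ~${true_premium:,.0f}")
# -> ~$556 forgone per policy; implied full-risk premium ~$889
```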
Our work will build upon the work of the Congressional Budget Office on values of properties in the NFIP. As part of this review, we are also examining the extent to which FEMA’s nonsubsidized rates are truly actuarially based. We will assess how NFIP sets rates for its nonsubsidized and subsidized premiums, determine the total premiums the NFIP collects, and compare that amount to claims and related costs. Our analysis of FEMA’s premiums and claims data should help provide insights into how FEMA sets rates. We also have work under way that will provide a description of financial and statistical trends, by flood zone, for the past 10 years. Specifically, we have been asked to describe average premium and claim amounts by flood zone, FEMA’s estimates of likely losses, and the extent to which losses are attributable to repetitive loss properties or hurricanes. We will also describe the extent to which flood-damaged properties have been purchased through NFIP-funded mitigation programs. However, our ability to report on these issues will depend on the quality of FEMA’s claims data. Finally, we are evaluating the adequacy of FEMA’s procedures for monitoring selected contracts that support the NFIP. In reauthorizing the NFIP in 2004, Congress noted that repetitive loss properties—those that had resulted in two or more flood insurance claims payments of $1,000 or more over 10 years—constituted a significant drain on the resources of the NFIP. These repetitive loss properties are problematic not only because of their vulnerability to flooding, but also because of the costs of repeatedly repairing flood damages. Although these properties account for only about 1 percent of NFIP-covered properties, they account for between 25 and 30 percent of claims. As of September 2007 over 70,000 repetitive loss properties were insured by the NFIP. 
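The concentration of losses described above can be illustrated with simple arithmetic; the multipliers below are derived from the cited shares and are illustrative only, not figures reported by FEMA.

```python
# Illustrative sketch of repetitive-loss concentration, using the shares
# cited above; the multipliers are derived, not reported by FEMA.
property_share = 0.01                              # ~1 percent of covered properties
claims_share_low, claims_share_high = 0.25, 0.30   # 25 to 30 percent of claims

low_factor = claims_share_low / property_share     # 25x
high_factor = claims_share_high / property_share   # 30x

print(f"Repetitive loss properties account for {low_factor:.0f} to "
      f"{high_factor:.0f} times their share of claims")
```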
The 2004 Flood Insurance Reform Act authorized a 5-year pilot program to encourage mitigation efforts on severe repetitive loss properties in the NFIP. According to FEMA, as of September 2007 about 8,100 properties insured by the NFIP were categorized as severe repetitive loss properties. Under the pilot, FEMA is required to adjust its rules and rates to ensure that homeowners pay higher premiums if they refuse an offer to mitigate the property. The pilot program was funded in fiscal year 2006, but according to FEMA officials, FEMA has not yet developed the regulations, guidance, and administrative documents necessary for implementation. FEMA is also creating a new generation of properties that may not pay risk-based premiums. Properties that are remapped into higher flood risk areas may be able to keep, or “grandfather,” the nonsubsidized rates associated with their risk level before being remapped. As a result, eligible property owners who have an existing policy or who purchase new flood insurance policies before they are mapped into higher-risk areas will go on paying the same nonsubsidized premium rate. Moreover, these grandfathered rates can be permanent. Although this option is a major selling point for encouraging broader participation in the program, such actions may further erode the actuarial soundness and financial stability of the program. From 1968 until the adoption of the Flood Disaster Protection Act of 1973, buying flood insurance was voluntary. However, voluntary participation in the NFIP was low, and many flood victims did not have insurance to repair damages from floods in the early 1970s. In 1973 and again in 1994, Congress enacted laws requiring that some property owners in special flood hazard areas buy NFIP insurance. 
The owners of properties with no mortgages or properties with mortgages held by lenders that were not federally regulated were not, and still are not, required to buy flood insurance, even if the properties are in special flood hazard areas. As we have reported in the past, viewpoints differ about whether lenders were complying with the flood insurance purchase requirements, primarily because the officials we spoke with did not use the same types of data to reach their conclusions. For example, federal bank regulators and lenders based their belief that lenders were generally complying with the NFIP’s purchase requirements on regulators’ examinations and reviews that were conducted to monitor and verify lender compliance. In contrast, FEMA officials believed that many lenders frequently were not complying with the requirements, an opinion that they based largely on estimates computed from data on mortgages, flood zones, and insurance policies; limited studies on compliance; and anecdotal evidence indicating that insurance was not always purchased when it was required. At the time of our report in 2002, neither side was able to substantiate these claims with statistically sound data. However, a 2006 FEMA-commissioned study estimated that compliance with the mandatory purchase requirement, under plausible assumptions, was 75 to 80 percent in special flood hazard areas for single-family homes that had a high probability of having a mortgage. The analysis conducted did not provide evidence that compliance declined as mortgages aged. At the same time, the study showed that about half of single-family homes in special flood hazard areas had flood insurance. The 2006 study also found that while one-third of NFIP policies were written outside of special flood hazard areas, the market penetration rate there was only about 1 percent. Yet according to FEMA about half of all flood damage occurs outside of high-risk areas. 
FEMA has efforts under way to increase participation by improving the quality of information that is available on the NFIP and on flood risks and by marketing to retain policyholders currently in the program. In October 2003, FEMA contracted for a new integrated mass marketing campaign called “FloodSmart” to educate the public about the risks of flooding and to encourage the purchase of flood insurance. Marketing elements being used include direct mail, national television commercials, print advertising, and Web sites that are designed for communities, consumers, and insurance agents. According to FEMA officials, in the little more than 3 years since the contract began, net policy growth has been almost 24 percent, and policy retention has improved from 88 percent to almost 92 percent. However, the success of the program will be measured by retention rates as policyholders’ memories of the devastation from Hurricane Katrina begin to fade over time. Accurate flood maps that identify the areas that are at greatest risk of flooding are the foundation of the NFIP. These maps, which show the extent of flood risk across the country, allow the program to determine high-risk areas for designation both as special hazard zones and as areas that can benefit the most from mitigation. Flood maps must be periodically updated to assess and capture changes in the boundaries of floodplains resulting from community growth, development, erosion, and other factors that affect the boundaries of areas at risk of flooding. The maps are principally used by (1) the communities participating in the NFIP, to adopt and enforce the program’s minimum building standards for new construction within the maps’ identified floodplains; (2) FEMA, to develop flood insurance policy rates based on flood risk; and (3) federally regulated mortgage lenders, to identify those property owners who are statutorily required to purchase federal flood insurance. 
As we reported in 2004, FEMA has embarked on a multiyear effort to update the nation’s flood maps at a cost in excess of $1 billion. At that time we noted that NFIP faced major challenges in working with its contractor and state and local partners to produce accurate digital flood maps. FEMA has taken steps to improve these working relationships by developing a number of guidelines and procedures. According to FEMA, the agency has developed a plan for prioritizing and delivering modernized maps nationwide, including developing risk-based mapping priorities. Moreover, FEMA has recognized that a maintenance program will be needed to keep the maps current and relevant. For example, several strategies are under consideration for maintaining map integrity, including reviewing the flood map inventory every 5 years, as required by law; updating data and maps more regularly, as needed; addressing any unmet flood mapping needs and assessing the quality and quantity of maps; and examining risk management more broadly. However, the effectiveness of these strategies will depend on available funding and FEMA’s ongoing commitment to ensuring the integrity of the maps. As of September 2007 FEMA had remapped 34 percent of its maps. To meet its monitoring and oversight responsibilities, FEMA is required to conduct periodic operational reviews of the private insurance companies that participate in the WYO program. In addition, FEMA’s program contractor is required to check the accuracy of claims settlements by doing quality assurance reinspections of a sample of claims adjustments for every flood event. For operational reviews, FEMA examiners must thoroughly examine the companies’ NFIP underwriting and claims settlement processes and internal controls, including checking a sample of claims and underwriting files to determine, for example, whether a violation of procedures has occurred, an incorrect payment has been made, or a file does not contain all required documentation. 
Separately, FEMA’s program contractor is responsible for conducting quality assurance reinspections of a sample of claims adjustments for specific flood events in order to identify, among other things, expenses that were paid that were not covered and covered expenses that were not paid. In our December 2006 report, we found that a new claims handling process aided the claims handling following the 2005 hurricane season and resulted in few complaints. As a result, 95 percent of claims were closed by May 2006, a time frame that compared favorably with those of other, smaller recent floods. However, we noted that FEMA had not implemented a recommendation from a prior report that it do quality reinspections based on a random sample of all claims. We also found that FEMA had not analyzed the overall results of the quality reinspections following the 2005 hurricane season. In response, FEMA has agreed to (1) analyze the overall results of the reinspection reports on the accuracy of claims adjustments for future events, and (2) plan its reinspections based on a random sample of claims. FEMA faces challenges in providing effective oversight of the insurance companies and thousands of insurance agents and claims adjusters that are primarily responsible for the day-to-day process of selling and servicing flood insurance policies. For example, as we reported in September 2007, 94 WYO insurance companies had written 96 percent of the flood insurance policies for the NFIP as of December 2006, up from the 48 companies that were writing 50 percent of the policies in 1986. We also reported that for fiscal years 2004 through 2006, total operating costs that FEMA paid to the WYO insurance companies ranged from $619 million to $1.6 billion, or from more than a third to almost two-thirds of the total premiums paid by policyholders to the NFIP, as a result of unprecedented flood losses caused by the 2005 hurricanes. 
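The operating-cost shares cited above imply rough totals for the premiums paid in those years; the back-calculated amounts in this sketch are illustrative derivations, not figures from the report.

```python
# Back-calculation from the WYO operating-cost shares cited above; the
# implied premium totals are illustrative, not figures from the report.
low_cost, low_share = 619e6, 1 / 3     # "more than a third" of premiums
high_cost, high_share = 1.6e9, 2 / 3   # "almost two-thirds" of premiums

implied_low = low_cost / low_share     # about $1.9 billion
implied_high = high_cost / high_share  # about $2.4 billion

print(f"Implied total premiums, low-cost year:  ${implied_low / 1e9:.1f} billion")
print(f"Implied total premiums, high-cost year: ${implied_high / 1e9:.1f} billion")
```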
FEMA regulations require each participating company to arrange and pay for biennial financial statement audits by independent certified public accounting firms. However, many WYO insurance companies have not complied with this schedule in recent years. For example, for fiscal years 2005 and 2006, only 5 of 94 participating companies had the required audits performed. In response to our recommendations, FEMA has agreed to take steps to ensure that it has reasonable estimates of the actual expenses that WYO insurance companies incurred to help determine whether payments for services are appropriate and that required financial audits are performed. Building on this body of work, we are beginning a follow-up engagement that will analyze the expenses WYO insurance companies incur from selling and servicing NFIP policies and determine whether the total operating costs paid to the companies are equitable relative to those costs. We will also examine how FEMA oversees the WYO program, including reinspecting claims and performing operational reviews. Finally, we will evaluate alternatives for selling and servicing flood insurance policies and processing claims. We are also completing an engagement that looks at the inherent conflict of interest that exists when a WYO insurance company sells both property-casualty and flood policies to a single homeowner who is subject to a multiple peril event such as a hurricane. We testified before the House Committees on Financial Services and Homeland Security in June 2007 about our preliminary views on the sufficiency of data available to and collected by FEMA to ensure the accuracy of claims payments. FEMA has determined that it does not have the authority to collect wind damage claims data from WYO insurance companies, even when the insurer services both the wind and flood policies on the same property. Hence, FEMA generally does not know the extent to which wind may have contributed to total property damages. 
However, FEMA officials do not believe that the agency needs to know the dollar amount of wind damages paid by a WYO insurance company to verify the accuracy of a flood claim. While they may not need this information for many flood claims, the inherent conflict of interest that exists when a single WYO insurance company is responsible for adjusting both the wind and flood claim on a single property calls for the institution of strong internal controls to ensure the accuracy of FEMA’s claims payments. Without internal controls that include access to the entire claim file for certain properties (both wind and flood), FEMA’s ability to confirm the accuracy of certain flood claims may be limited. The DHS Inspector General is currently examining this issue by reviewing both wind and flood claims on selected properties, but its interim report, issued in July 2007, was generally inconclusive. As our prior work reveals, FEMA faces a number of ongoing challenges in managing the NFIP that, if not addressed, will continue to threaten the program’s financial solvency even if the program’s current debt is forgiven. As we noted when we placed the NFIP on the high-risk list in 2006, comprehensive reform will likely be needed to stabilize the long-term finances of this program. Our ongoing work is designed to provide FEMA and Congress with useful information to help assess ways to improve the sufficiency of NFIP’s financial resources and its current funding mechanism, mitigate expenses from repetitive loss properties, increase compliance with mandatory purchase requirements, and expedite FEMA’s flood map modernization efforts. As you well know, placing the program on more sound financial footing involves a set of highly complex, interrelated issues that are likely to involve many trade-offs. 
For example, increasing premiums to better reflect risk would put the program on a sounder financial footing but could also reduce voluntary participation in the program or encourage those who are required to purchase flood insurance to limit their coverage to the minimum required amount (i.e., the amount of their outstanding mortgage balance). As a result, taxpayer exposure for disaster assistance resulting from flooding could increase. As we have said before, meeting the NFIP’s current challenges will require sound data and analysis and the cooperation and participation of many stakeholders. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you and the Committee Members may have. Contact point for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Orice M. Williams at (202) 512-8678 or williamso@gao.gov. This statement was prepared under the direction of Andy Finkel. Key contributors were Emily Chalmers, Martha Chow, Nima Patel Edwards, Grace Haskins, Lisa Moore, and Roberto Pinero. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Flood Insurance Program (NFIP), established in 1968, provides property owners with some insurance coverage for flood damage. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security is responsible for managing the NFIP. Given the challenges facing the NFIP and the need for legislative reform to ensure the financial stability and ongoing viability of this program, GAO placed the NFIP on its high-risk list in March 2006. This testimony updates past work and provides information about ongoing GAO work on issues including (1) NFIP's financial structure, (2) the extent of compliance with mandatory requirements, (3) the status of map modernization efforts, and (4) FEMA's oversight of the NFIP. Building on our previous and ongoing work on the NFIP, GAO collected data from FEMA to update efforts, including information about claims, policies, repetitive loss properties, and mitigation efforts. The most significant challenge facing the NFIP is the actuarial soundness of the program. As of August 2007, FEMA owed over $17.5 billion to the U.S. Treasury. FEMA is unlikely to be able to pay this debt, primarily because the program's premium rates have been set to cover an average loss year, which until 2005 did not include any catastrophic losses. This challenge is compounded by the fact that some policyholders with structures that were built before floodplain management regulations were established in their communities generally pay premiums that represent about 35 to 40 percent of the true risk premium. Moreover, about 1 percent of NFIP-insured properties that suffer repetitive losses account for between 25 and 30 percent of all flood claims. FEMA is also creating a new generation of "grandfathered" properties--properties that are mapped into higher-risk areas but may be eligible to receive a discounted premium rate equal to the nonsubsidized rate for their old risk designation. 
Placing the program on a more sound financial footing will involve trade-offs, such as charging more risk-based premiums and expanding participation in the program. The NFIP also faces challenges expanding its policyholder base by enforcing compliance with mandatory purchase requirements and promoting voluntary purchase by homeowners who live in areas that are at lower risk. One recent study estimated that compliance with the mandatory purchase requirement was about 75 to 80 percent but that penetration elsewhere in the market was only 1 percent. Since 2004, FEMA has implemented a massive media campaign called "FloodSmart" to increase awareness of flood risk nationwide by educating the public about the risks of flooding and encouraging the purchase of flood insurance. While the numbers of policyholders increased following Hurricane Katrina, it is unclear whether these participants will remain in the program as time goes on. The impact of the 2005 hurricanes highlighted the importance of up-to-date flood maps that accurately identify areas at greatest risk of flooding. These maps are the foundation of the NFIP. In 2004 FEMA began its map modernization efforts, and according to FEMA, about 34 percent of maps have been remapped. Completing the map modernization effort and keeping these maps current is also going to be an ongoing challenge for FEMA. Finally, FEMA also faces significant challenges in providing effective oversight of the insurance companies and thousands of insurance agents and claims adjusters who are primarily responsible for the day-to-day process of selling and servicing flood insurance policies. As GAO recommended in an interim report issued in September 2007, FEMA needs to take steps to ensure that it has a reasonable estimate of actual expenses that the insurance companies incur to help determine whether payments for services are appropriate and that required financial audits are performed. 
GAO, in its ongoing work, plans to further explore FEMA oversight of the private insurance companies and the cost of selling and servicing NFIP flood policies.
FFRDCs are private sector organizations funded primarily by federal agencies to meet a special long-term research and development need that cannot be met as effectively by existing in-house or contractor resources. One federal agency serves as the primary sponsor of the FFRDC and signs an agreement specifying the purpose, terms, and other provisions for the FFRDC’s existence. Agreement terms cannot exceed 5 years but can be extended after a review by the sponsor of the continued use and need for the FFRDC. Federal regulations state that an FFRDC is required to conduct its business in a manner befitting its special relationship with the government, operate in the public interest with objectivity and independence, be free from organizational conflicts of interest, and have full disclosure of its affairs to the sponsoring agency. The Aerospace Corporation is a private, nonprofit mutual benefit corporation created in 1960. Aerospace’s primary purpose is to provide scientific and engineering support for the U.S. military space program. Aerospace operates an FFRDC in support of U.S. national security space programs pursuant to the Federal Acquisition Regulation (FAR), and its primary sponsor is the Air Force. Aerospace is governed by a 16-member Board of Trustees in accordance with its articles of incorporation and bylaws. The Air Force Space and Missile Systems Center (SMC), a part of the Air Force Materiel Command, has day-to-day management responsibility over the FFRDC. Through fiscal year 1994, SMC negotiated annual cost-plus-fixed-fee contracts with Aerospace. In fiscal year 1995, it began operating under a cost-plus-award-fee contract where the amount of fee is based on Aerospace’s performance. Table 1 shows the contract costs and fees awarded to Aerospace between fiscal years 1989 and 1994. Although SMC is the primary customer, Aerospace also performs work for other U.S. government agencies, international organizations, and foreign governments. 
In fiscal year 1993, for example, Aerospace’s reported revenues totaled $422.2 million, of which 97.4 percent came from the Air Force and other DOD agencies; 2.2 percent from other federal agencies, such as the National Aeronautics and Space Administration; and 0.4 percent from nonfederal government sources, such as universities and foreign governments. The Office of Federal Procurement Policy Letter 84-1, dated April 1984, established governmentwide policies for the establishment, use, review, and termination of the sponsorship of FFRDCs. It provides that the conditions affecting the negotiation of fee should be identified in the contract, sponsoring agreement with the FFRDC, or the sponsoring agency’s policies and procedures, as appropriate. The FAR also requires that the sponsoring agreement or the sponsoring agency’s policies and procedures identify the considerations that will affect the negotiation of fee when fee is determined to be appropriate. The Defense Acquisition Regulation Supplement (DFARS) provides more specific guidance for determining whether a fee is appropriate and how the fee is to be determined. Since FFRDCs may incur expenses that are unreimbursable under federal regulations, the DFARS allows for a fee to cover unreimbursed expenses if they are deemed ordinary and necessary to the FFRDC. An SMC contracting office instruction provides the fee determination procedures to be used on the Aerospace contract. It states that the fee is to be based on a need that must be justified and that the fee must be used for the purposes awarded. In fiscal year 1993, Aerospace reported that it used about $11.5 million of the $15.5 million Air Force contract fee to sponsor research. It used the remainder of the fee, along with other corporate resources, for capital equipment purchases, real and leasehold property improvements, and other unreimbursed expenditures. 
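The fee and revenue figures above can be checked with simple arithmetic. This sketch uses the fiscal year 1993 amounts cited in this section; the computed research share is a derived check, not a separately reported figure.

```python
# Sketch checking the fiscal year 1993 figures cited above: revenue
# shares sum to 100 percent, and sponsored research consumed about
# three-quarters of the $15.5 million Air Force contract fee.
revenue_shares = {"dod": 97.4, "other_federal": 2.2, "nonfederal": 0.4}
assert round(sum(revenue_shares.values()), 1) == 100.0

fee_total = 15.5e6
research_from_fee = 11.5e6
research_share = research_from_fee / fee_total   # about 74 percent

print(f"Share of the Air Force fee used for sponsored research: {research_share:.0%}")
```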
According to Aerospace officials, the fee from Air Force contracts is combined with funds from other sources in Aerospace’s accounting records. Therefore, it is not possible to link each specific use of Aerospace’s funds to the specific funding source. However, Aerospace and SMC have a general understanding that sponsored research is to be paid from the Air Force contract fee. Also, the accounting standards and principles governing Aerospace do not require it to match funding use with funding source. Table 2 shows Aerospace’s actual sources and applications of funds in fiscal year 1993. Aerospace used the Air Force fee primarily to sponsor research with broader and longer term goals than the more immediate, direct goals of individual Air Force program offices. In fiscal year 1993, Aerospace spent about $11.5 million, or 74 percent, of the Air Force fee for sponsored research. According to Aerospace officials, these funds were used for research in such areas as electronic device technology, surveillance, and information sciences. They added that sponsored research has resulted in cost savings for the Air Force space program. For example, Aerospace attributed to such research a 50-percent increase in the life expectancy of satellite sensors in the Defense Meteorological Satellite Program. It credited such research with developing remedial procedures to extend the life of satellite batteries, which have historically contributed to limiting the life of the satellite. Aerospace also cited many other research benefits, such as combining missions on a single spacecraft system and using commercial parts and techniques. One long-standing FFRDC issue has been whether to fund sponsored research as a cost-reimbursable item or out of fee. 
A 1962 report to the President on government contracting for research and development, known as the Bell report, supported the continuation of fee payments for research because most nonprofit organizations must conduct some independent, self-initiated research if they are to attract and retain staff. On the other hand, an August 1965 congressional report on Aerospace noted that some research would normally be a reimbursable expense and therefore all of the research could be provided under reimbursement. Similarly, in December 1994, the DOD Inspector General concluded that FFRDC-sponsored research should be reimbursed as contract costs to the extent that is allowable and reasonable. Most recently, in May 1995, a DOD study, completed at the direction of the Congress, focused on ways to limit the use of fee. It recommended that all allowable and allocable costs, including research, be considered as reimbursable costs rather than paid from fee. Although Aerospace believes that either funding approach is correct, it maintains that research is best funded through fee rather than reimbursed as a cost item. It said that making research a cost-reimbursable item would decrease the responsibility of Aerospace management and the Board of Trustees over independent research and increase administrative burden. Also, Aerospace expressed concern that Air Force program managers may not want to fund certain research because these managers may have more immediate goals than those for Aerospace’s research program. Air Force officials said they acknowledged Aerospace’s expertise and plan to use it to the maximum extent possible regardless of the funding mechanism. Reimbursing research as a cost item would not necessarily reduce total Air Force contract costs, according to DOD. However, it would subject all research to the FAR cost principles applicable to cost-reimbursable items. 
Regardless of how Aerospace’s research program is funded, Air Force and Aerospace officials acknowledged that the program’s effectiveness in meeting Air Force needs could be improved. Air Force officials said that the benefits from research could be increased by strengthening Air Force and Aerospace coordination on project selection. According to Aerospace, the effectiveness of the program will be improved as a result of recent steps taken to improve the research selection process. These include (1) a formal collection of prioritized Air Force Technology Need Statements, (2) Air Force participation on Aerospace’s Technical Program Committee, and (3) a formal briefing by Aerospace to the Air Force demonstrating the relationship between selected research projects and the Air Force’s prioritized technology needs. In fiscal year 1993, Aerospace spent $18.1 million of its working capital funds for capital equipment ($14.3 million) and for real and leasehold property improvements ($3.8 million). Aerospace officials said these expenditures were funded from reimbursements for depreciation and amortization ($14.5 million) and the Air Force contract fee ($3.6 million). The FAR allows as reimbursable costs depreciation of capital equipment and amortization of real property and leasehold improvements. SMC defines capital equipment as an asset that has an estimated useful life of over 2 years and a cost of $1,500 or more. It includes those items that Aerospace generally uses to support Air Force contracts but are not purchased in direct support of an individual project, such as computer hardware and bundled software and laboratory diagnostic and test tools. Capital equipment used in direct support of an individual Air Force project is charged as other direct costs in the year acquired rather than depreciated. DOD’s May 1995 report to the Congress recommended requiring FFRDCs to submit an annual 5-year capital acquisition plan. 
According to Aerospace, such a plan may be impracticable due to rapid changes in personnel, technology, and equipment. Real and leasehold property improvements include building rehabilitation projects, building equipment replacement, security and safety requirements, new operational requirements, and seismic upgrades to meet earthquake protection standards. On the Aerospace contract, leasehold amortization has been a reimbursable cost, whereas building depreciation has been funded primarily through the Air Force contract fee and other corporate funds. In fiscal year 1993, Aerospace spent $1.9 million from fee and other corporate funds on unreimbursed costs that it considered ordinary and necessary to the FFRDC. Some of these expenses were for contributions, travel in excess of per diem, spouse and guest meals, personal use of company-furnished automobiles, and advertising. Table 3 summarizes Aerospace’s unreimbursed expenditures in fiscal year 1993. According to Aerospace officials, new business expenses are incurred to broaden Aerospace’s involvement in non-DOD business to provide employment and operational stability for Aerospace during periods of declining DOD budgets. The officials said more non-DOD business was needed because it has been impossible for Aerospace to maintain employment stability in an environment of budget ceilings and reduced DOD funding. Further, they said that broadening the corporation’s non-DOD business base helps slow attrition and retain the skills and capabilities needed to support the Air Force’s space mission. Aerospace noted that employment has declined by 27 percent since 1990. Aerospace believes that inadequate staffing levels could increase the risk of an expensive program failure, which could lead to a serious degradation of national security readiness. Aerospace also said that a broader business base reduces the overhead costs allocated to Air Force contracts. 
According to Aerospace, the precedent for new business development was set in the late 1960s. At the time, DOD encouraged FFRDCs to make their services available to other government agencies so that they would transfer their technical expertise to the civilian sector. SMC officials recognize the benefits of new business development expenses in retaining Aerospace’s core capabilities and reducing overhead costs. As a result, the officials said they negotiated reasonable and cost-effective limits on new business expenses in the contracts with Aerospace. For example, they agreed to provide $400,000 for cost-reimbursable new business expenses in fiscal year 1993. The officials said they made clear to Aerospace that any new business expenses in excess of the contract limit were not reimbursable and could not be charged to the Air Force contract fee. However, such restrictions were not expressly incorporated into Aerospace’s contract. In addition to the $400,000, Aerospace spent $551,500 on new business expenses. Aerospace officials said that $521,500 came from corporate funds other than the Air Force contract fee and $30,000 was charged directly to other contracts. Table 4 shows the new business expenses incurred by Aerospace during fiscal years 1990 through 1994. For fiscal year 1995, Aerospace proposed $2.5 million in cost-reimbursable new business expenses and $400,000 for bid and proposal expenses. Aerospace officials said that allocating about 1 percent of its contracts’ value for new business was not unreasonable given the continued reduction in budget ceilings; the government’s commitment in the sponsoring agreement to a special, long-term relationship; and the avoidance of costs associated with potential reductions in force. 
SMC officials said they negotiated into the contract a cost-reimbursable amount of $1.2 million for both new business expenses and bid and proposal expenses, which they believed was an appropriate amount for the anticipated benefits to the Air Force. Air Force Headquarters officials indicate that they intend to tightly control all non-FFRDC/non-DOD business activities of Aerospace. Aerospace officials said contributions help in hiring quality employees, advancing affirmative action goals, and maintaining favorable relationships within the community. Major cash contributions were broadly categorized as either “community affairs participation” or “gift matching program,” and Aerospace spent $307,000 and $255,000 for these categories, respectively, in fiscal year 1993. Under the FAR, contributions generally are not reimbursable costs. Accordingly, Aerospace’s contributions were not reimbursed as cost items but were funded from its corporate funds, which included the Air Force fee. Aerospace officials said contributions were ordinary and necessary business expenditures that were fully disclosed to the Air Force. As a result of restrictions on charitable contributions contained in the fiscal year 1995 National Defense Authorization Act, Aerospace and SMC agreed that Aerospace would not make any further charitable contributions from funds obtained from DOD. This agreement was incorporated in the fiscal year 1995 contract. Miscellaneous expenditures from corporate funds totaled $308,000 for fiscal year 1993 and included $58,700 for the personal use of company cars, $143,100 for conference meals and trustee expenses, and $106,200 for other expenses. Aerospace corporate officers were provided company cars. The FAR states that the costs of automobiles owned or leased by the contractor are allowable if they are reasonable and the cars are used for company business. 
Costs relating to the personal use of vehicles by employees (including transportation to and from work) are unallowable. According to Aerospace, the $58,700 charged to corporate funds for the personal use of company cars was primarily for transportation to and from work and was reported as taxable employee income. Unreimbursed conference meals and trustee expenses of $143,100 in fiscal year 1993 included unallowable costs, such as meals for spouses and guests, that were incurred at trustee and other meetings. For example, Aerospace included unreimbursed costs of over $4,000 for 36 spouses and guests at the Collier Award banquet for the Air Force/industry team that developed the Global Positioning System, of which Aerospace was a key member. Aerospace said these unreimbursed expenses of $143,100 were ordinary and necessary. Similar expenditures were also incurred in fiscal year 1994, including bar charges of $1,764 for 63 people at a dinner reception during a trustee meeting in March 1994. Aerospace also incurred $106,200 for other miscellaneous expenditures in fiscal year 1993 that included advertising, employee recreation activities, and donations of capital equipment. Aerospace said travel expenditures in excess of per diem rates included $25,000 for airline coupons used to provide business and first-class upgrades for its corporate officers. Under the FAR, airfare costs in excess of the lowest customary standard coach or equivalent airfare offered during normal business hours are generally unallowable for cost reimbursement. Accordingly, Aerospace did not submit the costs of upgrades as cost-reimbursable items, although it obtained SMC approval in 1992 to upgrade to business-class air accommodations for corporate officers on trips longer than 2 hours. SMC accepted Aerospace’s justification that these upgrades would enhance officers’ productivity. 
SMC officials said they might need to reevaluate whether airline upgrades should be cost-reimbursable items due to DOD’s study to limit fee and new federal guidelines on travel costs. Interest expense at Aerospace amounted to $22,000 in fiscal year 1993. Although neither the FAR nor DFARS specifically defines what are ordinary and necessary expenses for FFRDCs, the contract operating instruction at SMC cites interest expense as an example of an unallowable but ordinary and necessary cost of FFRDC operations. For purposes of this report, sundry includes $422,000 in costs for (1) certain executive salary and benefits, (2) relocation and special recruiting expenses, (3) achievement awards, (4) educational assignments, and (5) bids and proposals. According to Aerospace, some of these costs are allowable for cost reimbursement under the FAR. However, Aerospace said it paid the costs out of corporate funds to avoid potential controversies with the Air Force or the Defense Contract Audit Agency regarding the costs’ allowability. For example, through fiscal year 1993, Aerospace charged to corporate funds the portion of the president/chief executive officer’s salary that exceeded the salary for Executive Schedule Level II. Aerospace said it charged the president/chief executive officer’s entire salary as a cost-reimbursable item in fiscal year 1994. Existing federal regulations provide general guidance regarding how fee is to be determined, but do not restrict how a fee may be used or define what are ordinary and necessary expenses. Further, neither the Air Force sponsoring agreement with Aerospace nor the annual contract specifies how a fee may be used. Although the Air Force and Aerospace discuss Aerospace’s need for fee and planned use of fee by cost category, Aerospace exercises some discretion in spending the fee and determining what expenditures funded from fee are ordinary and necessary. 
Since Aerospace’s fee is based on its need, the manner in which Aerospace uses its corporate resources, including fee, in any one year may affect its need for an Air Force fee in the following year. Aerospace stated that even though it has discretion regarding the use of all corporate resources, including Air Force fee, it attempts to use the resources in a manner that is consistent with the plan presented to the Air Force. Aerospace officials told us they recognize that if Aerospace used its resources in a manner that was inconsistent with the plan discussed with the Air Force, the Air Force might attempt to negotiate a reduced fee in subsequent years. In this regard, Air Force officials told us Aerospace has an inherent responsibility to spend its fee in accordance with the justification of its need, even though it is not specifically required by the contract to do so. In establishing the fee provided to defense FFRDCs, the DFARS says that consideration should be given to funding unreimbursed costs deemed ordinary and necessary to the FFRDC. DOD’s May 1995 report on FFRDC fee management recognized that the guidance in the FAR and DFARS concerning the granting of FFRDC fees is not clear about what unreimbursed costs are considered ordinary and necessary to FFRDC operations. The report recommended that new guidance be developed and that the use of the undefined and ambiguous term “ordinary and necessary” be avoided. The report also recognized the need for specific examples of appropriate fee use. Implementing this recommendation should provide the Air Force with a better basis for negotiating fee award. An agreed-upon definition of ordinary and necessary expenses would assist contracting officers in resolving issues with other defense FFRDCs. However, as long as moneys provided through Air Force fee are commingled with other funding sources, the Air Force may have difficulty determining how Aerospace used its FFRDC fee. 
In commenting on a draft of this report, DOD stated that it did not dispute the facts contained in the report and indicated that the report would be helpful in the ongoing DOD efforts to strengthen FFRDC oversight and use of management fees. However, DOD said that none of the data in the report represented improper or illegal activity, as defined by existing statute or regulation, on the part of DOD or Aerospace. DOD further commented that it was taking positive steps to improve its FFRDC fee management process. For example, it said that in the fiscal year 1996 Aerospace contract, the Air Force would address specific uses of fee, such as personal use of cars and travel-related items, through contract provisions or by disallowing the expense. Further, DOD said it was actively working to improve the fee management process based on the findings and recommendations made in DOD’s May 1995 report on fee management, as well as work done by us and the DOD Inspector General. DOD’s comments are included in their entirety in appendix I. Aerospace provided specific language clarifications. These changes were incorporated where appropriate. We examined Aerospace’s proposed fee expenditures and the Air Force’s and Defense Contract Audit Agency’s evaluations of Aerospace’s proposals, including audit reports, supporting workpapers, technical evaluations, and Air Force’s price negotiation memorandums. We also examined documentation supporting the nature and purpose of selected actual fee expenditures. Further, we obtained the views of Aerospace’s officials and cognizant Defense Contract Audit Agency and Air Force program and contracting officials at Aerospace on factors affecting the use of fee. 
To determine the regulatory requirements governing the determination and use of fee, we reviewed applicable Office of Federal Procurement Policy guidance; FAR and DFARS provisions; Air Force operating instructions and procedures; and Air Force correspondence, contracts, and sponsoring agreement with Aerospace. We reviewed Aerospace’s use of fee for fiscal year 1993 because, at the time we began our work, it was the most recently completed year for which Aerospace had submitted its schedule of unreimbursed expenditures. We also exchanged information with DOD staff involved in the congressionally mandated DOD study on FFRDC fees during their study of the current fee determination process and fee management issues. We conducted our work from October 1994 to July 1995 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretary of Defense; the Director, Office of Management and Budget; the Administrator, Office of Federal Procurement Policy; and other interested congressional committees. Copies will also be available to others upon request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II.

Odi Cuero
Benjamin H. Mannen
Ambrose A. McGraw

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed how the Aerospace Corporation used a $15.5-million contract fee provided by the Air Force in fiscal year 1993 to operate a federally funded research and development center (FFRDC), focusing on the regulatory requirements governing the determination and use of this fee. GAO found that: (1) Aerospace spent $11.5 million, or 74 percent, of its $15.5-million fee for research; (2) Aerospace spent the remaining $4 million for capital equipment purchases, real and leasehold property improvements, and unreimbursed expenses; (3) even though the Air Force and Aerospace discuss Aerospace's specific fee needs and intended use as a basis for fee award, the contract contains the total fee amount; (4) once the Air Force awards the fee, Aerospace exercises some discretion over how to spend it and other sources of corporate funds, such as interest income and fee from other contracts; (5) the manner in which Aerospace spends its corporate funds in a given year can affect how much Air Force fee is needed in the following year; (6) in May 1995, the Department of Defense (DOD) issued a report to the Congress on fee management at defense FFRDCs; (7) the report focused on ways to limit the use of fee and recommended, among other things, that: (a) defense FFRDC fee amounts be based on the contracting officer's determination of fee need and not on the application of weighted guidelines; (b) all allowable and allocable costs be moved from fee to the cost reimbursement portion of the contract; and (c) guidance be developed regarding what costs are to be considered ordinary and necessary to the operation of an FFRDC; and (8) DOD has indicated that it is working to improve the fee management process based on these recommendations, as well as the most recent GAO and DOD Inspector General work on this issue.
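The fee breakdown in the summary above can be verified with simple arithmetic. The sketch below (dollar figures in millions, taken from the report; the script itself is illustrative and not part of GAO's methodology) confirms that the $11.5 million spent on research is about 74 percent of the $15.5 million fee, leaving the reported $4 million remainder:

```python
# Verify the fee-use breakdown cited in the summary: $11.5 million of a
# $15.5 million Air Force contract fee went to research; the remainder
# covered capital equipment, property improvements, and unreimbursed
# expenses (figures in millions of dollars, as stated in the report).

total_fee = 15.5   # fiscal year 1993 Air Force contract fee
research = 11.5    # portion spent on research

remainder = total_fee - research
research_share = research / total_fee * 100

print(f"Remainder:      ${remainder:.1f} million")  # $4.0 million
print(f"Research share: {research_share:.0f}%")     # 74%
```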
IRS has two major programs to collect tax debts. First, IRS staff in the telephone function may attempt collection over the phone or in writing. Second, if more in-depth collection action is required, field collection staff may visit delinquent taxpayers at their homes or businesses as well as contact them by telephone and mail. Under certain circumstances, IRS staff can initiate enforced collection action, such as recording liens on taxpayer property and sending notices to levy taxpayer wages, bank accounts, and other financial assets held by third parties. Field collection staff also can be authorized to seize other taxpayer assets to satisfy the tax debt. However, as we have previously reported, IRS has deferred collection action on billions of dollars of delinquent tax debt and, until recently, IRS collection program performance indicators have declined, in part because of higher workload in other priority areas and unbudgeted cost increases (such as for rent or pay). Although IRS data indicate that trends in collections have shown some improvements, the enforcement of the tax laws—including the collection of unpaid taxes—remains one of GAO’s “high-risk” areas of government. To help address the growing tax debt inventory and declines in IRS’s tax collection efforts, the Department of the Treasury proposed that Congress authorize IRS to use PCAs to help collect tax debts for simpler types of cases, paying them out of a revolving fund of tax revenues that they collect. IRS officials said that this proposal arose, in part, because of the belief that Congress was not likely to provide the increased budget to hire enough IRS staff to work the inventory of collection cases. In 2004, Congress authorized IRS to use PCAs to take certain defined steps to collect tax debts—including locating taxpayers, requesting full payment of the tax debt or offering taxpayers installment agreements if full payment cannot be made, and obtaining financial information from taxpayers. 
PCAs are to have limited authorities and are not to adjust the amount of tax debts or to use enforcement powers to collect the debts, which are inherently governmental functions that are to be performed by IRS employees. IRS is authorized to pay PCAs up to 25 percent of the amount of tax debts collected and retain another 25 percent of taxes collected to fund IRS collection enforcement activities. IRS initially envisions using PCAs on simpler cases that have no need for IRS enforcement action and that involve individual taxpayers who (1) filed tax returns showing taxes due but did not pay all those taxes and (2) made three or more voluntary payments to satisfy an additional tax assessed by IRS but have stopped the payments. To start, IRS plans to send cases to PCAs that have not recently been worked by IRS because of their lower priority, such as cases set aside because of inadequate IRS resources to work them or those in the queue to be worked but not yet assigned to IRS staff. After gaining some experience, IRS plans to expand the types of cases to be sent to PCAs to include those unassigned cases that IRS staff now may work, including those in which IRS attempts to find taxpayers that appeared to not file required tax returns, according to IRS officials. IRS first attempted to contract out collections with a pilot test in 1996 but abandoned the effort, in part, because the $3.1 million collected fell below the $4.1 million in direct costs plus the $17 million in lost revenues from using IRS staff to work on the pilot test rather than collect taxes. Also, limitations in IRS’s computer systems and ability to transfer data hampered efforts to send appropriate cases to PCAs. 
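The economics behind abandoning the 1996 pilot can be laid out explicitly. The back-of-the-envelope calculation below (dollar figures in millions, from the report; the script is an illustrative sketch, not an official IRS computation) shows how far collections fell short once forgone revenue is counted:

```python
# Why the 1996 IRS private-collection pilot was abandoned: taxes
# collected fell well short of the pilot's total cost once revenue
# forgone by diverting IRS staff is counted (figures in $ millions,
# as stated in the report).

collected = 3.1       # taxes collected through the pilot
direct_costs = 4.1    # direct costs of running the pilot
lost_revenue = 17.0   # revenue forgone by diverting IRS staff

total_cost = direct_costs + lost_revenue
net_result = collected - total_cost

print(f"Total cost: {total_cost:.1f} million")  # 21.1 million
print(f"Net result: {net_result:.1f} million")  # -18.0 million
```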
The current PDC program differs from the 1996 pilot because IRS will require PCAs to try to resolve collection cases within guidelines rather than just remind taxpayers of their debt, will pay PCAs a percentage of dollars they collect rather than a fixed fee, and will electronically send and protect taxpayer data rather than send the cases manually. Appendix III provides some data and information about the PDC program in terms of costs, projected tax revenue to be collected, staffing, and cases to be sent to PCAs. Our 2004 report identified and validated five critical success factors for contracting with PCAs to collect tax debt. Table 1 describes the critical success factors and their related subfactors. To identify the critical success factors, we reviewed reports on contracting and interviewed parties with experience in contracting for debt collection, such as officials from 11 states, the Department of the Treasury’s Financial Management Service, the Department of Education, and three PCA firms that IRS selected as subject matter experts for the program. To corroborate the factors, we interviewed officials from IRS who were developing the PDC program, the IRS Office of Taxpayer Advocate, and the National Treasury Employees Union, which represents IRS employees. As a validation tool, we asked for comments on our draft list of factors from those whom we consulted to identify the factors as well as from officials at four additional PCA firms. We made changes based on their comments where appropriate. After receiving authority to use PCAs in 2004, IRS had planned to issue task orders to three PCAs in January 2006 as part of a limited implementation phase running through December 2007. However, IRS was delayed by a lawsuit and bid protest filed by certain PCAs to challenge IRS’s request for and evaluation of bids from PCAs. 
Specifically:

- IRS issued a Request for Quotations (RFQ) to solicit debt collection services for the PDC program on April 25, 2005, under which IRS would start sending cases to three PCAs in January 2006.
- Because of a lawsuit filed in June 2005, IRS revised and reissued the RFQ on October 14, 2005, with plans to send cases to the PCAs in July 2006.
- IRS selected the three PCAs on March 9, 2006.
- Because one of the PCAs that were not selected filed a bid protest later in March 2006, IRS stopped working with the three selected PCAs and pushed back the date to send cases to those PCAs to the August-September 2006 time frame.
- IRS prevailed in the bid protest in a decision issued on June 14, 2006, allowing it to resume its work with the selected PCAs.
- IRS sent cases to the three PCAs on September 7, 2006.

In addition to contracting with PCAs, the PDC program includes IRS’s acquisition and deployment of an information system for automating case selection and managing the case workload. IRS plans to eventually also use this system to select and manage the caseloads for its telephone and field collection functions. IRS had originally planned to deploy the system with two limited-functionality subreleases concurrent with the limited implementation phase (in which IRS is contracting with three PCAs through December 2007) and begin ramping up the number of contractors (eventually to up to 12) with the third, fully functional information system subrelease in January 2008. However, IRS officials said that information systems budget constraints require IRS to change its information system plan. Although IRS has not yet finalized decisions on ramping up the number of PCAs and implementing the information system, the proposed plan IRS officials are considering is to begin increasing the number of PCAs and deploy an interim subrelease with some enhancements in January 2008, but delay the full-function subrelease indefinitely. 
As shown in table 2, in preparation for turning over collection cases to PCAs, as of September 15, 2006, IRS has made major progress in addressing the 5 critical success factors and 17 related subfactors for contracting for tax debt collection, but nevertheless has more to do. IRS has completed steps to address 14 of the 17 subfactors. Although IRS has taken steps on the remaining 3 subfactors, IRS still has work to do to complete addressing them. For example, IRS had not yet documented all of its specific goals and related measures to orient and evaluate the PDC program in terms of achieving desired results, such as goals and measures for improving the productivity of IRS staff. Also, IRS had not determined all historical program costs, that is, how much IRS has invested to date to develop and implement the PDC program. Finishing work to address the critical success factors could help achieve desired results—such as collecting tax debts—but cannot guarantee success, which depends, in part, on how well IRS addresses the factors, identifies problems, and resolves problems in the limited implementation phase. Although IRS officials indicated that a purpose of the limited implementation phase is to assure readiness for full implementation, IRS has not yet documented how it will identify and use the lessons learned to ensure that each critical success factor is adequately addressed before expanding the program. Because program success will be affected by how well IRS identifies and makes needed adjustments to resolve problems, tracking the lessons learned in the limited implementation phase is critical. According to IRS officials, during the limited implementation phase, they plan to collect information to provide baselines, trends, and a basis for making any necessary changes. However, officials did not have specifics on how IRS would ensure all factors had been adequately addressed before moving to full implementation in January 2008. 
Also, IRS has not documented criteria that it will use to determine whether limited implementation phase performance was sufficient to warrant program expansion. IRS officials indicated that they plan to further discuss performance criteria that could trigger a go/no go decision, and might consider criteria such as the amount of taxes collected and indications of PCAs abusing taxpayers or misusing taxpayer data. IRS has not decided on whether these targets will include the amounts of collected taxes compared to program costs, which was a key reason for canceling the 1996 PCA pilot program. Finally, IRS will have a little more than a half year to identify the lessons learned before incorporating them into the solicitation for the next contract, which IRS intends to release in March 2007 in order to begin expanding the number of PCAs in January 2008. IRS has begun work to design a study intended to respond to a recommendation in our May 2004 report. IRS plans to compare the net dollars collected through the PDC program (dollars paid by taxpayers less fees paid to PCAs) to the dollars IRS could expect to collect if it invested its PDC-related operating costs into having IRS staff work the “next best” cases under IRS’s collection system. IRS is planning to define the cases it considers to be “next best;” gather data on PCA cases for 6-12 months; and do two iterations of the study, one in September 2006 and one in March 2007. In the documented study design, IRS would exclude the fees paid to PCAs from the costs and subtract those fees from the tax debts collected by PCAs. While such a study might produce useful information, it will not meet the intent of our recommendation. The study would not compare the results of using PCAs with the results IRS could get if given the same amount of resources, including the fees to be paid to PCAs (which are to be paid from federal tax receipts), to use in whatever fashion that officials determine would best meet tax collection goals. 
Appendix I includes more information on the status of IRS’s implementation of the PDC program. As discussed in more detail below, we are recommending that IRS complete establishing for the PDC program results-oriented goals and measures; information on costs; plans for evaluations; and criteria and processes for assessing the critical success factors and program performance. We also are recommending that IRS ensure that its planned comparative study of using PCAs informs decision makers of all the program costs and the best use of those federal funds. In providing written comments on a draft of this report (see app. V), the Commissioner of Internal Revenue agreed with our recommendations and outlined some actions IRS has initiated to respond to some of them. Although IRS’s actions do not guarantee PDC program success, IRS made significant progress in addressing the 5 critical success factors and 17 related subfactors before sending cases to PCAs for the limited implementation phase. Taken together, these actions were intended to achieve such important ends as ensuring that the selected PCAs will be able to do the job and work the range of cases assigned, that IRS will have the necessary resources and caseload ready to do its part, and that taxpayers’ rights and data will be protected. Even with this progress, IRS has not yet completed the related steps that it must take for 3 subfactors on setting goals and measures, determining all program costs, and evaluating the program. Having information on whether the program met its goals and desired results given the program costs would be critical for policymakers. In addition, IRS lacks clear criteria and processes for assessing how well it addressed the critical success factors and whether the program performance warrants expanding the number of PCAs and turning over more cases to them. 
It is understandable that IRS officials have focused on rolling out this new program and dealing with many pressing concerns such as making sure that the PCAs are ready and that IRS can do its part, while delaying work on these three subfactors and on the criteria and processes for deciding on future program expansion. However, if it waits too long, IRS risks not having critical information in a timely and cost-effective manner in order to answer important questions about whether the PDC program is producing desired results at acceptable costs and whether the program should be expanded. Having plans to answer these questions is especially critical now that lawsuit and bid protest delays have reduced the time that IRS has to collect and analyze performance data before having to make decisions about expanding the PDC program. Therefore, it is all the more important that IRS determine program costs and make decisions about its goals and measures, evaluation plans, approach to assessing critical success factors, and program expansion decision criteria as soon as possible. Related to such decisions on expansion is IRS’s planned comparative study of using PCAs. If this study is not adequately designed and implemented, policymakers may not be aware of the true costs of contracting with PCAs—including the fees paid to PCAs. They also would not be aware of the potential impact of increasing IRS funding, and thereby miss the opportunity to know whether contracting with PCAs is the best use of federal funds for meeting tax collection goals. 
To ensure that IRS decision makers will have timely access to the information needed to make informed, data-based decisions about the private debt collection program, we recommend that, as soon as possible, and certainly before any expansion of the PDC program beyond the initial round of cases sent to PCAs, the Commissioner of Internal Revenue complete establishing:

- results-oriented goals and measures for the program based on the best available information;
- reliable, verifiable information on all the costs of the program, to the extent possible;
- plans for evaluating the results of the program in terms of expected costs, goals, and desired results; and
- clear criteria and processes for assessing how IRS addressed the critical success factors in the limited implementation phase and whether PDC program performance warrants program expansion.

We also recommend that, as IRS continues planning its comparative study of using PCAs, the Commissioner of Internal Revenue ensure that the study methodology and the IRS reports on the study results will inform decision makers of the full costs of the PDC program, including the fees paid to PCAs, and the best use of those federal funds. The Commissioner of Internal Revenue provided written comments on a draft of this report in a letter dated September 20, 2006 (which is reprinted with its enclosures in app. V). The Commissioner noted that he was pleased that our report acknowledges IRS’s accomplishments and steps to protect taxpayer data and rights. The Commissioner also noted that IRS agreed with our recommendations and had initiated efforts to address them, as discussed below. IRS agreed with the recommendation. In discussing our draft report with IRS officials, we clarified that the goals and measures should be logically linked to IRS’s five desired results and that IRS should document any indirect links and why more direct linkages were not made. 
In turn, IRS’s letter provided information on such linkages, including the indirect linkage for the desired result involving increased public confidence, and provided a revised version of our appendix IV (which we reprint with the Commissioner’s letter in app. V) with columns added to show the linkages between the desired results and the proposed goals and measures as they appeared in our draft report. Although we did not have time to fully review IRS’s information, we are gratified to see that IRS has established some program goals and measures and has made progress in developing the linkages. We look forward to IRS developing the related measures and data, such as for reducing the penalties and interest paid, better utilizing IRS staff, freeing up IRS staff to work more complex cases, and significantly reducing case backlogs. We also look forward to IRS identifying specific goals—referred to as “targets” in IRS’s comments—that IRS will strive to achieve beyond those listed in appendix IV. IRS agreed with our recommendation. In response to our draft report, IRS provided us documentation that it had implemented a system to track PDC program costs going forward from July 2006. When we discussed our draft report with IRS officials, they said that IRS will face difficulties in estimating some of the PDC program costs incurred before the tracking system was established. Based on this new information, we revised our recommendation to state that IRS should complete establishing verifiable, reliable information on all the costs of the program, to the extent possible. IRS’s comments state that it will furnish reconstructed historical costs as soon as they are compiled. Although we look forward to receiving such cost information, we encourage IRS to use the cost information to manage and evaluate the PDC program and to inform policymakers. IRS provided a combined response on this and the last recommendation dealing with the comparative study (which is discussed below). 
IRS agreed with our recommendation to evaluate the program, but did not provide any additional information on how it plans to do so. We look forward to IRS establishing and documenting specific plans for evaluating the program over time and reporting the evaluation results. IRS agreed with this recommendation and noted that its decision on whether to expand the PCA program will be driven by several factors, such as the composition of the inventory and cases to be worked by PCAs, IRS resource capacity, and PCA performance. We look forward to IRS finalizing and documenting the criteria and processes, which could consider factors listed in this report, such as PCAs’ treatment of taxpayers and taxpayer data, the tax amounts collected, and the cost of collecting the taxes. We also look forward to IRS documenting its criteria and processes for assessing the critical success factors. In agreeing with this recommendation, IRS noted that it has structured the study so that data can be analyzed with and without the PCA fees. In discussing the draft report with IRS officials, the officials said that the study will include an analysis of the PCA fees as costs, not as a reduction of gross revenue, and the study will project what IRS would have collected had those costs been used to fund IRS’s collection program. We look forward to receiving more information on IRS’s study approach and the study results as IRS begins the first study iteration in September 2006. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to the Chairman and Ranking Minority Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. Copies will be made available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To what extent will the Internal Revenue Service (IRS) have addressed the critical success factors before turning over collection cases to private collection agencies (PCA) for the limited implementation phase? How will IRS use the lessons learned from the limited implementation phase to assess critical success factors and program performance before full program implementation? Is the design of IRS’s planned study of using PCAs adequate to provide useful information to help determine whether contracting is the best use of federal funds for achieving tax collection goals? Before turning over cases to PCAs, IRS addressed:

- results orientation issues by establishing expected costs and desired results for the program;
- agency resources issues by estimating and funding IRS staffing needs to administer the program in the limited implementation phase;
- workload issues by selecting and analyzing cases to identify the types that should not be sent to PCAs and making needed changes to case selection programming before sending the cases;
- taxpayer issues by taking steps to obtain feedback on PCA employees’ treatment of taxpayers, provide taxpayers information on how to contact the National Taxpayer Advocate, and monitor PCAs’ phone calls with taxpayers; and
- evaluation issues by planning various ways to monitor PCAs, such as site reviews of training records and information systems records to ensure PCAs comply with related requirements.

The extent to which IRS has addressed the critical success factors and the 17 related subfactors before turning over the cases is summarized in table 1. 
Objective 1: The Extent to Which IRS Will Have Addressed the Critical Success Factors and Subfactors Before Turning Over Cases to PCAs (cont’d) IRS had made progress but did not yet have results-oriented performance goals and measures for the results IRS has said will come from the program; reliable, verifiable information on all PCA-related costs; and details on when and how evaluations would be done to determine whether the program met goals and expectations, in part because of a lack of complete results-oriented program goals and related performance measures. The following slides discuss IRS’s actions to address each of the 17 subfactors. Objective 1: Results Orientation Subfactor: Determine Expected Program Goals, Costs, and Overall Results IRS has completed steps to establish expected costs (see app. III) and desired results for contracting with PCAs. IRS officials said they had established private debt collection (PDC) program goals and related measures but were unable to document a complete list of goals and measures and their approval. Based on our feedback since March 2006, IRS has been revising these goals and measures and provided an updated revision in July 2006. Because of their draft status and late development, we have not fully analyzed them (app. IV lists the proposed goals and measures) but observed that not all these proposed measures have goals. Some goals are to be established based on actual PDC program performance in 2007. IRS officials said they need to work with the PCAs to establish a PCA employee satisfaction goal. 
We also observed that the proposed measures are not fully linked to the five desired results for the PDC program (as identified in IRS documents and officials’ statements). Because of the difficulty in directly making such linkages, using intermediate proxy measures is acceptable. IRS provided information in August 2006 on which measures were linked to the desired results but, as of September 15, 2006, had not yet documented the logic behind the linkages or whether more direct linkages could be made. Table 2 shows our preliminary observations on the extent to which the proposed measures link to the desired results. Results-oriented measures are important for allowing organizations to track the progress they are making toward their goals and give managers crucial information on which to base their organizational and management decisions. Leading organizations recognize that performance measures can create powerful incentives to influence organizational and individual behavior and reinforce the connection between the goals outlined in strategic plans and the day-to-day activities of their managers and staff. Linking program performance to higher-level goals can provide a clear, direct understanding of how the achievement of the program’s goals will lead to the achievement of the agency's strategic goals. We look forward to receiving more information from IRS officials as they work toward documenting the final, approved PDC program goals and related performance measures and the measures’ linkages to the desired results. IRS has completed the tasks to establish contract provisions, performance standards, operational expectations, and rewards and disincentives. 
The Request for Quotations (RFQ, or contract solicitation) contains key contract elements of evaluation criteria, performance measurement, and compensation arrangements. IRS’s review and approval of PCAs’ operational plans, IRS meetings with PCAs in July 2006 and August 2006, and follow-up actions resulting from these meetings were to help clarify expectations. During development of the program, IRS officials consulted with selected PCAs and state and federal agencies that had contracted for debt collection, in part, to ensure IRS’s program would be designed to provide PCAs adequate latitude to achieve goals. For example, IRS’s contract will allow PCAs to vary in their practices, such as the frequency of attempted contacts with taxpayers, with the intent of enabling each PCA to utilize its competitive advantage. Also, IRS provided potential bidders for the PCA contract an opportunity to review and comment on any restrictions in the draft PCA procedural requirements. The PCAs provided no comments on the requirements. IRS has completed contracting process tasks designed to ensure that the selected PCAs are able to meet operational and performance expectations. IRS sought proposals from preapproved vendors listed in the General Services Administration’s (GSA) Federal Supply Schedule contract for tax collection services. GSA had already determined that listed vendors were capable of performing the work. The solicitation contained a detailed statement of work and required vendors to provide technical, past performance, and pricing information. IRS received 33 responsive proposals and evaluated all proposals using the three criteria specified in the solicitation: (1) relevant experience and past performance, (2) technical approach, and (3) management plan. These criteria are commonly used in government contracting. IRS selected three vendors to receive PCA contracts. 
The ninth-ranked vendor protested the evaluation, but GAO issued a decision on June 14, 2006, denying the protest and upholding the evaluation conducted by IRS. According to IRS officials, these committees serve as a means to inform IRS executives, including the Commissioner, and provide adequate assurance and opportunity for feedback on management’s commitment to the PDC program. IRS officials said the executive briefings would continue throughout the limited implementation phase. The RFQ task order requires PCAs to train all their employees before they begin any taxpayer collection activity, including training on taxpayer rights and privacy awareness. The RFQ requires the PCA employees to sign a form certifying that they completed the required training. The PCA must maintain these forms for review by IRS upon request. In response to our preliminary observations, IRS informed the PCAs that their employees must receive a proficiency score of 70 percent or better after training before being allowed to work on cases (as do IRS telephone collection employees) and plans to contractually require this test score threshold in future RFQs. Objective 1: Agency Resources Subfactor: Ensure Appropriate PCA Employee Training (cont’d) IRS plans to monitor PCAs’ performance through quality review assessments to identify trends and gauge training effectiveness. IRS will use the same quality review process for PCA cases that it uses for cases worked by its own employees. Prior to turning over cases to the PCAs, IRS officials conducted site visits to monitor initial PCA training sessions to ensure the content and delivery of training followed PCAs’ approved training plans. IRS officials developed a checklist for monitoring the PCA training. IRS has completed the tasks for computer systems data exchange, payment tracking, and account updating. IRS’s system for tracking payments and updating taxpayer accounts is the same as that used for other tax payments. 
In developing the management information system for handling PCA cases, IRS completed its system requirements, design, development, and testing activities in accordance with its approved methodology for acquisition of information systems. IRS began its “partial production phase” (a simulated version of the limited implementation phase using IRS staff rather than PCAs) in January 2006 to help test processes and procedures. Before turning cases over to PCAs, IRS tested its capability to electronically transfer encrypted case files to them. IRS has begun, but not completed, work to determine all the costs of the PDC program. Beginning in July 2006, IRS has had an accounting system that can be used to track program costs and has established codes and procedures to track private debt collection program costs. However, since prior costs were not systematically tracked, IRS would have to use available historical cost data to determine the costs that were incurred prior to systematic tracking, including such costs as those of planning the program beginning as far back as October 2001. IRS officials said they are working to use available data to determine the historical costs. IRS provided us documentation on some of these costs, but without supporting information, it was not possible for us to assess whether it captured all costs or whether the costs provided were reliable. For the limited implementation phase, IRS will turn over to PCAs only cases that IRS currently is not working, including those “shelved” because of IRS’s inadequate resources to work them and those in the queue to be worked by IRS employees but not yet assigned. However, in full implementation, IRS officials said they may assign PCAs unassigned cases from the various types of cases that IRS employees might work. To reach case placement and collection goals for the limited implementation phase, IRS is increasing case age thresholds to 2 years since the case was put into its current status. 
IRS officials said they are planning more changes in the limited implementation phase, including further increasing case age and dollar thresholds. Objective 1: Workload Subfactor: Select Appropriate Type and Volume of Cases for PCAs to Work (cont’d) Whether IRS will reach its case placement and collection goals with these cases is uncertain. Some evidence suggests that PCAs are generally less successful as the age of debt increases. However, IRS officials said that the Financial Management Service within the Department of the Treasury has been successful in using PCAs to collect debt in this age range. IRS has completed work on this factor with the following procedures:

- the RFQ requires PCAs to mail a letter to each taxpayer within 10 days of receiving a case, for cases for which IRS provides a valid address;
- target PCA reimbursement rates reflect higher compensation for lower-dollar cases;
- the PCA Policy and Procedures Guide clarifies that PCAs are to perform searches to locate all taxpayers that do not respond to initial contacts;
- IRS’s quality review procedures include a check that cases are being worked actively;
- according to IRS officials, IRS plans to analyze these quality review data and other data reports to identify trends in working different types of cases; and
- PCAs are allowed to return accounts after 6 months, and IRS officials said that before approving returns, they will check whether PCAs had taken the appropriate collection actions.

IRS has completed the steps intended to ensure that taxpayers are treated properly. 
IRS has developed procedures to protect taxpayers and to ensure taxpayers are treated properly, including the following:

- IRS will continue to require all new PCA employees to have background investigations, photo identifications, and training on taxpayer rights before they have access to taxpayer information;
- IRS will conduct taxpayer satisfaction surveys;
- IRS will monitor PCAs’ compliance through quality reviews of PCAs’ telephone calls and case documents; and
- IRS developed a formal complaint process for taxpayers to use based on input and comment from the National Taxpayer Advocate.

IRS completed background investigations and monitoring of PCA employees’ training before turning over cases to PCAs. IRS has completed the steps intended to ensure the security of taxpayer information. For example, IRS completed site visits of the PCAs and performed its safeguard computer security evaluations. The PCA contract statement of work addresses security requirements by referring to compliance with information security guidance and by requiring minimum system capabilities, such as end-to-end encryption. As discussed earlier, IRS did perform the one type of performance monitoring that was to be done before turning over cases to PCAs: monitoring PCAs’ training of their own employees. As also discussed earlier, IRS has taken steps to implement various methods to monitor PCAs’ performance in working cases, including telephone monitoring, case quality reviews, and taxpayer satisfaction surveys. Our previous work has shown that evaluations are critical to ensuring that programs achieve desired results, government funds are well spent, and the agencies are held accountable for the performance and effectiveness of the programs they administer. As discussed on the next slides, IRS did not have specifics on how it will assess how critical success subfactors were addressed in the limited implementation phase. 
Without such assessments, IRS may lack information with which to better understand why goals (once they are established) were or were not achieved and to identify any needed adjustments. Although IRS officials said a purpose of the limited implementation phase is to assure readiness for full implementation of the program, it is not clear when IRS will decide, in terms of addressing critical success factors, if it is ready to proceed with full implementation. IRS officials said that they intend to establish a date and performance criteria that would trigger a go/no go decision, but have delayed such work until after limited implementation starts in order to finish the tasks that must be done to turn over cases to the PCAs. Generally, IRS officials said they will collect information during the limited implementation phase to establish baselines, identify trends, and provide a basis for making changes, if needed, to the program. However, IRS officials could not cite specific circumstances that would cause IRS to discontinue or delay full implementation of the program. Officials said that before expanding the program, they would consider a variety of data or criteria, such as the amounts of collected taxes and indications of PCAs mistreating taxpayers or misusing tax data. IRS has not decided whether these targets would include comparing the amounts of collected taxes to program costs, which was a reason for canceling the 1996 PCA program pilot. IRS did not have specifics on how and when collected information would be reviewed to identify and use the lessons learned from the limited implementation phase to ensure that the critical success factors have been addressed before IRS expands to full implementation. 
Objective 2: How IRS Will Use Limited Implementation Lessons Learned to Assess Critical Success Factors and Performance Before Full Implementation (cont’d) Specific plans for how and when IRS will make decisions about readiness on critical success factors and program expansion can help ensure that IRS has the data it will need in time to make those decisions. Because of implementation delays caused by a contract award lawsuit and bid protest, IRS will have 16 instead of 24 months to identify any needed adjustments and make decisions on expanding the program. For limited implementation, IRS will have 7 months’ experience—from September 2006 to March 2007—before issuing its next contract solicitation under its plans to have more PCAs working more cases by January 2008. As originally planned, IRS would have rolled out cases in January 2006. Our previous work has shown that data-based decision making is important for improving government operations and programs. Collecting and reviewing data, whether qualitative or quantitative, to help make decisions about expanding the PDC program will require resources as well as consideration of how to balance the costs and benefits of the data collection and review, including the risks of not ensuring that the critical success factors are adequately addressed or of ill-advised or premature expansion of the PDC program. The study design indicates that IRS will not count the fees paid to PCAs as program costs. IRS will subtract these fees from the tax debts collected and report the net dollars collected by PCAs. For example, if the study found that IRS's PDC program administration costs were $6 million, PCAs collected $100 million in tax debt, and PCAs were paid $24 million in fees, the study would compare only the net $76 million that PCAs collected to all the dollars IRS could be expected to collect if the $6 million were spent on IRS’s collection program. 
Objective 3: IRS’s Planned Study of Using PCAs (cont’d) Although IRS officials said that data on fees to PCAs—$24 million as shown in the above hypothetical results—could be made available to decision makers in the study results, the study plan document is not clear on that point or on whether the total costs of the program, including the PCAs’ fees, will be made apparent in the study. While the study may produce useful information, it will not compare the results of using PCAs with the results IRS could get if it were given the same amount of resources, including the fees to be paid by the government to the PCAs. As a result, the IRS study will not meet the intent of our recommendation. Our previous work has shown that for informed decision making, agency managers and other stakeholders need reliable, valid data on the costs of government programs. Economic principles and government cost analysis criteria suggest that federal government costs and social costs should be considered in analyzing programs and policies. For example, a study that would meet the intent of our recommendation would compare the dollars collected by PCAs to the dollars that IRS could be expected to collect if the true costs to the government—such as the $6 million from the PDC program administration budget plus the $24 million in PCA fees (which are paid out of federal tax receipts) as shown in the above hypothetical example—were spent by IRS on working its next best cases, using the most effective strategy for identifying and working such cases. IRS officials said that such a comparison is not realistic because Congress would not approve such a budget increase. As noted in our 2004 report, IRS officials said that the proposal that Congress authorize IRS to use PCAs arose, in part, because of the belief that Congress was not likely to provide the increased budget to hire enough IRS staff to work on the inventory of collection cases. 
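The two accounting treatments at issue can be illustrated with a short sketch using the hypothetical figures from the example above ($6 million in IRS administration costs, $100 million collected by PCAs, $24 million in PCA fees); the function names are ours, not IRS's:

```python
# Hypothetical figures from the report's example.
ADMIN_COSTS = 6_000_000       # IRS's PDC program administration costs
PCA_COLLECTIONS = 100_000_000  # tax debt collected by PCAs
PCA_FEES = 24_000_000          # fees paid to PCAs out of federal tax receipts

def netted_collections(collections, fees):
    """IRS's documented study design: subtract PCA fees from collections,
    so only the net amount is compared to what IRS could collect by
    spending the administration budget alone."""
    return collections - fees

def full_cost_benchmark(admin_costs, fees):
    """The treatment the report recommends: count the fees as a cost to
    the government, so the benchmark is what IRS could collect if it
    spent the administration budget plus the fees on its next best cases."""
    return admin_costs + fees

print(f"Net PCA collections (study design): ${netted_collections(PCA_COLLECTIONS, PCA_FEES):,}")
print(f"Federal funds in the full-cost benchmark: ${full_cost_benchmark(ADMIN_COSTS, PCA_FEES):,}")
```

Under the netted design, decision makers see $76 million compared against what $6 million of IRS spending would yield; under the full-cost view, the comparison is $100 million collected against what $30 million of IRS spending would yield.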
IRS’s proposed study approach—by netting PCA fees from dollars collected by PCAs—apparently adopts IRS’s assumption about potential funding increases. However, unless Congress is fully informed on the true costs of the PDC program, and the potential impact of increasing collections funding, it will lack key information with which to make decisions on how federal funds can best be spent to meet tax collection goals, in concert with other information about trade-offs with other government programs. IRS officials stated that supplemental research efforts are being designed to identify the best use of PCAs among all cases in the collections inventory. The status and methodologies of these efforts are not clear because IRS has not yet provided us documents on them. To determine to what extent the Internal Revenue Service (IRS) addressed the critical success factors before turning over collection cases to the private collection agencies (PCA), we reviewed program documents and interviewed IRS officials. IRS agreed with the critical success factors we identified. We identified the approaches/methods IRS intended to use to address the factors and related subfactors and identified any steps IRS had remaining to address each factor before turning over cases to PCAs. We analyzed interviews and documents to identify any gaps in IRS’s approach, such as factors for which IRS lacked intended approaches/methods to address a factor, documented plans for completing steps, or details on how intended approaches/methods would be implemented. For selected subfactors related to areas for which we had related expertise and readily available criteria (government acquisition, information technology development and security, and financial management), we analyzed IRS’s program documents and compared IRS’s approach for addressing the subfactor to the criteria. 
For example, our information security staff reviewed IRS’s approach for addressing information security issues in light of Federal Information Security Management Act and National Institute of Standards and Technology requirements. We did not attempt to analyze how well IRS addressed the factors or whether IRS made the right decisions on issues such as PCA employees’ training or taxpayer protections. To determine how IRS will use the lessons learned from the limited implementation phase to assess the critical success factors and program performance before full program implementation, we interviewed IRS officials and reviewed available agency documents and plans. We focused on when and how, if at all, IRS would determine whether its approaches/methods for addressing the factors worked as intended; if program performance warrants program expansion; and what changes, if any, should be made before fully implementing the program. To determine whether IRS’s planned approach to study using PCAs will provide useful information with which to determine if contracting is the best use of federal funds for achieving tax collection goals, we reviewed program documents and interviewed officials from IRS supported by contractor staff assisting them in developing the study. We used data only as background for reporting and did not formally assess their reliability. To the extent possible, we corroborated information from interviews with documentation and, where not possible, we report the information as attributed to IRS officials. Although we obtained documentation that IRS had completed steps to address the critical success subfactors, we did not do detailed verification of the documents, in part due to the limited time we had between IRS completing and documenting some steps taken in preparation for turning the cases over to PCAs on September 7, 2006, and the due date of this report. 
We did our work from August 2005 to September 2006 in accordance with generally accepted government auditing standards.

Proposed private debt collection program performance measures (in full program implementation, inclusive of the limited implementation phase):

- Number of cases placed with PCAs in first 12 months
- Percentage of cases placed with PCAs that are resolved
- Number/percentage of PCA cases recalled to IRS
- Number/percentage of PCA cases that are deemed currently not collectible
- Number/percentage of cases involving bankruptcies or decedents
- PCA time to close the case
- Amount of unpaid tax debts that are placed with PCAs
- Amount of unpaid tax debts that are collected
- Collection percent
- Percentage of unpaid tax debts placed with PCAs that
- Amount of PCA collections that IRS retains to fund collection enforcement activities
- Cases closed as fully paid
- Cases closed with an agreement to satisfy the taxpayer’s unpaid tax debt in 3 to 5 years
- Cases closed with an agreement to satisfy the taxpayer’s unpaid tax debt in more than 5 years
- Percentage of surveyed taxpayers responding that they were satisfied
- Satisfaction score for IRS employees in PDC program
- Satisfaction score for PCA employees working cases
- Accuracy score for PCA cases
- Timeliness score for PCA cases
- Professionalism score in PCA cases
- Verified major complaints against PCA employees
- Overall percentage quality score for cases worked by PCAs

Goals will be determined using experiences with PCA cases over the first year. Goals will be developed using IRS’s revenue projection model. Goals will be based on those used in the IRS telephone collection function. In addition to the contact named above, Tom Short, Assistant Director; John Davis; Charles Fox; Timothy Hopkins; Ronald Jones; Jeffrey Knott; Veronica Mayhand; Edward Nannenhorn; Cheryl Peterson; and William Woods made key contributions to this report.
In 2005, the inventory of tax debt with collection potential had grown to $132 billion. The Internal Revenue Service (IRS) has not pursued some tax debt because of limited resources and higher priorities. Congress has authorized IRS to contract with private collection agencies (PCA) to help collect tax debts. IRS has developed a Private Debt Collection (PDC) program to start with a limited implementation in September 2006 and fuller implementation in January 2008. As requested, GAO is reporting whether (1) IRS addressed critical success factors before limited implementation, (2) IRS will assess lessons learned before fuller implementation, and (3) IRS's planned study will help determine if using PCAs is the best use of federal funds. IRS made major progress in addressing the 5 critical success factors and 17 related subfactors for the PDC program before sending cases to PCAs. GAO reviewed program documents and interviewed officials to identify IRS's approaches and steps taken to address the factors. Taken together, IRS's actions were intended to ensure that the PCAs will be able to do the job and work the range of cases assigned, IRS will have the necessary resources and caseload ready, and taxpayer rights and data will be protected. Even with this progress, IRS has not completed work for three subfactors--setting results-oriented goals and measures, determining all PDC program costs, and evaluating the program based on the results-oriented goals and measures, once they are established. As a result, IRS risks not providing complete information that decision makers would find useful. Finishing work on the factors could help achieve but cannot guarantee program success, which also depends, in part, on how IRS addresses the factors and identifies and resolves any problems in the limited implementation phase. 
Although IRS officials indicated that a purpose of the limited implementation phase is to assure readiness for full implementation with up to 12 PCAs, IRS has not yet documented how it will identify and use the lessons learned to ensure that each critical success factor is addressed before expanding the program starting in January 2008. Because program success will be affected by how well IRS makes adjustments, assessing the lessons learned in limited implementation is critical. Also, IRS has not documented criteria that it will use to determine whether the limited implementation performance warrants program expansion. IRS officials indicated that they are considering criteria that could trigger a go/no go decision, such as the amount of taxes collected and indications of PCAs abusing taxpayers or misusing taxpayer data. IRS has not decided whether these targets will include comparing the taxes collected to program costs, which was a key reason for canceling a 1996 PCA pilot program. Finally, IRS will have a little more than a half year to identify the lessons learned before incorporating them into the next contract solicitation, which IRS intends to release in March 2007. Related to such decisions on expansion is IRS's planned comparative study of using PCAs. That study is to compare using PCAs to investing IRS's PDC-related operating costs into having IRS staff work IRS's "next best" collection cases. Under the documented study design, IRS would exclude the fees paid to PCAs from the costs and subtract those fees from the tax debts collected by PCAs. While such a study might produce useful information, it will not compare the results of using PCAs with the results IRS could get if given the same amount of resources, including the fees to be paid to PCAs, to use in what IRS officials would judge to be the best way to meet tax collection goals. 
Adequately designing and implementing the study is important to ensure policymakers are aware of the true costs of contracting with PCAs and know whether PCAs offer the best use of federal funds.
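The difference between the documented study design and an equal-resource comparison can be illustrated with a small sketch. All dollar figures and function names below are hypothetical, chosen only to show how excluding PCA fees from the cost side changes the outcome; they are not IRS data.

```python
# Hypothetical illustration (not IRS data) of the two comparison designs.

def documented_design_net(pca_gross, pca_fees, irs_operating_costs):
    """Documented study design: PCA fees are subtracted from collections
    and excluded from the program's cost side."""
    return (pca_gross - pca_fees) - irs_operating_costs

def equal_resource_net(pca_fees, irs_operating_costs, irs_yield_per_dollar):
    """Alternative design: give IRS the same total resources, including
    the fees that would have gone to PCAs, and compare net collections."""
    total_resources = irs_operating_costs + pca_fees
    return total_resources * irs_yield_per_dollar - total_resources

# Example: $100M gross PCA collections, $20M in fees, $10M of IRS
# operating costs, and an assumed IRS yield of $4 per $1 invested.
print(documented_design_net(100, 20, 10))  # 70
print(equal_resource_net(20, 10, 4.0))     # 90.0
```

Under these invented figures, the documented design credits PCAs with $70 million net, while an equal-resource comparison would credit IRS staff with $90 million net; the point is only that the choice of study design, not the underlying data, can drive the conclusion.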
A considerable amount of health research relies on personally identifiable medical information, raising concerns about privacy and confidentiality. The federal system of protections was developed largely in response to biomedical and behavioral research that caused harm to human subjects. To protect the rights and welfare of human subjects in research, the Common Rule requires organizations conducting federally supported or regulated research to establish and operate IRBs, which are, in turn, responsible for implementing federal requirements for research conducted at or supported by their institutions. IRBs are intended to provide basic protections for people enrolled in federally supported or regulated research. Most of the estimated 3,000 to 5,000 IRBs in the United States are associated with a hospital, university, or other research institution, but IRBs also exist in managed care organizations (MCO), government agencies, and as independent entities employed by the organizations conducting the research. IRBs are made up of both scientists and nonscientists. The organizations that we contacted primarily conduct health research to advance biomedical science, understand health care use, evaluate and improve health care practices, and determine patterns of disease. These organizations use health-related information on hundreds of thousands, and in some cases millions, of individuals in conducting their research. The MCOs and integrated health systems in our study use medical records data, which are generated in the course of treating patients, to conduct epidemiological research and health services research, such as outcomes and quality improvement studies. For example, one MCO, in conducting a quality improvement study, determined from its claims database whether patients with vascular disease were receiving appropriate medications and reported the findings to patients’ physicians to assist in the treatment of their patients.
The pharmaceutical and biotechnology companies that we contacted also conduct health services and epidemiological research; but unlike MCOs and integrated health systems, they rely on data from other organizations for this type of research. One pharmaceutical company’s epidemiology department, for example, conducts large-scale studies using data from MCOs and health information organizations to monitor the effectiveness of drugs on certain populations. For pharmacy benefit management (PBM) firms, which administer prescription drug benefits for health insurance plans, a primary source of data is prescription information derived from prescriptions dispensed by mail or claims received from retail pharmacies. PBMs design and evaluate programs that are intended to improve the quality of care for patients who have specific diseases or risk factors while controlling total health care costs. One PBM in our study, for example, develops disease management programs; these programs depend on the ability to identify individuals with conditions, such as diabetes, that require more intensive treatment management. The health information organizations that we contacted rely solely on data from other organizations. Typically, they collect medical claims data from their clients or obtain it from publicly available sources, such as Medicare and Medicaid. They may also acquire data through employer contracts that stipulate that all the employers’ plans provide complete data to a health information organization. Examples of research projects include studies of the effects of low birth weight on costs of medical care and the effectiveness of alternative drug therapies for schizophrenia. Officials at the organizations we contacted believe that many of these studies require personally identifiable information to ensure study validity or to simply answer the study question. 
For longitudinal studies, researchers may need to track patients’ care over time and link events that occur during the course of treatment with their outcomes. Researchers may also need to link multiple sources of information, such as electronic databases and patient records, to compile sufficient data to answer the research question. For example, officials at one health information organization stated that without patient names or assigned patient codes, it would not have been possible to complete a number of studies, such as the effects of length-of-hospital stay on maternal and child health following delivery and patient care costs of cancer clinical trials. Some of the research conducted by the organizations we contacted must conform to the Common Rule or FDA regulations because it is either supported or regulated by the federal government. Several MCOs obtain grants from various federal agencies, including the Centers for Disease Control and Prevention; one health information organization that we contacted conducts research for federal clients, such as the Agency for Health Care Policy and Research. Some organizations that conduct both federally supported or regulated research and other types of privately funded research choose to apply the requirements uniformly to all studies involving human subjects, regardless of the source of funding. However, some other organizations that carry out both publicly and privately funded research apply the federal rules where required, often relying on IRB review at collaborators’ institutions, but do not apply the rules to their privately funded research. Pharmaceutical and biotechnology companies, for example, rely on the academic medical centers where they sponsor research to have in place procedures for informed consent and IRB review, but they do not maintain their own IRBs. Some organizations conduct certain activities that involve identifiable medical information, but they do not define these activities as research. 
For example, officials at several MCOs told us that they did not define records-based quality improvement activities as research, so these projects are not submitted for IRB review. But there is disagreement as to how to classify quality improvement reviews, and some organizations do submit these studies for IRB review because they define them as research. Finally, at some organizations, none of the research is covered by the Common Rule or FDA regulations and no research receives IRB review. For example, one PBM in our study, which conducts research for other companies—including developing disease management programs—does not receive federal support and, thus, is not subject to the Common Rule in any of its research. While it does not have an IRB, this PBM uses external advisory boards to review its research proposals. Another type of research that for some companies does not fall under the Common Rule or FDA regulations is research that uses disease or population-related registry data. Pharmaceutical and biotechnology companies maintain such registries to monitor how a particular population responds to drugs and to better understand certain diseases. While many organizations have in place IRB review procedures, recent studies that pointed to weaknesses in the IRB system, as well as the provisions of the Common Rule itself, suggest that IRB reviews do not ensure the confidentiality of medical information used in research. While not focusing specifically on confidentiality, previous studies by GAO and by the Department of Health and Human Services (HHS) Office of Inspector General have found multiple factors that weaken institutional and federal human subjects protection efforts. In 1996, we found that IRBs faced a number of pressures that made oversight of research difficult, including the heavy workloads of and competing professional demands on members who are not paid for their IRB services.
Similarly, the Inspector General found IRBs unable to cope with major changes in the research environment, concluding that they review too many studies too quickly and with too little expertise, and recommended a number of actions to improve the flexibility, accountability, training, and resources of IRBs. Under the Common Rule, IRBs are directed to approve research only after they have determined that (1) there are provisions to protect the privacy of subjects and maintain the confidentiality of data, when appropriate, and (2) research subjects are adequately informed of the extent to which their data will be kept confidential. However, according to the Director of the Office for Protection From Research Risks (OPRR), confidentiality protection is not a major thrust of the Common Rule and IRBs tend to give it less attention than other research risks because they have the flexibility to decide when it is appropriate to review confidentiality protection issues. Consistent with federal regulations, the seven IRBs that we contacted told us that they generally waive the informed consent requirements in cases involving medical records-based research. Researchers at the organizations we visited contend that it is often difficult, if not impossible, to obtain the permission of every subject whose medical records are used. As an example, the director of research at one integrated health system described a study that tracked about 30,000 patients over several years to determine hospitalization rates for asthmatic patients treated with inhaled steroids. The IRBs that we contacted told us that they routinely examine all research plans using individually identifiable medical information to determine whether the research is exempt from further review, can receive an expedited review, or requires a full review. 
Further, in reviewing research using individually identifiable genetic data, two of the IRBs had policies to consider additional confidentiality provisions in approving such research. The actual number of instances in which patient privacy is breached is not fully known. While there are few documented cases of privacy breaches, other reports provide evidence that such problems occur. For example, in an NIH-sponsored study, IRB chairs reported that lack of privacy and lack of confidentiality were among the most common complaints made by research subjects. Over the past 8 years, OPRR’s compliance staff has investigated several allegations involving human subjects protection violations resulting from a breach of confidentiality. In the 10 cases provided to us, complaints related both to research subject to IRB review and to research outside federal protection. In certain cases involving a breach in confidentiality, OPRR has authority to restrict an institution’s authority to conduct research that involves human subjects or to require corrective action. For example, in one investigation, a university inadvertently released the names of participants who tested HIV positive to parties outside the research project, including a local television station. In this case, OPRR required the university to take corrective measures to ensure appropriate confidentiality protections for human subjects. In response, the university revised internal systems to prevent the release of private information in the future. However, in other cases, OPRR determined that it could not take action because the research was not subject to the Common Rule and, thus, it lacked jurisdiction. For example, in a case reported in the media, OPRR staff learned of an experiment that plastic surgeons had performed on 21 patients using two different facelift operations—one on each half of the face—to see which came out better. 
OPRR staff learned that the study was not approved by an IRB and that the patients’ consent forms did not explain the procedures and risks associated with the experiment. In addition, the surgeons published a journal article describing their research that included before and after photographs of the patients. Because the research was performed in physician practices and was not federally supported, it fell outside the Common Rule and OPRR could take no action. Each organization that we contacted reported that it has taken one or more steps to limit access to personally identifiable information in their research. Many have limited the number of individuals who are afforded access to personally identifiable information or limited the span of time they are given access to the information, or both. Some have used encrypted or encoded identifiers to enhance the protection of research and survey subjects. Most, but not all, of the organizations have additional management practices to protect medical information, including written policies governing confidentiality. Some organizations have also instituted a number of technical measures and physical safeguards to protect the confidentiality of information. Officials from two of the companies that we contacted told us that they did not have written policies to share with us, and two other companies were unable to provide us with such documentation, although officials described several practices related to confidentiality. The organizations that did provide us with documentation appear to use similar management practices and technical measures to protect health information used in their health research, whether they generate patient records or receive them from other organizations. Typically, researchers are given access only to the information relevant to their studies. In addition to limiting access to certain individuals for specific purposes, some organizations have encrypted or encoded patient information.
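As a minimal sketch of what such encoding can look like in practice, the fragment below replaces patient identifiers with keyed-hash codes so that records for the same patient can still be linked without exposing the identity. The key custody, field names, and sample records are hypothetical and are not drawn from any organization in this study.

```python
import hashlib
import hmac

# Hypothetical key, held only by the small group authorized to see
# fully identifiable data (e.g., the research programming team).
SECRET_KEY = b"held-only-by-the-research-programmers"

def encode_patient_id(patient_id: str) -> str:
    """Derive a stable pseudonym; only key holders can regenerate or link it."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical records generated in the course of care.
records = [
    {"patient_id": "MRN-0042", "event": "inhaled steroid prescribed"},
    {"patient_id": "MRN-0042", "event": "hospitalization"},
]

# Researchers receive only the encoded identifier, yet both events for the
# same patient still link together, supporting longitudinal analysis.
deidentified = [
    {"pid": encode_patient_id(r["patient_id"]), "event": r["event"]}
    for r in records
]
assert deidentified[0]["pid"] == deidentified[1]["pid"]
```

A keyed hash, unlike a plain hash, prevents anyone without the key from re-identifying patients by hashing candidate identifiers and comparing the results.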
Researchers at one integrated health system, for example, work with information that has been encoded by computer programmers on the research team—the only individuals who have access to the fully identifiable data. In conducting collaborative research, the organizations that we contacted tend to use special data sets and contracting processes to protect medical information. For example, one MCO, which conducts over half of its research with government agencies and academic and research institutions, transfers data in either encrypted or anonymized form and provides detailed specifications in its contracts that limit use of the data to the specific research project and prohibit collaborators from re-identifying or transferring the data. Generally, company policies define the circumstances under which personally identifiable information may be disclosed and the penalties for unauthorized release of confidential information. Most company policies permit access only to the information that is needed to perform one’s job; 8 of the 12 organizations also require their employees to sign agreements stating that they will maintain the privacy of protected health information. Each organization that we contacted said it uses disciplinary sanctions to address employee violations of confidentiality or failure to protect medical information from accidental or unauthorized access, and an intentional breach of confidentiality could result in employee termination—which may be immediate. But they also pointed out that few employees have been terminated, and when they have, the incidents were not related to the conduct of research. The organizations that we contacted said they use a number of electronic measures to safeguard their electronic health data. 
Most reported using individual user authentication or personal passwords to ensure users access only the information that they need; some also use computer systems that maintain an electronic record of each employee who accesses medical data. These organizations may also use other technical information system mechanisms, including firewalls, to prevent external access to computer systems. In addition to electronic security, officials at some of the organizations told us they use various security measures to prevent unauthorized physical access to medical records-based information, including computer workstations and servers. Personally identifiable information is often an important component of research using medical records, and the companies we met with furnished many examples of useful research that could not have been conducted without it. Because our study focused on only a limited number of companies—in particular, those that were willing to share information about corporate practices—it is difficult to judge the extent to which their policies may be typical, nor do we know the extent to which their policies are followed. Nevertheless, most of the organizations we surveyed do have policies to limit and control access to medical information that identifies individuals, and many of them have adopted techniques, such as encryption and encoding, to further safeguard individual privacy. However, while reasonable safeguards may be in place in these companies, external oversight of their research is limited. Not all research is subject to outside review, and even in those cases where IRBs are involved, they are not required to give substantial attention to privacy protection. Further, in light of the problems that IRBs have had in meeting current workloads—one of the key findings of our earlier work as well as the work of HHS’ Office of Inspector General—it is not clear that the current IRB-based system could accommodate more extensive review responsibilities. 
In weighing the desirability of additional oversight of medical records-based research, it will be important to take account of existing constraints on the IRB system and the recommendations that have already been made for changes to that system. This concludes my prepared statement. I will be happy to respond to any questions that you or Members of the Committee may have.
Pursuant to a congressional request, GAO discussed the privacy of medical records used for health research, focusing on: (1) to what extent medical information used for research depends on personally identifiable information; (2) research that is and is not subject to current federal oversight requirements; (3) how the institutional review board (IRB) ensures the confidentiality of health information used in research; and (4) what steps organizations have taken to safeguard information. GAO noted that: (1) the survey revealed that a considerable amount of health research relies on personally identifiable information; (2) while some of this research is subject to IRB review--either because it is federally supported or regulated research or because the organization voluntarily applies federal rules to all of its research--some of the organizations conduct records-based research that is not reviewed by an IRB; (3) the process of IRB review does not ensure the confidentiality of medical information used in research--primarily because the provisions of the Common Rule related to confidentiality are limited; (4) according to recent studies, the IRB system on the whole is strained; and (5) nevertheless, although external review of their research is limited, most of the organizations in GAO's study told GAO that they have various security safeguards in place to limit internal and external access to paper and electronic databases, and many say they have taken measures to ensure the anonymity of research and survey subjects.
Since Guam became a U.S. territory in 1898, the United States has maintained a significant military presence on the island to support and defend U.S. interests in the western Pacific Ocean region. Guam has been home to many different military units over the past 60 years and was especially active during the Vietnam War as a way-station for U.S. bombers. DOD currently controls about 27 percent of the island. According to the 2010 U.S. Census, Guam had a population of 159,358, an increase of 2.9 percent from the 2000 Census population of 154,805. DOD estimates that there are at least 16,400 military members and their dependents stationed on Guam. Most of the military members and dependents are attached to one of the two major military installations on the island: U.S. Naval Base Guam, located on the southwestern side of the island at Apra Harbor, and Andersen Air Force Base in the north (see figure 1). In 2004, the U.S. Secretaries of State and Defense and the Japanese Ministers of Foreign Affairs and Defense began a series of sustained security consultations aimed at strengthening the U.S.-Japan security alliance and better addressing the rapidly changing global security environment. The resulting U.S.-Japan Defense Policy Review Initiative established a framework for the future of the U.S. force structure in Japan and facilitated a continuing presence for U.S. forces in the Pacific theater, including the relocation of military units to Guam. The major realignment initiatives of the Defense Policy Review Initiative were ultimately outlined in May 2006 in a Security Consultative Committee document, the United States-Japan Roadmap for Realignment Implementation (2006 Roadmap)—under which the United States anticipated relocating approximately 8,000 Marines and their estimated 9,000 dependents from Okinawa, Japan, to Guam by 2014.
The 2006 Roadmap was subsequently modified by the Security Consultative Committee in April 2012 and DOD’s current plan is to relocate approximately 5,000 personnel (mostly rotational) and 1,300 dependents to Guam as soon as appropriate facilities are available to receive them. DOD is in the process of determining what military and public infrastructure facilities and live-fire training ranges are necessary to support the proposed reduced realignment plan on Guam, as well as Tinian and Pagan—islands that are part of the Commonwealth of the Northern Mariana Islands. Before any Marines can relocate to Guam, DOD must examine the environmental effects of its proposed actions, pursuant to the National Environmental Policy Act of 1969. To address this requirement in the past, DOD performed an environmental review of certain proposed actions under the original 2006 realignment plan and released the Guam and Commonwealth of the Northern Mariana Islands Military Relocation Final Environmental Impact Statement (EIS) in July 2010. In September 2010, the Department of the Navy announced in the Record of Decision for the Guam and Commonwealth of the Northern Mariana Islands Military Relocation that it would proceed with the Marine Corps realignment, but it deferred the selection of a specific site for a live-fire training range complex on Guam pending further study. In February 2012, the Department of the Navy gave notice that it intended to prepare a supplemental EIS to evaluate locations for a live-fire training range complex on Guam. In October 2012, as a result of the announcement of the revised realignment plan, the Department of the Navy gave notice that it was planning to expand the scope of the ongoing supplemental EIS to also evaluate potential environmental consequences from the construction and operation of the main Marine installation. 
According to DOD, the reduction in the number of Marines and dependents to be relocated to Guam led to a reduction in the amount of land needed for the main Marine installation area, enabling the Navy to identify and consider alternatives other than those it had previously analyzed for the 2010 EIS. The expanded supplemental EIS is expected to have three major components:

1. Further evaluation of possible locations for the establishment of a Marine Corps live-fire training range complex on Guam, to include locations on Naval Computer and Telecommunications Station Finegayan, Naval Magazine Guam, and the northwest part of Andersen Air Force Base, among other locations.

2. Determination of the potential environmental consequences of constructing and operating a main Marine Corps installation at several possible locations on Guam: Naval Base Guam, Naval Computer and Telecommunications Station Finegayan, South Finegayan, federal land in the village of Barrigada, and Andersen Air Force Base.

3. Assessment of associated impacts to Guam's public infrastructure.

According to Marine Corps officials, the supplemental EIS is expected to be drafted by 2014, and DOD anticipates that a final decision on all matters being evaluated will be released by 2015 via a record of decision. Several federal agencies have been assisting the Government of Guam in planning and preparing for the realignment of U.S. forces (see figure 2). Within DOD, three organizations have been assisting Guam to prepare for the military realignment. The Office of Economic Adjustment (OEA) is a DOD field activity that reports to the Deputy Under Secretary of Defense for Installations and Environment, within the office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. The office facilitates DOD resources in support of local programs and provides direct planning and financial assistance to communities and states seeking assistance to address the impacts of DOD's actions.
OEA’s assistance is primarily focused on helping growth communities, commonly referred to as “defense-affected” communities, organize and plan for population growth resulting from DOD activities. The Joint Guam Program Office (JGPO) is the DOD office primarily engaged in developing and implementing the military realignment plans. JGPO is a Navy staff office under the direct oversight of the Assistant Secretary of the Navy for Installations and Environment. Specifically, JGPO is leading the coordinated planning efforts among the DOD components and other stakeholders to consolidate, optimize, and integrate the existing DOD infrastructure capabilities on Guam. JGPO also leads the effort to develop the ongoing supplemental EIS. The Naval Facilities Engineering Command contracts for the military construction on Guam and, as the Navy’s primary facilities and utilities engineering command, is also helping to prepare the supplemental EIS. The Secretary of the Interior has administrative responsibility for coordinating federal policy for U.S. insular areas, including Guam, regarding all matters that do not fall within the programmatic responsibility of another federal department or agency. Within the Department of the Interior, the Office of Insular Affairs executes these responsibilities. Part of the Office’s mission is to empower insular communities by improving the quality of life, creating economic opportunity, and promoting efficient and effective governance. The Federal Regional Council—Region IX is a consortium of 19 federal departments and agencies that oversee federal activities in four western states and the Outer Pacific Islands, including Guam. Federal Regional Councils were established to provide a structure for interagency and intergovernmental cooperation.
Membership includes regional representatives from the Departments of Agriculture, Commerce, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Justice, Labor, the Interior, Transportation, Veterans Affairs, and the Environmental Protection Agency (EPA). The goal of the Federal Regional Council is for federal departments in Region IX to work in a coordinated manner to make federal programs more effective and efficient, through the establishment of task forces and development of reports on issues of concern in the region. The Federal Regional Council meets monthly and has six committees focused on broad geographic areas and special populations in the vast geographic area of Region IX. One of the committees, the Outer Pacific Committee, contains the Guam Buildup Committee/Task Force. The buildup task force's mission is to help Guam develop a financial assistance strategy and serve as a communication liaison regarding local needs on Guam and federal budget decision-making. Guam became a U.S. possession in 1898, initially placed under the control of the U.S. Navy. The Guam Organic Act of 1950 conferred U.S. citizenship on Guamanians and established the territory's government. Guam's government is organized into three branches: executive, legislative, and judicial. The executive branch is led by the territory's highest elected officials: the governor and lieutenant governor. These officials implement Guam's laws through the departments, bureaus, agencies, and other entities that make up the executive branch of the Government of Guam, such as the departments of public health and social services and education. The legislative branch consists of a single-chamber legislature, presently with 15 members who are elected for 2-year terms. The judiciary consists of the Superior Court of Guam and the Supreme Court of Guam.
In addition, several autonomous agencies related to public infrastructure function as part of the Government of Guam:

The Guam Power Authority manages the generation, transmission, and distribution of electrical power on the island, including engineering, operation, and maintenance activities.

The Guam Waterworks Authority manages the engineering, operation, and maintenance of the public water and wastewater systems, including the sources, treatment, distribution, and storage.

The Port Authority of Guam operates and maintains the Port of Guam.

The Power and Waterworks authorities are governed by an elected, non-partisan, five-member Consolidated Commission on Utilities. The Port of Guam is presided over by five board members appointed by the Governor of Guam with the advice and consent of the legislature. Each of the agencies collects fees for its services and is able to issue bonds based on these fees and other revenue to finance infrastructure improvements. While some investments have been made to improve Guam's public infrastructure in recent years, many deficiencies continue to exist. The reliability, capacity utilization, and age of much of Guam's public infrastructure indicate a need for additional upgrades to ensure that Guam can meet the demands of its current and future population, regardless of how many Marines and dependents are moved to Guam. For example, existing utility systems—electric power generation, potable water production, and wastewater collection and treatment—are largely operating at or near their maximum capacities and will require infrastructure improvements to meet any increase in demand. In addition, some of Guam's public infrastructure sectors, such as its Waterworks Authority, face issues complying with federal regulations. Other sectors, such as the fire and police departments, are experiencing shortages in infrastructure, vehicles, and staffing.
According to JGPO officials, they intend to perform assessments to determine what improvements are needed by Guam’s public infrastructure to support the current realignment plan during DOD’s development of the supplemental EIS expected to be completed by 2015. A discussion of DOD’s actions to assess Guam’s public infrastructure is presented later in the report. Guam’s electric power system has experienced reliability problems, which have resulted in power outages, and is reliant on aging generators approaching the end of their life expectancy. The Guam Power Authority has made investments in its infrastructure to address some of these reliability problems. For example, it secured $206.5 million in bond financing in fiscal year 2010 to construct a new administration building and to make various generation, transmission, and distribution facility improvements. However, during our April 2013 visit to Guam, Power Authority officials indicated that system reliability continues to be a major concern because the Authority is not able to meet all of its operation and maintenance needs and may not be able to invest in its generators at appropriate levels due to diminished revenues. Officials also noted that multiple improvements are needed to the Authority’s peaking and emergency generators, but such improvements will have to be deferred until revenues improve—which directly affects the Authority’s ability to reduce customer outage duration and frequency. The electrical system’s reliability and age have led to five island-wide power blackouts since November 2010. On November 3, 2010, a power outage occurred due to a line that fell at a substation. This outage created a chain reaction that resulted in an island-wide blackout. Power was fully restored after 7 hours. On May 9, 2011, a power outage occurred due to a corroded static line that fell on the switchyard. The result of the outage was an island- wide blackout. Power was fully restored after 5 hours. 
On June 4, 2011, a power outage occurred due to a damaged control air pipe at the Marianas Energy Company, an independent power producer, which resulted in an island-wide blackout. Power was fully restored after 2.5 hours. On June 6, 2013, a power outage occurred due to a fault in the system that originated within the Dededo combustion turbine. The result of the outage was an island-wide blackout. Power was fully restored within 6 hours. On July 11, 2013, a power outage occurred due to a generator going off line. The loss of this generator and subsequent issues with the power generation system led to the outage. The result of the outage was an island-wide blackout. Power was fully restored in about 6.5 hours. The concerns expressed by Authority officials are consistent with the findings of a 2012 Department of the Interior Inspector General report. That report found that Guam is susceptible to power blackouts and noted that about a quarter of the Power Authority’s generation units were installed before 1976 (see figure 3 for a photograph of an electrical power station location on Guam). The report concluded that if the Authority had to replace its entire aging infrastructure at the same time, it would require a large financial investment. The Power Authority provides all of the electricity on the island for both the public and DOD, with DOD, the Authority’s largest customer, accounting for 22 percent of the Authority’s fiscal year 2012 revenues. In terms of infrastructure needed for the realignment, Guam Power Authority officials told us that although the Authority has enough installed capacity (i.e., the capacity the generation units were built to produce) to meet DOD’s realignment needs for electricity generation, some of the units are not operational without major repairs or improvements, which will likely have to be made by the Authority. 
In addition, Authority officials told us improvements to the transmission system, such as additional substations and transmission lines, will need to be made to accommodate the revised realignment plan and will likely need to be funded by DOD. Guam’s water and wastewater treatment systems have a number of deficiencies as a result of natural disasters, poor maintenance, and vandalism. Although the Guam Waterworks Authority invested more than $158 million in improvements to its water and wastewater systems over the last 10 years, the Authority continues to operate under an order issued by the U.S. District Court for the District of Guam requiring various treatment and infrastructure improvements because of issues related to compliance with the Safe Drinking Water Act and the Clean Water Act. Potable water: According to Waterworks officials, Guam’s potable water system is currently in noncompliance with the Safe Drinking Water Act. The unreliable drinking water distribution system has historically resulted in bacterial contamination from sewage spills, causing “boil water” notices to be sent to residents. According to a 2012 EPA report, many of the potable water facilities on the island are in poor operating condition as a result of minimal preventive and corrective maintenance. For example, several of the finished water storage tanks do not perform many of the normal functions of a well-designed and well-operated water system, and most of the storage tanks are old and deteriorating, have openings and/or leaks, and are susceptible to contamination. According to the EPA, part of the water supply problem stems from some of the water system’s old pipes. Distribution lines are repeatedly patched—with some single lengths of pipe having up to 7 patches—instead of being replaced. As a result of problems with distribution lines and maintenance, among other issues, the EPA estimates the Waterworks Authority’s water loss rate is about 50 percent. 
According to EPA, studies indicate that the national average water loss rate is about 14 percent. Wastewater: According to the DOD Inspector General, Guam’s existing wastewater plants do not meet primary treatment standards and lack sufficient capacity due to the poor condition of existing assets. For example, the Northern District Wastewater Treatment Plant (see figure 4) has a legacy of deferred maintenance and minimal capital improvements that has caused its systems to slowly deteriorate over the years. In addition to not meeting primary treatment standards, according to the EPA, Guam’s wastewater facilities do not meet the requirements of their secondary treatment permits. Since 1986, Guam has had variances under section 301(h) of the Clean Water Act, allowing it to discharge primary treated wastewater to Hagatna Bay and the Philippine Sea. However, in November 2011, the EPA disallowed the variances and therefore established full secondary treatment requirements at both the Northern District wastewater treatment plant and the Hagatna plant in the island’s central region. According to the Chairman of the Guam Consolidated Commission on Utilities, achieving full secondary treatment at both plants will require between $300 million and $500 million in infrastructure improvements and, if funded by Guam alone, would necessitate rate increases that could potentially lead to average monthly water bills of $250 by 2020—double the current average. According to the Chairman, the Commission is currently negotiating with EPA on timelines for achieving secondary treatment and hopes to extend the timelines so as to allow more time to obtain additional funding. In terms of supporting the current military presence on Guam, the Waterworks Authority provides wastewater services to Andersen Air Force Base (including Northwest Field), Naval Computer and Telecommunications Station–Guam, and South Finegayan Navy housing. 
Naval Base Guam handles all of its own wastewater needs, and both the Navy and Air Force get their potable water from their own wells and the Fena Reservoir. Therefore, DOD accounted for only 2.2 percent of the Authority’s fiscal year 2012 revenues. However, according to representatives of Guam’s legislature and the Chairman of the Consolidated Commission on Utilities, the Guam Waterworks Authority is operating near capacity and cannot meet any surge in demand related to the realignment without significant infrastructure improvements. According to the Port Authority of Guam and DOD officials, the Port of Guam (see figure 5) is currently outdated, in need of repairs, and requires expansion in order to support the realignment. Of particular concern is the port wharf. A 2012 DOD Inspector General report found that the structural integrity of the commercial wharf, which includes the port’s six berths, is compromised and at risk of failure. The current state of the wharf was caused by a lack of adequate repairs to damage from earthquakes, corrosion, and stresses from ships and cargo-handling equipment. There have been multiple earthquakes on Guam, with the most devastating taking place in August 1993. As a result of this earthquake, the island sustained massive devastation, with significant damage to one of the port’s berths, which required major reconstruction. Although other port berths were also damaged, they were not reconstructed. The DOD Inspector General also found multiple continuing defects that have been documented in various reports and surveys performed on the structural integrity of the wharf bulkhead at the Port—the bulkhead is the vertical face (or wall) of the wharf along which ships are berthed. The reports and surveys indicate that the bulkhead was damaged both above and below the water. Above the waterline, the sides and surface show cracks and separation, while underwater there is extensive damage to the concrete bulkhead. 
Both DOD and the Government of Guam have identified the Port of Guam as a potential choke point as the realignment moves forward, since all materials needed for both military and public construction projects will be transported to Guam by sea and enter through the Port. Port Authority officials told us that the Port has not been modernized since it was constructed in the 1960s and that, typically, most ports are modernized every 20 years. According to Port Authority officials, to accommodate the realignment, the Port requires building modifications, selected modifications to yards where cargo is offloaded, and facility expansion, as well as significant structural integrity improvements to the wharf. For example, the port requires substantial maintenance due to corrosive ocean waters, typhoons, earthquakes, and years of maintenance neglect. Some of the specific improvements identified include site expansion, utilities upgrades, bulkhead fortification, and building renovations. In 2010, DOD provided the Port $50 million in funding to begin some of these improvements. According to DOD officials, this $50 million is the amount needed to directly address the requirements of the realignment and will be used for building modifications and modifications to the yard where cargo is offloaded, as well as expansion of selected port facilities. In addition, the Port Authority continues to seek non-federal funding sources to allow for successful operations and execution of its mission. For example, according to Port Authority officials, the Authority recently obtained a $10 million commercial loan to address pier service life extension, financial management system upgrades, and a cargo handling crane purchase and has proposed an across-the-board tariff increase of 5.65 percent to increase revenues. 
However, according to Port Authority officials, if future military activity requirements extend beyond the Authority’s current planned upgrades, there may be a need for additional federal support to accommodate the increased capacity requirements. According to OEA officials, Guam’s current landfill is environmentally compliant, with sufficient capacity to meet current solid waste disposal needs and sufficient expansion capacity to meet future needs related to the realignment. Previously, the EPA found Guam’s public solid waste operations to be in violation of the Clean Water Act, as the Ordot Dump facility, located in the center of the island, was discharging contaminants into the Lonfit River. As a result of a lack of remediation and other actions on the part of the Government of Guam in response to the contamination, in March 2008, the U.S. District Court placed the public solid waste operations under the control of an appointed receiver. Since the court order, the receiver has opened a new public landfill and ceased operations at the Ordot Dump facility. According to the Government of Guam, the new landfill bans certain types of waste, including construction and demolition waste. As a result, future organic and realignment solid waste disposal needs will require the government to continue to develop systems to handle landfill-banned waste and to construct and open new solid waste disposal areas at the landfill. Historically, the Government of Guam and DOD used separate solid waste facilities. The Government of Guam disposed of all civilian waste at the Ordot Dump facility, and DOD disposed of its solid waste in one of two DOD-operated landfill sites—Andersen Air Force Base and Naval Base Guam. However, the two DOD-operated landfill sites are almost at capacity. DOD has begun sending its solid waste to the new public landfill, paying the current rates set by the receiver. 
The Government of Guam reported, as part of the ongoing supplemental EIS, that its public health system is undersized for the population it serves and is experiencing staff shortages. The following are examples: The Guam Memorial Hospital Authority, Guam’s only public civilian hospital, is often over capacity, a problem exacerbated by the fact that it usually does not have enough nursing staff to operate all of its available acute care beds. According to Guam Memorial Hospital Authority officials’ response to a questionnaire conducted for the supplemental EIS, Guam needs approximately 500 acute care beds to fully meet the island’s needs under national hospital standards, and the Guam Memorial Hospital Authority provides the Guam community with only 162 of those acute care beds. Therefore, according to these officials, the shortage can only be addressed by expanding the existing Guam Memorial Hospital, building a much larger replacement public civilian hospital, or completing the new private hospital whose construction is currently underway. Though there are no plans to build a new public civilian hospital at this time, the Guam Memorial Hospital Authority is in the process of implementing its 2013 Strategic Plan, which includes identifying the Authority’s future expansion needs. The Government of Guam has tried to address some of the hospital’s space issues. For example, in fiscal year 2009, it secured $11 million in bond financing to fund certain infrastructure improvements for the Guam Memorial Hospital Authority, including the expansion and renovation of the emergency department and critical care/intensive care unit, the upgrading of its pharmacy department, and the modernization of two hospital elevators. Guam’s current mental health and substance abuse facility also faces issues with meeting standards of care. In 2004, the U.S. 
District Court issued a permanent injunction against Guam’s Department of Mental Health and Substance Abuse and various Guam officials to remedy deficiencies in the care of the mentally ill and developmentally disabled that violated statutory and constitutional standards of care. To achieve compliance with the injunction and address continuing problems, the court appointed a federal management team in 2010 and gave the team control over Guam’s mental health agencies to remedy the deficiencies. According to Government of Guam officials, as a result of the injunction, the Guam Behavioral Health and Wellness Center had to hire additional staff and implement several new substance abuse treatment programs. The officials further explained that to fully implement the mandates of the injunction, a new mental health facility will need to be constructed. On August 22, 2012, the U.S. District Court established a transition period for the return of duties and powers from the federal management team. In January 2013, the federal management team and Guam Behavioral Health and Wellness Center officials presented a transition plan to the court, and control was transferred to the Guam Behavioral Health and Wellness Center in February 2013. According to Government of Guam officials, the Guam Behavioral Health and Wellness Center continues to report on a quarterly basis to the U.S. District Court. In addition to these infrastructure challenges, officials identified a number of challenges related to staffing. For example, Guam has experienced difficulty in recruiting and retaining an adequate number of health care personnel. According to the U.S. Department of Health and Human Services, Guam has been designated as a medically underserved area. Medically underserved areas are areas designated as having too few primary care providers, high infant mortality, high poverty, and/or a high elderly population. 
Likewise, Guam also qualifies as a health professional shortage area, which is a geographic area, population group, or health care facility that has been designated by the federal government as having a shortage of health professionals. According to Guam public health officials, because of this designation, certain health professionals (e.g., nurses, mid-level providers, chiropractors, dentists, and psychologists) can apply to work at Guam medical facilities and have the federal government pay for relocation costs and school loans. Generally, it is a 4-year program, and people stay for the length of the term but then move away, resulting in turnover that makes it difficult to provide stable care. As we previously found, this is particularly true for insular areas such as Guam because citizens of insular areas are free to migrate to the United States, making it difficult to retain highly educated or skilled workers. Military personnel and their dependents generally do not use Guam’s health facilities, other than the occasional emergency room visit. However, the Government of Guam anticipates that any DOD civilians or migrant and construction workers associated with the realignment would use the facilities. Guam officials also told us that the island lacks a Centers for Disease Control and Prevention level 2 public health lab. Because Guam is expected to become a focal point for regional job seekers and foreign construction workers under any realignment scenario, officials told us the island must have the ability to test for and contain various communicable diseases, due to this increase in migration. Government of Guam officials told us that, currently, the nearest lab is in Hawaii and many times samples are spoiled and not testable by the time they arrive in Hawaii from Guam. The Guam Police Department is experiencing deficiencies in infrastructure, vehicles, and staffing. 
In terms of infrastructure, according to Police Department officials, although its four precinct buildings are in good to fair condition, the Police Department does not have a permanent headquarters building or location. The current police headquarters is located on land owned by the Guam International Airport Authority, which wants the Police Department to vacate the facilities so it can redevelop the property. In addition, according to Guam officials, the adult corrections facility is in poor physical condition, overcrowded, poorly designed, and inefficient. The officials also noted that the judicial center needs at least two additional courtrooms to support current needs and normal population growth. In addition to these infrastructure challenges, Police Department officials identified a number of challenges in serving the public because of limited equipment and staffing. According to Guam Police Department officials, the Guam Police Department does not have enough vehicles to fully equip all shifts and keep vehicles in reserve for downtime. Police Department officials estimate they need 18 more patrol vehicles to address the vehicle shortage. Likewise, Guam Police Department officials estimated that they need about 160 additional officers to appropriately serve the public. According to Guam Public Law, each village must have a minimum of 2 police officers capable of patrolling and responding to calls at all times, and 1 additional officer is required for each additional 2,000 residents for each shift. Therefore, according to the Department of the Interior Inspector General, the Guam Police Department should have 464 patrol officers to cover all of its precincts—it currently has 304. According to Guam Police Department officials, the personnel shortfall has caused the department to exceed its overtime budget annually due to excessive overtime work needed to sustain its operations. 
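The patrol-staffing rule above (a 2-officer minimum per village per shift, plus 1 additional officer per additional 2,000 residents) can be sketched as a simple calculation. The village populations and the three-shift assumption below are hypothetical, for illustration only; the Interior Inspector General's 464-officer figure was derived from Guam's actual village populations.

```python
def officers_required(village_populations, base_per_village=2,
                      residents_per_extra_officer=2000, shifts=3):
    """Illustrative reading of the Guam Public Law staffing rule:
    each village gets a 2-officer minimum per shift, plus one extra
    officer per additional 2,000 residents, per shift.
    Populations and shift count are hypothetical assumptions."""
    total = 0
    for pop in village_populations:
        extra = pop // residents_per_extra_officer  # 1 extra per 2,000 residents
        total += (base_per_village + extra) * shifts
    return total

# Hypothetical example: villages of 1,500, 4,200, and 9,800 residents
# (2+0) + (2+2) + (2+4) officers per shift, times 3 shifts
print(officers_required([1500, 4200, 9800]))  # prints 36
```

Summed over Guam's 19 villages and all shifts, a calculation of this form is how a department-wide requirement such as the 464-officer figure would be produced.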
More recently, in a questionnaire conducted for the ongoing supplemental EIS, Police Department officials stated that prolonged work hours and excessive workload are causing fatigue and unhealthy physical conditions among the personnel. To address its staffing shortage, the department is currently deputizing civilians. According to Guam Police Department officials, the Guam Police Department is training these civilians and giving them law enforcement authority to perform as police officers, but without compensation. The program has over 100 volunteer members and is growing. Military personnel, their dependents, and any contractors or construction workers arriving on Guam because of the realignment would need to rely on the Guam Police Department in case of emergencies. In addition, any person—including military members and their dependents—who is cited or arrested by the Guam Police Department for violating local laws off-base would be processed or prosecuted through Guam’s legal system. As such, Police Department officials told us that any population increase associated with the revised realignment plan would exacerbate the current infrastructure, vehicle, and staffing challenges the department is experiencing and could potentially create new ones as shifts in traffic patterns and land use occur, for example, because of new commercial development and higher-density housing. Like the Guam Police Department, Guam’s Fire Department is experiencing deficiencies in infrastructure, vehicles, and staffing. In terms of infrastructure, Fire Department officials told us that the department is currently leasing office space, as it does not have a permanent headquarters location. In addition, according to a 2012 Department of the Interior Inspector General report, the Guam Fire Department does not have enough ambulances to service Guam and does not have any reserve vehicles in its fleet. 
At the time of that report, the Fire Department owned 15 ambulances, and of those, only 3 were in service. Further, according to the report, there has been at least one documented occasion on which the Fire Department had only 1 ambulance to service the entire island. The report also noted that although the Fire Department owns 12 fire trucks, none have ladders with high-rise capabilities to service hotels and other high-rise structures on the island. In addition to these infrastructure and equipment challenges, Fire Department officials identified a number of challenges related to staffing. For example, Fire Department officials told us that the department does not have enough staff to meet its current needs and that its staffing numbers have dropped below the National Fire Protection Association safety standards, which require a minimum of four personnel on a fire truck. The Fire Department currently has about 250 uniformed firefighters. According to the Chief of the Fire Department, the department requires about 72 additional firefighters in order to satisfy the National Fire Protection Association standard of 5 to 6 on-duty personnel per engine company. Officials stated that they are slowly trying to address their personnel issues and have received funding from the Government of Guam to hire more firefighters. For example, in fiscal year 2013 the Government of Guam provided the Fire Department approximately $1.8 million for hiring personnel, and the Fire Department added 28 new firefighter recruits. Military personnel and their dependents living off base and any contractors or construction workers associated with the realignment would require the services of the Guam Fire Department for emergencies. As such, like the Guam Police Department, Fire Department officials told us that any population increase associated with the revised realignment would exacerbate the current infrastructure, vehicle, and staffing challenges the Fire Department is experiencing. 
The Guam Department of Education, citing continuing budget constraints, has been challenged in meeting its requirements to effectively maintain its facilities and provide adequate staffing, buses, and supplies. In terms of infrastructure, the Department of Education, through a public/private partnership, constructed five new schools. Additionally, the Government of Guam has secured funds for school infrastructure improvements through bond financing. For example, the Government of Guam in fiscal year 2010 secured approximately $50 million in bond financing for the construction of a new high school and in fiscal year 2007 secured approximately $27 million in bond financing for school improvements, including Americans with Disabilities Act compliance, asbestos abatement, security and fire alarm systems installation, and other improvements. The Guam Department of Education also received $75.7 million in fiscal year 2010 through the American Recovery and Reinvestment Act State Fiscal Stabilization Fund. The funding received through the American Recovery and Reinvestment Act was focused on improving existing facilities. A significant portion of the funding went to repairing the roofing at all of the schools, upgrading electrical and fire alarm systems, replacing air conditioning units, and renovating a middle school. The Army Corps of Engineers recently completed a study commissioned by the Department of the Interior which estimated $90 million in deferred maintenance costs. The Guam Department of Education has been working with Guam’s Legislature and Governor’s Office to identify funding sources to repair and renovate aging school facilities. Although efforts have been made to improve the Department of Education’s infrastructure, the department continues to face staffing challenges. 
According to Department of Education officials, with an extremely limited pool of applicants, the supply of fully certified, highly qualified teachers on Guam continues to be an issue, since teachers on the island can apply to and be hired by the Department of Defense school system. In addition, Guam officials also noted that the school system is experiencing a shortage of school buses and that each of its buses averages five trips per day to transport the island’s children to and from school. Generally, children of military service members and DOD civilians attend DOD schools, but it is anticipated that any children of temporary workers associated with the realignment would attend Guam schools. Historically, the majority of DOD’s support to defense-affected communities has been to provide technical assistance and support community planning and coordination efforts due to Base Realignment and Closure (BRAC) decisions. However, OEA officials identified a few examples in the past where DOD has provided direct funding to defense-affected communities to provide additional capacity specifically needed to support DOD growth. DOD’s position has been that existing federal programs should be leveraged as much as possible to pay for public infrastructure needs and that local communities should largely be responsible for obtaining funding for their public infrastructure requirements. The Government of Guam has obtained non-DOD federal funding for some public infrastructure projects through several federal programs, such as a grant from the EPA to improve the water system. In addition, local communities can also raise their own funds for public infrastructure projects. However, in the case of Guam, some challenges have been identified as affecting its ability to raise funds for such projects. OEA is the primary DOD office responsible for providing assistance to communities, regions, and states affected by significant DOD program changes. 
A majority of OEA’s support to defense-affected communities has been for community planning and coordination efforts because of BRAC decisions. For example, from 2005 through 2012, OEA provided $76 million in grants to communities affected by BRAC decisions for activities ranging from hiring planners and staff to developing land reuse and redevelopment plans. Much of OEA’s assistance in the past was directed toward communities that lost military and civilian personnel because of the closure or major realignment of a base. However, because the 2005 BRAC round and other DOD initiatives created significant growth at many bases, OEA also has assisted defense- affected communities with growth planning. For example, one defense- affected community used OEA funding to hire personnel, maintain offices, and conduct planning. Another community’s local redevelopment authority used OEA funding to hire dedicated professional staff and contract with a consultant to prepare a redevelopment plan. For each community it assists, OEA assigns a project manager who can provide assistance in a variety of ways. OEA can provide funds for hiring consultants to assist in developing a reuse plan, information on federal grant money or other available resources, and information on best practices used by other closure communities. In addition, OEA’s website provides reports containing lessons learned from other communities and information on other available resources and OEA is currently developing a community forum function on its website where community members can exchange ideas and learn from each other’s experiences. OEA has generally provided funding for technical assistance, but it also has provided public infrastructure funding to local communities. For example, OEA officials noted the public infrastructure funding associated with the construction of Trident submarine bases at Bangor, Washington, and Kings Bay, Georgia. 
In these two instances, DOD provided millions of dollars in funding for public infrastructure projects to the local communities surrounding Bangor and Kings Bay because DOD’s public infrastructure needs would exceed those already in place and serving the communities. During the 1970s, DOD decided to build submarine bases at Bangor and Kings Bay and determined that the subsequent growth would generate significant public infrastructure needs that the local communities could not support. In both cases, the expansion of the bases would require significant construction and result in the eventual influx of significant numbers of personnel to the surrounding communities, for which the local governments’ public infrastructure was generally inadequate. Congress authorized the Secretary of Defense to provide financial assistance to the local communities for the costs of providing increased municipal services and facilities. For both programs, DOD assigned OEA responsibility for program management. According to congressional documents, DOD reported that it provided approximately $55 million, in nominal dollars (i.e., not adjusted for inflation), to communities surrounding Bangor for infrastructure improvements in areas such as water resources, schools, fire protection, parks, roads, law and justice, social and health services, sewers, and libraries. According to DOD documents, DOD reported providing approximately $48 million, in nominal dollars, to communities surrounding Kings Bay for infrastructure improvements similar to those at Bangor, such as utility systems, elementary schools, a city hall, a fire station, and various public vehicles. Additionally, in some instances, Congress has made specific appropriations to OEA’s budget for public infrastructure improvements to assist affected communities. 
For example, in 2011, Congress appropriated $300 million for use by OEA for transportation infrastructure improvements associated with medical facilities related to recommendations of the BRAC Commission. Some of the communities receiving funding were Montgomery County, Maryland, for the construction of a pedestrian and bicycle underpass near the Walter Reed National Military Medical Center and the City of San Antonio, Texas, for the construction of a safer highway interchange near Brooke Army Medical Center. In addition, also in 2011, Congress appropriated $500 million in funding for the construction, renovation, repair, or expansion of public schools located on military installations to address capacity or facility condition deficiencies. As implemented by OEA, these funds were available for local educational agencies operating such public schools. According to OEA officials, DOD’s position is that local communities should largely be responsible for obtaining funding for public infrastructure requirements related to DOD basing decisions. This funding can come from other, non-DOD federal programs, with DOD advocating that existing federal programs should be leveraged as much as possible. Along these lines, several federal agencies have existing programs that have funded public infrastructure improvements on Guam in recent years. For example, EPA, which assists Territories under its Environmental Protection Consolidated Grants program, provided Guam with almost $6.8 million in fiscal year 2012 to fund drinking water and wastewater system improvements. The Department of the Interior’s Office of Insular Affairs provided Guam with over $6 million in fiscal year 2013 in capital improvement grants to fund a variety of infrastructure needs. Table 1 shows examples of public infrastructure programs for which Guam has received funding from non-DOD federal programs in the last few years. 
In addition to obtaining funding through non-DOD federal programs, local communities can also raise their own funds for public infrastructure projects. In the case of Guam, two key challenges have been identified as affecting its ability to raise funds. Specifically, according to Government of Guam officials, limited government revenues and limited debt capacity due to its statutory debt limitation hinder its ability to finance its public infrastructure projects. First, Guam has faced an operating deficit over the past few years, and current revenues are not sufficient to support operational requirements. The Governor of Guam told us that without a major increase in economic activity and the resulting increase in revenues, the administration will be unable to address additional public infrastructure requirements other than those necessary for basic operations and debt service requirements. Government of Guam officials explained that the major revenue challenges for the government are the inability of taxpayers to pay taxes, the inability of the government to access military bases to conduct random inspections to ensure military contractors and vendors are in compliance with Guam’s tax laws, and the large amount of DOD-controlled land on Guam that is not available for economic development. Two Department of the Interior reports identified that Guam’s operating revenue challenges are partly a result of poor tax collection efforts. At the time of those reports, the Department of the Interior estimated that persistent deficiencies in Guam’s tax collection process were resulting in lost tax revenues of at least $23.5 million each year. Guam officials told us that since those reports were issued they have taken steps to address the findings in the reports and improve their tax collection efforts but that taxpayers’ inability to pay will always be a challenge.
We have previously found that although communities near military growth locations can face growth-related challenges in the short term, such as challenges in providing additional infrastructure, they can expect to realize economic benefits in the long term, such as increased revenue. An increase in military and federal civilian employees on Guam stemming from the realignment may be a potential source of additional revenue. For instance, Guam receives federal income taxes paid by military and civilian employees of the U.S. government stationed in Guam. Under section 30 of the Organic Act of Guam and a related statute, the Internal Revenue Service reimburses Guam for the income taxes it collects from federal civilian and military personnel assigned to Guam. The Internal Revenue Service pays section 30 funding to Guam annually. The money represents the income tax paid by federal employees and military service members who work on Guam but is not collected locally. This amounted to $52 million in 2010, and this amount is expected to increase with the realignment, thereby providing the Government of Guam with increased revenue. However, Guam officials told us they were concerned that since the composition of the Marines to be relocated to Guam under the revised realignment plan would be mostly rotational, they would not be reimbursed for the income taxes, since these personnel may be stationed on Guam for less than 6 months. In response to these concerns, DOD recently announced procedures to account for and reimburse to Guam income tax paid by all U.S. Marines—whether part of a rotational unit or permanently stationed on island. With DOD’s announcement, according to Government of Guam officials, Marines stationed in Guam will be included under section 30, regardless of how long they are on Guam, thereby providing the Government of Guam with additional revenue. The second challenge identified as affecting Guam’s ability to raise funds is its statutory debt limitation.
The Government of Guam’s ability to borrow funds to help pay for public infrastructure projects and programs related to the realignment may be constrained because of a statutory debt limitation contained in the Organic Act of Guam, depending on the form and terms of the prospective debt. Section 11 of the act places a limitation on government borrowing, limiting Guam’s public indebtedness to no more than 10 percent of the aggregate tax valuation of property on Guam. However, not all government obligations are included in the debt ceiling. For instance, section 11 of the Organic Act notes that bonds or other obligations of the Government of Guam payable solely from revenues derived from any public improvement or undertaking shall not be considered public indebtedness as defined in the Organic Act of Guam. As such, they would not be counted toward the government’s statutory debt limitation. However, whether certain obligations fall within this exception and should not be included in the Government of Guam’s debt limit calculation has generally been a highly litigated issue and may be determined on a case-by-case basis by the Guam courts. Until the Government of Guam has determined and decided on the form and terms of the debt it plans to incur to help fund off-base projects and programs related to the realignment, it is unknown what effect this debt limitation provision will have on its ability to incur debt for the purposes of the realignment. Despite these challenges, the Government of Guam has been able to obtain funding through issuing bonds in the past. For example, in December 2011, the Government of Guam successfully issued $235 million in bonds to pay unpaid tax refunds and past due cost of living allowances to certain retired government employees. These bonds were financed from revenues generated from the island’s business privilege tax.
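The section 11 debt-limit test described above can be sketched in code. This is an illustrative model only: all dollar figures are assumed, not actual Guam data, and, as noted, whether a given obligation qualifies for the revenue-bond exclusion is often litigated case by case rather than determined mechanically.

```python
# Hypothetical sketch of the Organic Act section 11 debt-limit test.
# Assumption: obligations payable solely from public-improvement revenues
# are excluded from the ceiling; all other obligations count against it.

DEBT_LIMIT_RATE = 0.10  # no more than 10 percent of aggregate tax valuation

def remaining_debt_capacity(aggregate_tax_valuation, obligations):
    """Return remaining borrowing capacity under the statutory ceiling.

    obligations: list of (amount, is_revenue_backed) tuples, where
    is_revenue_backed=True marks debt payable solely from project revenues
    (excluded from the ceiling under the section 11 exception).
    """
    ceiling = DEBT_LIMIT_RATE * aggregate_tax_valuation
    counted = sum(amount for amount, revenue_backed in obligations
                  if not revenue_backed)
    return ceiling - counted

# Illustrative numbers only (not actual Guam figures):
capacity = remaining_debt_capacity(
    aggregate_tax_valuation=12_000_000_000,   # assumed property valuation
    obligations=[
        (500_000_000, False),  # general-obligation debt counts
        (235_000_000, True),   # revenue-backed bond is excluded
    ],
)
```

Under these assumed figures, the revenue-backed bond does not reduce capacity at all, which is why the form and terms of each prospective bond matter so much to Guam's remaining headroom.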
Further, in April 2011, the Government of Guam successfully sold $90.7 million worth of bonds to construct a new Guam museum and to fund other projects that benefit Guam’s tourism industry, such as the restoration of a community center and bell tower and the construction of a historic monument and plaza to commemorate Ferdinand Magellan’s visit to Guam. These bonds were financed from revenues generated from Guam’s hotel occupancy tax. These successful bond offerings demonstrate that a market may exist among investors for the Government of Guam’s debt, which could be a potential source of funding for its necessary public infrastructure improvements. Further, Guam’s autonomous government agencies related to public infrastructure—the Power, Waterworks, and Port Authorities—have the ability to issue bonds for infrastructure improvements. Bonds issued by autonomous agencies are often backed by the agencies’ own revenue streams, such as customer rates. Guam officials cautioned, however, that there is a limit to how much they can raise rates on their customers to increase revenue, particularly since, for some utilities, Guam already has relatively high rates compared with other insular areas and Hawaii. Like other bonds issued by the Government of Guam, the determination of whether a bond issued by one of these agencies would count against the Government of Guam’s statutory debt ceiling also depends upon the form and terms of the debt and can be a highly litigated issue. DOD has requested funding from Congress for projects to improve Guam’s public infrastructure. However, the projects included in these budget requests were validated based on the 2006 realignment plan, and DOD has not revalidated public infrastructure requirements for Guam to reflect the revised realignment plan or differentiated between requirements to address long-standing conditions in Guam’s public infrastructure and those specifically related to additional capacity for the realignment.
According to DOD, a revised list of Guam public infrastructure requirements and cost estimates based on the revised realignment plan, which calls for over 11,000 fewer people coming to Guam than the previous plan, will not be available until 2015, when DOD completes the supplemental EIS. Even so, DOD has requested over $400 million for Guam infrastructure projects in its budget requests for fiscal years 2012 through 2014. However, since these projects were originally validated on the basis of the 2006 realignment plan, it is uncertain whether they are still necessary, or necessary to the same extent, given the significant reduction in forces associated with the revised realignment plan and the fact that the potential effect has not been revalidated. Congress has restricted the use of funds until further information is provided related to the realignment plan and has imposed other restrictions on use of the funding. It is also unclear to what extent the projects specified in DOD’s budget requests are required to address additional capacity to accommodate the current realignment plan or to address long-standing deficiencies in Guam’s infrastructure, because DOD has not clearly differentiated between these two types of requirements. Although a list of public infrastructure projects was developed for the 2006 realignment plan, which envisioned approximately 17,600 people relocating to Guam, Joint Guam Program Office officials stated that a revised list of Guam public infrastructure requirements and cost estimates based on the current realignment plan, which envisions approximately 6,300 people, will not be available until sometime in 2015, when DOD completes the ongoing supplemental EIS.
In February 2010, after the original realignment plan was announced, the Deputy Secretary of Defense chaired a meeting of the Economic Adjustment Committee, the goal of which was to develop a Guam public infrastructure funding plan for the original realignment. According to OEA officials, the Economic Adjustment Committee divided this task between a public infrastructure assessment team and a socioeconomic project assessment team. The public infrastructure team examined Guam’s water and wastewater system, port, solid waste, power system, and roads. The socioeconomic project team examined health care, education, cultural resources, emergency services, judicial services, and other public infrastructure throughout the island. Input to the assessments was initially provided by the Office of the Governor of Guam, working with the territory’s executive departments, which proposed specific projects within each infrastructure area for further consideration. Finally, a team composed of officials from federal agencies with purview over one or more of the identified infrastructure areas validated the need, scope, and funding required for each public infrastructure project. Ultimately, the Economic Adjustment Committee developed a list of validated projects needed to prepare Guam for the original realignment plan, and these projects were subsequently included in DOD’s budget requests (see table 2). The Economic Adjustment Committee considered other projects but did not include them on the validated list and, as a result, those projects were not included in DOD budget requests. DOD has requested over $400 million to fund Guam public infrastructure projects in DOD’s budget requests for fiscal years 2012 through 2014. Because OEA is the primary DOD office responsible for providing assistance to communities, regions, and states affected by significant DOD program changes, DOD included these projects in OEA’s budget requests.
Table 2 provides additional details regarding the requests and associated infrastructure projects and justifications. In response to DOD’s requests for Guam public infrastructure funding, Congress has appropriated some funds, but it has placed limitations on their use. For example, in 2011, Congress appropriated $33 million to DOD, acting through OEA, to assist the civilian population of Guam in response to the military realignment. However, this funding was subject to restrictions on the expenditure of funds for military and public infrastructure projects in Guam related to the realignment of Marine Corps forces from Okinawa to Guam. The National Defense Authorization Act for Fiscal Year 2013 contained similar restrictions, and in the Consolidated and Further Continuing Appropriations Act, 2013, Congress rescinded $21 million of the $33 million appropriated to DOD for fiscal year 2012 for Guam. OEA requested $139.4 million for public infrastructure projects on Guam for fiscal year 2013. Congress appropriated $243.4 million for OEA for fiscal year 2013 but provided no authorization for use of the funding for public infrastructure projects on Guam. Consequently, according to an OEA official, because DOD did not have the authority to spend the funds for Guam, $119.4 million was reprogrammed in July 2013 to address shortcomings elsewhere in DOD. As a result, these funds are no longer available to DOD for Guam public infrastructure projects. These congressional actions have implications for DOD’s fiscal year 2014 budget request. For example, in its fiscal year 2014 budget request, DOD requested $273.3 million to fund improvements to the water treatment system on Guam. These funds were intended to fund the second phase of those improvements, as DOD’s expectation was that the $106.4 million requested for fiscal year 2013 would have funded the first phase.
While DOD is awaiting congressional action on its fiscal year 2014 budget request, it appears that DOD’s request is in advance of need, since the first phase of the water and wastewater treatment improvements was never funded and the funds were reprogrammed. As of October 2014, bills pending in Congress varied on the extension of the restriction on the use of funds for the realignment of Marines to Guam, including the restriction related to public infrastructure. While the House bill for the National Defense Authorization Act for Fiscal Year 2014 would repeal the restriction from the previous year, the version of the bill reported by the Senate Armed Services Committee included an extension of the restriction on the use of funds to implement the realignment and public infrastructure funding. DOD has not revalidated the projects identified in its budget requests to reflect the smaller DOD population associated with the revised realignment plan. As a result, it is unclear to what extent these projects are still needed or are scoped appropriately, given the reduced numbers of Marines slated to relocate to Guam. OEA officials said that some of these projects, such as the artifact repository, should not be affected by the change in the realignment plans because the repository is needed to fulfill federal historic preservation requirements and would be required under either plan. However, it is unclear whether other projects, such as the water and wastewater improvements and the mental health facility, are still necessary, or necessary to the same extent, given the significant reduction in forces under the revised realignment plan and the as yet undetermined location of the main Marine Corps installation on the island. According to DOD officials, the projects initially validated by the Economic Adjustment Committee for the 2006 realignment plan and included in DOD budget requests will be reassessed based on the revised realignment plan as part of the supplemental EIS process to be completed in 2015.
DOD also has not clearly differentiated between requirements to address long-standing conditions in Guam’s public infrastructure and those to address increased capacity to support the new realignment plan for most sectors. As a result, it is unclear to what extent the public infrastructure projects in DOD’s budget requests are needed to support the realignment. For example, one of the possible locations for constructing and operating the main Marine installation being considered under the ongoing supplemental EIS is Naval Base Guam in the southern part of the island. However, this base handles all of its own wastewater needs and gets its potable water from its own wells and the Fena Reservoir, thus not requiring DOD to rely on the public water and wastewater systems. If this location is chosen, it would raise questions about the funding DOD has requested for making improvements to the water and wastewater treatment plant that DOD had justified by citing the need for additional capacity to support the additional troops associated with the realignment. Similarly, DOD has not estimated the extent to which the mental health facility or school bus acquisition projects would actually be used by personnel associated with the new realignment, none of whom were on Guam in 2012 or will be on Guam in 2013 or 2014 even though DOD cited the additional capacity associated with the realignment as a basis for its budget request. For the electricity sector, we found that DOD has taken steps to differentiate between requirements related to the realignment and those to address long-standing conditions. In February 2013, DOD asked the Guam Power Authority to model what upgrades would be needed to meet the increased demand associated with three of the possible five locations for constructing and operating the main Marine Corps installation. 
The Power Authority provided DOD with the specific electric transmission and distribution improvements that would be needed and their estimated costs which ranged from $25 million to $35 million depending on the location. However, according to our discussions with Government of Guam and DOD officials, DOD has not asked for similar analyses from other affected Guam agencies or begun a comprehensive analysis across all public infrastructure sectors to differentiate between requirements to address existing conditions and what is needed specifically to address additional capacity for the realignment. The Joint Guam Program Office and Naval Facilities Engineering Command officials told us that they are currently conducting assessments to reexamine and revalidate the need, scope, and funding required for all utilities and infrastructure projects during DOD’s development of the supplemental EIS. However, they were uncertain regarding the degree to which the supplemental EIS would fully differentiate between identifying projects that address existing Guam conditions and additional capacity for DOD requirements. Office of Management and Budget guidance containing best practices for cost estimating in the context of capital programming, which includes planning and budgeting, suggests that it is a best practice to continuously update the cost estimating process, based on the latest information available, to keep estimates current, accurate, and valid. 
In addition, GAO’s Cost Estimating and Assessment Guide states that cost estimates should have all cost inputs checked to verify that they are as accurate as possible and that estimates should be updated to reflect changes in requirements. While we acknowledge that DOD has not completed the supplemental EIS and developed an updated list of public infrastructure project requirements, DOD is requesting funds for existing Guam public infrastructure projects in its budget requests that it has not revalidated in light of changes to its realignment plans. Moreover, DOD has not conducted a comprehensive analysis to differentiate between requirements to address long-standing Guam public infrastructure deficiencies and extra capacity to support the realignment. Without such an analysis, DOD will not have the information to accurately identify the costs directly attributable to the realignment and to justify its budget requests to Congress to help pay for the portion of the projects that are attributable to the extra capacity to support the realignment. Both Guam and DOD officials also agreed that developing this type of information would better determine what amount of Guam public infrastructure improvements DOD should fund and what amount Guam should fund. Without this information, DOD cannot fully inform Congress of what funding is actually needed to support public infrastructure development under the revised realignment plan. The cost estimate DOD has used to support its budget requests for water and wastewater infrastructure projects on Guam did not fully adhere to best practices for developing a reliable cost estimate, which is at the core of successfully managing a project within cost and affordability guidelines. During the development of the EIS, DOD, the Guam Waterworks Authority, and the EPA cooperated to identify and prioritize water and wastewater projects island-wide that were necessary to support the 2006 Marine Corps realignment plan.
As part of this effort, DOD (as the EIS sponsor) paid for and EPA (as an EIS cooperating agency) managed a contract with an environmental firm for the development of a refined Guam water and wastewater infrastructure cost estimate. The contractor updated the original 2010 estimate on several occasions with the latest being in September 2012. That update indicates that approximately $1.3 billion in improvements are needed for Guam’s water and wastewater infrastructure to address existing deficiencies, including out-of-compliance facilities, as well as requirements to support the Marine Corps realignment. DOD used this cost estimate to support its fiscal year 2013 and 2014 budget requests for Guam water and wastewater improvements. However, when reviewing this cost estimate, we were unable to determine which projects within the $1.3 billion estimate were specifically for capacity increases due to the military realignment and associated with the fiscal years 2013 and 2014 budget requests. In assessing the estimate against best practices, we determined that this estimate is not reliable because it does not include all relevant costs, is based on limited data, and, as documented, lacks many of the key characteristics to be considered a reliable cost estimate. In addition, we found no evidence that actual costs were incorporated into the estimate and that risk and uncertainty were adequately assessed in the estimate. Office of Management and Budget (OMB) guidance containing best practices for cost estimating in the context of capital programming notes that a disciplined cost estimating process provides greater information management support, more accurate and timely cost estimates and improved risk assessments that will help to increase the credibility of capital programming cost estimates. Among other things, OMB’s guidance states that credible cost estimates are vital for sound management decision making and for any program or capital project to succeed. 
It further notes that early emphasis on cost estimating during the planning phase is critical to successful life cycle management of a program or project. Without such an estimate, agencies are at increased risk of experiencing cost overruns, missed deadlines, and performance shortfalls. Similarly, GAO’s Cost Estimating and Assessment Guide (GAO-09-3SP) presents best practices for developing reliable cost estimates and managing capital program costs. Such an estimate provides the basis for informed investment decision making, realistic budget formulation, and accountability for results. Furthermore, the guide indicates that these best practices can be organized into the four characteristics of a reliable cost estimate that management can use for making informed program and budget decisions. Specifically, an estimate is considered:

comprehensive when it accounts for all possible costs associated with a program, is structured in sufficient detail to ensure that costs are neither omitted nor double counted, and documents all cost-influencing assumptions;

well-documented when supporting documentation explains the process, sources, and methods used to create the estimate, contains the underlying data used to develop the estimate, and is adequately reviewed and approved by management;

accurate when it is not overly conservative or optimistic, is based on an assessment of the costs most likely to be incurred, and is regularly updated so that it always reflects the current status of the program; and

credible when any limitations of the analysis because of uncertainty or sensitivity surrounding data or assumptions are discussed, the estimate’s results are cross-checked, and an independent cost estimate is conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results.

Each of these four characteristics consists of several best practices (see appendix II for a summary of these practices and our Cost Estimating and Assessment Guide for more details on the individual best practices).
We evaluated the estimate against each of the individual best practices, assigning a score on a scale of 1 to 5 to indicate the degree to which the cost estimate met each best practice:

Not Met (1 point)—DOD provided no evidence that satisfies any portion of the best practice criterion.
Minimally Met (2 points)—DOD provided evidence that satisfies a small portion of the best practice criterion.
Partially Met (3 points)—DOD provided evidence that satisfies about half of the best practice criterion.
Substantially Met (4 points)—DOD provided evidence that satisfies a large portion of the best practice criterion.
Fully Met (5 points)—DOD provided complete evidence that satisfies the entire best practice criterion.

We then determined the overall assessment rating for each of the four characteristics by totaling the scores assigned to the individual best practices within each characteristic to derive an average score for that characteristic. The average scores fell into the following ranges:

Not Met = 0 to 1.4
Minimally Met = 1.5 to 2.4
Partially Met = 2.5 to 3.4
Substantially Met = 3.5 to 4.4
Fully Met = 4.5 to 5.0

Best practices assessed as not applicable were not given a score and were not included in our calculation of the overall assessment. To be considered reliable, an estimate must substantially or fully meet all four characteristics. We found that the water and wastewater cost estimate for Guam did not meet one of the characteristics and only minimally met the remaining three. As a result, we determined that the estimate is not reliable. Table 3 provides a summary of the results of GAO’s assessment of the Guam water and wastewater cost estimate based on cost estimating best practices. See appendix II for our complete analysis of the individual best practices for each of the characteristics.
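The rating roll-up described above can be sketched in a few lines of code: individual best practices are scored 1 to 5, not-applicable practices are excluded, the remaining scores are averaged within each characteristic, and the average is mapped to a rating band. The scores in the example are illustrative only, not GAO's actual assessment data.

```python
# Sketch of the GAO rating roll-up described above. Cutoffs follow the
# published ranges (e.g., an average of 1.5 to 2.4 maps to "Minimally Met").

def overall_rating(practice_scores):
    """Average the applicable best-practice scores and map to a rating band.

    practice_scores: list of ints 1-5, with None marking a practice
    assessed as not applicable (excluded from the calculation).
    """
    applicable = [s for s in practice_scores if s is not None]
    avg = sum(applicable) / len(applicable)
    if avg < 1.5:
        return "Not Met"
    if avg < 2.5:
        return "Minimally Met"
    if avg < 3.5:
        return "Partially Met"
    if avg < 4.5:
        return "Substantially Met"
    return "Fully Met"

# Illustrative: a characteristic whose practices scored 2, 2, and 3,
# with one practice not applicable:
overall_rating([2, 2, 3, None])  # average of 7/3 falls in the 1.5-2.4 band
```

Under this scheme, an estimate is considered reliable only when every one of the four characteristics lands in the "Substantially Met" or "Fully Met" band, which is why a single weak characteristic was enough to render the Guam estimate unreliable.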
OEA officials stated that the intent of the estimate was to develop a preliminary rough-order-of-magnitude estimate in order to provide enough information to get the budget process started for funding urgent Guam water and wastewater improvements. The officials further stated that they believe the estimate was sufficient for this purpose. In addition, according to OEA and EPA officials, the cost estimate was not intended to represent a “budget quality” life cycle cost estimate given the complexity of the project and lack of documentation submitted by the Government of Guam in developing the estimate. Nonetheless, DOD used this cost estimate to support its fiscal year 2013 and 2014 budget requests for funding Guam water and wastewater improvements. In the future, as DOD updates its list of Guam public infrastructure project requirements when the supplemental EIS is completed and develops the associated cost estimates, it has the opportunity to ensure that the estimates it is using more completely incorporate cost estimating best practices, thereby improving the quality of the cost estimates and making them easier to defend in future budgets and decision making. According to documentation attached to DOD’s fiscal year 2013 budget, DOD emphasized that it cannot continue the practice of starting programs that prove to be unaffordable and according to our Cost Estimating and Assessment Guide, whether or not a program is affordable depends a great deal on the quality of its cost estimate. Without a reliable estimate that is updated in response to program changes, Congress is hindered in its ability to assess budgets and affordability. Also, without complete cost estimates for the potential total financial commitment for operating and maintaining Guam’s water and wastewater systems, Congress will not have needed information to weigh the proposed cost of the Marine realignment plans against other demands for resources. 
Until DOD has the results of the supplemental EIS and issues a record of decision, it is understandable that DOD will not be able to finalize comprehensive public infrastructure requirements and cost estimates for its planned realignment of Marines and dependents from Japan to Guam. Nevertheless, in the interim, DOD through OEA has continued to request funds for Guam public infrastructure projects without updating its requirements based on the revised realignment plan that calls for a much smaller Marine Corps presence in Guam than previously planned. Furthermore, DOD and the Navy’s JGPO have not clearly identified which Guam public infrastructure requirements and costs directly support the additional capacity needed for the realignment and which address current deficiencies. In addition, OEA did not fully incorporate cost estimating best practices in developing its cost estimate for Guam’s water and wastewater infrastructure projects that was used to support previous budget requests. Our analysis of the cost estimate and its updates found that the estimate satisfies a small portion of the best practice criteria and thus is not a reliable estimate to support budget requests. Further, two important points emerged: (1) the true cost of this water and wastewater project is not known, and (2) it is unclear whether all of the underlying improvements are needed to support the realignment. Actions such as revalidating the original list of infrastructure projects, conducting analyses that differentiate existing Guam public infrastructure deficiencies from additional capacity needed to support the realignment, and more fully incorporating cost estimating practices to help DOD identify the costs directly attributable to the realignment would provide DOD with the information it needs to support its Guam budget requests to Congress. 
Without reliable cost estimates developed for the realignment plan in a manner consistent with GAO’s cost estimating guide, DOD will be hampered in achieving its affordability goal of not starting a program without firm cost goals in place and may be seeking funds for public infrastructure projects that may no longer be needed. Furthermore, the credibility of DOD’s estimate will be questionable, and Congress cannot be reasonably assured that it is sufficiently informed regarding the funding that may be needed for Guam public infrastructure projects. To provide DOD and Congress with sufficient information regarding the requirements and costs associated with DOD’s current Guam realignment plans and the public infrastructure necessary to support that realignment, we recommend that the Secretary of Defense direct the Department of the Navy’s JGPO, in concert with OEA, to take the following three actions:

Revalidate the need and scope of Guam public infrastructure projects included in DOD budget requests based on the reduced number of Marines and dependents DOD intends to relocate to Guam.

Conduct a comprehensive analysis across all applicable public infrastructure sectors to determine what infrastructure requirements and costs are needed to address existing deficiencies in Guam’s infrastructure and what requirements and costs are needed to directly support the additional capacity required for the realignment.

As future cost estimates for Guam public infrastructure projects are developed, fully incorporate the best practices identified by GAO for developing high-quality cost estimates.

We provided a draft of this report to DOD, the Department of the Interior, EPA, and the Office of the Governor of Guam for review and comment. In written comments, which are reprinted in their entirety in appendix III, DOD partially concurred with our three recommendations.
DOD, the Department of the Interior, EPA, and the Office of the Governor of Guam provided technical comments that have been incorporated into this report as appropriate. DOD partially concurred with our first recommendation to revalidate the need and scope of Guam public infrastructure projects included in DOD budget requests. DOD concurred that the need and scope of additional, realignment-related Guam public infrastructure projects will be revalidated as necessary based on the results of the analysis in the ongoing supplemental EIS. However, for the Guam wastewater public infrastructure project, DOD commented that the requested funding is not contingent upon the size of the realignment but rather represents funding for improvements to address noncompliance with EPA regulations. As a result, DOD concluded that the requests associated with the wastewater treatment facilities do not warrant realignment-related revalidation. We disagree. First, while DOD’s justifications for the wastewater treatment funding cite the need for remedies and residents’ current needs, the justifications also state that the funding and project is required to support growth resulting from the military realignment. Given that the size of the realignment has been reduced significantly, a revalidation of the wastewater project remains warranted. Second, without a revalidation of the wastewater project, it will continue to be unclear to what extent the requested funds for the project are still necessary or necessary to the same extent given the significant reduction in forces under the revised realignment plan and the as yet undetermined location of the main Marine Corps installation. Specifically, as discussed in the report, a possible location for the main Marine installation is Naval Base Guam, which handles its own wastewater needs and does not require DOD to rely on the public wastewater system. 
If this location is chosen, DOD would appear to no longer have a basis for its cited need for additional wastewater capacity to support the realignment as part of its budget request justifications. DOD stated that it partially concurred with our second recommendation to conduct a comprehensive analysis across all applicable public infrastructure sectors to determine what infrastructure requirements and costs are needed to address long-standing deficiencies in Guam’s infrastructure and which are needed to directly support the realignment. DOD noted that a determination of realignment-related infrastructure requirements and costs is an anticipated outcome of the supplemental EIS. DOD’s comments, however, do not address whether it plans to clearly differentiate between those infrastructure requirements and costs needed to address existing deficiencies in Guam’s infrastructure and those needed to directly support the additional capacity associated with the realignment, as we specifically recommended. Doing so is important, because as explained in the report, clearly differentiating between existing public infrastructure deficiencies and any additional capacity needed to support the realignment would help DOD more accurately identify the costs directly attributable to the realignment. DOD’s analysis would then provide congressional decision makers with information they need to appropriately fund requests for public infrastructure projects on Guam. DOD partially concurred with our third recommendation to fully incorporate the best practices identified by GAO for developing high quality cost estimates, as future cost estimates for Guam public infrastructure projects are developed. 
In response to this recommendation, DOD stated that future realignment-related cost estimates and budget submissions will be developed in accordance with DOD’s Financial Management Regulation and that final engineering cost estimates for specific projects will be developed in the normal course of executing the fiscal year 2014 program. While budget submissions must conform to DOD guidance, OMB guidance and our cost estimating and assessment guide, which is a compilation of cost estimating best practices from across industry and government, confirm that cost estimates should conform to best practices and follow certain specific steps to ensure that they are reliable and credible. Development of reliable and credible cost estimates is important, whether as part of budget submissions or in advance of those submissions. As discussed in this report, our analysis of DOD’s cost estimate for Guam’s largest public infrastructure project—the water and wastewater treatment facility—demonstrates that weaknesses exist in DOD’s cost estimating practices that, if left unaddressed, increase the likelihood that costs will increase. By not following best practices in preparing its cost estimate, DOD cannot ensure that the estimate is reliable and credible. As DOD continues to provide information to Congress regarding the realignment, we believe DOD has the opportunity to improve the quality of its estimates by applying cost estimating best practices in its approach. To better inform the budget decisionmaking process of the likely costs, affordability, and scheduling of funding needed to support the Guam realignment, DOD should take every available opportunity to employ best practices and provide Congress with the highest quality cost estimates possible.
We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and appropriate organizations. In addition, this report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5741 or ayersj@gao.gov. Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To describe the existing condition of the public infrastructure on Guam, we interviewed and collected information from various Department of Defense (DOD) officials, including those in the Office of the Under Secretary of Defense for Policy; the Office of Economic Adjustment (OEA), Office of the Under Secretary of Defense for Acquisitions, Technology, and Logistics; the Naval Facilities Engineering Command; Joint Region Marianas, Department of the Navy; and the Joint Guam Program Office, Department of the Navy. We also interviewed other federal officials from the following offices and agencies assisting Guam in preparing for the realignment: the Office of Management and Budget; the Council on Environmental Quality; the Department of the Interior’s Office of Insular Affairs; the Environmental Protection Agency, Region IX; and the Department of Health and Human Services, Region IX. We conducted a site visit to Guam in April 2013, where we met with officials in Guam’s Military Buildup Office.
We also met with the Governor of Guam; the Speaker and other members of the Guam Legislature; and the Guam Auditor General. In addition, we interviewed other Guam officials representing the following public infrastructure sectors on Guam likely to be affected by the realignment: the Guam Waterworks Authority; the Guam Power Authority; the Consolidated Commission on Utilities; the Guam Department of Public Works; the Port Authority of Guam; the Guam Department of Public Health and Social Services; the Guam Environmental Protection Agency; the Guam Fire Department; the Guam Police Department; the Guam Department of Education; and the Guam State Historic Preservation Office. During our site visit to Guam, we toured Andersen Air Force Base and some of the locations cited in documents related to the supplemental Environmental Impact Statement (EIS) as possible locations for the establishment of a main Marine Corps installation and a Marine Corps live-fire training range complex on Guam. We also visited the Northern District Wastewater Plant and the Port of Guam, which had been cited by DOD and Government of Guam officials as two of the most critical infrastructure sectors requiring improvements. For the purposes of our review, public infrastructure is defined as including the utilities, methods of transportation, equipment, or facilities under the control of a public entity, such as a power authority or local government, for use by the public to support the realignment of forces and dependents. The public infrastructure sectors covered by our review were chosen based on their inclusion in (1) prior Government of Guam and DOD project lists developed for the 2006 Roadmap realignment plan, (2) DOD budget requests, (3) prior GAO reports on the realignment of U.S. forces to Guam, and (4) federal agency inspector general reports, as well as those sectors identified during our interviews by Government of Guam and DOD officials.
The following eight sectors are included in our review: electric power, water and wastewater, port, solid waste, public health, law enforcement, fire department, and education infrastructure. The highways and other roads sector is not included in our analysis because Government of Guam and DOD officials did not identify it as a sector likely to be adversely affected by the realignment since existing programs and agencies, such as the Defense Access Roads and the Department of Transportation’s Federal Highway Administration, are currently allocating funds for road and highway improvements on Guam. For our first objective regarding the existing condition of Guam’s public infrastructure, we reviewed the original EIS, a DOD engineering review, technical studies, and business case analyses and conducted interviews. In addition, we reviewed inspector general reports prepared by the Department of the Interior regarding the condition of specific sectors of Guam’s public infrastructure. We reviewed these reports and determined that their methodologies were sufficiently reliable for our purposes. We corroborated the information contained in the inspector general reports by interviewing Guam officials from the relevant public infrastructure sectors to determine the extent to which the findings of the various reports accurately portrayed the condition of Guam’s public infrastructure and remained valid. We also reviewed the socioeconomic project needs assessment worksheets developed by Guam and provided to the Economic Adjustment Committee in 2010 as part of the Economic Adjustment Committee’s efforts to develop a list of public infrastructure requirements for the original realignment plan. Additionally, we reviewed the completed supplemental EIS questionnaires administered by DOD to Guam to obtain updated information regarding the state of Guam’s public infrastructure and potential impact of the revised realignment plan. 
For our second objective to describe the types of assistance DOD generally has provided to defense-affected communities and the other types of funding sources that have been used to fund Guam public infrastructure projects, we interviewed OEA officials to identify the most relevant historical examples similar to Guam and reviewed past congressional hearings, DOD documents, and fiscal impact analyses to determine previous instances of where DOD provided public infrastructure funding to communities. To identify examples of non-DOD, federal programs from which Guam has received public infrastructure funding in the past, we interviewed OEA, Department of the Interior, and Government of Guam officials and reviewed Guam’s Single Audit report and Summary of Schedule of Expenditures of Federal Awards. To determine Guam’s potential for raising additional revenue to fund infrastructure projects, we interviewed Guam officials and reviewed the Government of Guam’s 2014 executive budget request and long-term debt abstract. For our third objective to assess DOD’s efforts to revalidate its public infrastructure requirements under the revised realignment plan and differentiate between requirements needed to address Guam’s existing public infrastructure deficiencies and those related to the realignment, we reviewed information on DOD and the Government of Guam’s planning activities related to public infrastructure improvements needed to support the revised realignment plan and compared this information to previous public infrastructure lists developed by the Government of Guam, DOD, and other federal entities to support the 2006 Roadmap realignment plan. 
We interviewed DOD officials regarding the extent to which DOD was revalidating and differentiating between requirements as part of the current supplemental EIS and also interviewed Government of Guam officials from all the infrastructure sectors we reviewed to determine the extent to which they had been contacted by DOD to update or differentiate between requirements. We evaluated DOD’s efforts against criteria established in our Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs and OMB guidance containing best practices for capital programming. To determine how much DOD has requested to support public infrastructure projects on Guam, we reviewed DOD budget materials and interviewed OEA officials to determine what funding DOD has requested to support public infrastructure projects on Guam related to the realignment, as well as statutory restrictions on the use of these funds for these types of projects. Our Cost Estimating and Assessment Guide (GAO-09-3SP) identifies four characteristics representing practices that help ensure that a cost estimate is (1) comprehensive, (2) well documented, (3) accurate, and (4) credible. Each of these four characteristics consists of several best practices (see appendix II for a summary of these practices and our Cost Estimating and Assessment Guide for more details on the individual best practices). We evaluated the estimate against each of the individual best practices, assigning a score on a scale of 1 to 5 to indicate the degree to which the cost estimate met each best practice: Not Met (1 point)—DOD provided no evidence that satisfies any portion of the best practice criterion. Minimally Met (2 points)—DOD provided evidence that satisfies a small portion of the best practice criterion. Partially Met (3 points)—DOD provided evidence that satisfies about half of the best practice criterion. Substantially Met (4 points)—DOD provided evidence that satisfies a large portion of the best practice criterion.
Fully Met (5 points)—DOD provided complete evidence that satisfies the entire best practice criterion. We determined the overall assessment rating for each of the four characteristics by totaling the scores assigned to the individual best practices within each characteristic to derive an average score for that characteristic. The average scores fell into the following ranges: Not Met = 0 to 1.4; Minimally Met = 1.5 to 2.4; Partially Met = 2.5 to 3.4; Substantially Met = 3.5 to 4.4; Fully Met = 4.5 to 5.0. Best practices assessed as not applicable were not given a score and were not included in our calculation of the overall assessment. We also held detailed discussions with EPA and DOD officials and reviewed program documentation to identify key factors that could affect the potential total costs, and we met with these officials to discuss the results of our evaluation. To determine the reliability of the numerical data provided to us by DOD, other federal organizations, and Government of Guam officials, we collected information on how the data were collected, managed, and used through interviews with relevant officials. By assessing this information against GAO data quality standards, we determined that the data presented in our findings were sufficiently reliable for the purposes of this report. We conducted this performance audit from February 2013 through December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
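The scoring scheme described above (1-to-5 scores per best practice, averaged within each characteristic and mapped to a rating band) can be sketched in code. The function name and the treatment of not-applicable practices as `None` are illustrative assumptions; the rating bands reproduce those stated in the report.

```python
# Illustrative sketch of the report's scoring scheme: each best practice
# within a characteristic receives a 1-5 score (None = not applicable),
# applicable scores are averaged, and the average maps to a rating band.

def characteristic_rating(scores):
    """scores: list of 1-5 integers, or None for a not-applicable practice."""
    applicable = [s for s in scores if s is not None]
    if not applicable:
        return None  # no applicable best practices to assess
    avg = sum(applicable) / len(applicable)
    # Bands from the report: Not Met = 0-1.4, Minimally Met = 1.5-2.4,
    # Partially Met = 2.5-3.4, Substantially Met = 3.5-4.4, Fully Met = 4.5-5.0
    if avg < 1.5:
        return "Not Met"
    elif avg < 2.5:
        return "Minimally Met"
    elif avg < 3.5:
        return "Partially Met"
    elif avg < 4.5:
        return "Substantially Met"
    return "Fully Met"

# Example: three practices scored 2, 3, and 2, one not applicable;
# the average of 7/3 (about 2.33) falls in the "Minimally Met" band.
print(characteristic_rating([2, 3, 2, None]))  # -> Minimally Met
```

Under this scheme a characteristic can be rated "Minimally Met" overall even if one of its practices was partially met, which matches how the report summarizes three of the four characteristics.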
This appendix provides the detailed results of our analysis of the cost estimate that was used to support the Department of Defense’s (DOD) budget requests for funding to improve Guam’s water and wastewater systems. Specifically, we assessed the extent to which the cost estimate followed the best practices of a reliable cost estimate as documented in our 2009 Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. We reviewed the cost estimate for the water and wastewater system, assessed each individual best practice that makes up each of the four characteristics of a reliable cost estimate as summarized in the report’s body, and assigned a score on a scale of 1 to 5 to indicate the degree to which the estimate met each best practice: Not Met (1 point)—DOD provided no evidence that satisfies any portion of the best practice criterion. Minimally Met (2 points)—DOD provided evidence that satisfies a small portion of the best practice criterion. Partially Met (3 points)—DOD provided evidence that satisfies about half of the best practice criterion. Substantially Met (4 points)—DOD provided evidence that satisfies a large portion of the best practice criterion. Fully Met (5 points)—DOD provided complete evidence that satisfies the entire best practice criterion. We determined the overall assessment rating for each characteristic by totaling the scores assigned to the individual best practices within each characteristic to derive an average score for that characteristic. The average scores fell into the following ranges: Not Met = 0 to 1.4; Minimally Met = 1.5 to 2.4; Partially Met = 2.5 to 3.4; Substantially Met = 3.5 to 4.4; Fully Met = 4.5 to 5.0. Best practices assessed as not applicable were not given a score and were not included in our calculation of the overall assessment. The table that follows provides the detailed results of our analysis of the cost estimate.
In addition to the contact named above, Laura Durland, Assistant Director; Shawn Arbogast, Remmie Arnold, Grace Coleman, Tisha Derricotte, Adam Hatton, Jim Manzo, Karen Richey, Ophelia Robinson, Michael Shaughnessy, and Amie Steele made key contributions to this report. Defense Management: More Reliable Cost Estimates and Further Planning Needed to Inform the Marine Corps Realignment Initiatives in the Pacific. GAO-13-360. Washington, D.C.: June 11, 2013. Force Structure: Improved Cost Information and Analysis Needed to Guide Overseas Military Posture Decisions. GAO-12-711. Washington, D.C.: June 6, 2012. Military Buildup on Guam: Costs and Challenges in Meeting Construction Timelines. GAO-11-459R. Washington, D.C.: June 27, 2011. Defense Management: Comprehensive Cost Information and Analysis of Alternatives Needed to Assess Military Posture in Asia. GAO-11-316. Washington, D.C.: May 25, 2011. Defense Infrastructure: The Navy Needs Better Documentation to Support Its Proposed Military Treatment Facilities on Guam. GAO-11-206. Washington, D.C.: April 5, 2011. Defense Infrastructure: Guam Needs Timely Information from DOD to Meet Challenges in Planning and Financing Off-Base Projects and Programs to Support a Larger Military Presence. GAO-10-90R. Washington, D.C.: November 13, 2009. Defense Infrastructure: DOD Needs to Provide Updated Labor Requirements to Help Guam Adequately Develop Its Labor Force for the Military Buildup. GAO-10-72. Washington, D.C.: October 14, 2009. Defense Infrastructure: Planning Challenges Could Increase Risks for DOD in Providing Utility Services When Needed to Support the Military Buildup on Guam. GAO-09-653. Washington, D.C.: June 30, 2009. Defense Infrastructure: High-Level Leadership Needed to Help Guam Address Challenges Caused by DOD-Related Growth. GAO-09-500R. Washington, D.C.: April 9, 2009.
Defense Infrastructure: Opportunity to Improve the Timeliness of Future Overseas Planning Reports and Factors Affecting the Master Planning Effort for the Military Buildup on Guam. GAO-08-1005. Washington, D.C.: September 17, 2008. Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008. Defense Infrastructure: Planning Efforts for the Proposed Military Buildup on Guam Are in Their Initial Stages, with Many Challenges Yet to Be Addressed. GAO-08-722T. Washington, D.C.: May 1, 2008. Defense Infrastructure: Overseas Master Plans Are Improving, but DOD Needs to Provide Congress Additional Information about the Military Buildup on Guam. GAO-07-1015. Washington, D.C.: September 12, 2007. U.S. Insular Areas: Economic, Fiscal, and Financial Accountability Challenges. GAO-07-119. Washington, D.C.: December 12, 2006.
In 2006, the United States and Japan planned to relocate 17,600 U.S. Marines and dependents from Japan to Guam. However, in 2012, representatives from the countries developed a revised plan under which 6,300 Marines and dependents would relocate to Guam. The Conference Report accompanying the National Defense Authorization Act for Fiscal Year 2013 mandated that GAO evaluate what Guam public infrastructure projects are needed to support DOD's plans. This report (1) describes Guam's public infrastructure; (2) describes the types of assistance DOD generally provides and other funding sources that have been used to fund Guam projects; (3) assesses DOD's efforts to revalidate Guam projects under the revised realignment plan; and (4) assesses the cost estimate for Guam's public water and wastewater infrastructure improvements used to support DOD budget requests. To address these objectives, GAO reviewed policies, technical studies, and budget requests. GAO also interviewed DOD and other relevant federal officials as well as visited Guam and met with Guam officials. Some investments have been made to improve Guam's public infrastructure in recent years, but many deficiencies and regulatory compliance issues continue to exist. The reliability, capacity, and age of much of the public infrastructure--especially the island's utilities--indicate a need for additional upgrades to be able to meet current and future demands related to the realignment. Further, some infrastructure sectors, such as water and wastewater, face issues complying with federal regulations. Other sectors, such as the fire and police departments, are experiencing staffing and other shortages that affect their ability to serve Guam's current population. The majority of the Department of Defense's (DOD) support to defense-affected communities has been historically to provide technical assistance and support community planning and coordination efforts. 
However, in a few instances DOD has provided public infrastructure funding to communities where proposed basing decisions would generate significant public infrastructure needs that the communities could not support. Generally, DOD's position is that communities should be largely responsible for obtaining funding for public infrastructure requirements related to DOD basing decisions. This funding can come from other federal programs or communities can raise the funds on their own. In the case of Guam, however, some challenges related to limited government revenues and debt capacity have been identified as affecting its ability to do so. Despite the reduction of Marines and dependents relocating to Guam, DOD has not yet revalidated the public infrastructure requirements based on the revised realignment plan or differentiated between requirements needed to address long-standing conditions and those related to the realignment. This revalidation is not expected to be completed until 2015. Even so, DOD has requested over $400 million for Guam public infrastructure projects in its budget requests since fiscal year 2012. It is unclear if all of these projects are necessary to the same extent given the reduction in forces. For example, if DOD decides to locate the Marines on the naval base that handles all of its own water/wastewater needs, public water/wastewater improvements would not be needed to support the Marines. Congress has placed limitations on the use of funding, in part until certain information is provided related to the realignment. Without revalidating and differentiating between requirements, DOD cannot clearly identify what Guam public infrastructure requirements are needed to directly support the realignment. The $1.3 billion cost estimate for improvements to Guam's water and wastewater systems that DOD has used to support budget requests for fiscal years 2013 and 2014 is not reliable.
GAO assessed that the estimate minimally met the best practice criteria for three of the four key characteristics--comprehensive, well documented, and accurate--for a reliable cost estimate as identified in the GAO Cost Estimating and Assessment Guide and did not satisfy best practice criteria for the fourth characteristic of being credible. GAO determined that officials adhered to some best practices for a reliable estimate but did not, for example, include all relevant costs, sufficiently explain why certain assumptions and adjustments were made, incorporate any actual costs or inflation adjustments, or adequately address risk and uncertainty. GAO recommends that DOD take actions to revalidate public infrastructure needs on Guam based on the revised realignment size and ensure best practices are used to develop future cost estimates. DOD partially concurred with GAO's recommendations and identified future plans. However, GAO believes further opportunities exist as discussed in the report.
Registered nurses are responsible for a large portion of the health care provided in this country. RNs make up the largest group of health care providers, and, historically, have worked predominantly in hospitals; in 2000, 59.1 percent of RNs were employed in hospital settings. A smaller number of RNs work in other settings such as ambulatory care, home health care, and nursing homes. Their responsibilities may include providing direct patient care in a hospital or a home health care setting, managing and directing complex nursing care in an intensive care unit, or supervising the provision of long-term care in a nursing home. Individuals usually select one of three ways to become an RN—through a 2-year associate degree, 3-year diploma, or 4-year baccalaureate degree program. Once they have completed their education, RNs are subject to state licensing requirements. The U.S. health care system has changed significantly over the past 2 decades, affecting the environment in which nurses provide care. Advances in technology and greater emphasis on cost-effectiveness have led to changes in the structure, organization, and delivery of health care services. While hospitals traditionally were the primary providers of acute care, advances in technology, along with cost controls, shifted care from traditional inpatient settings to ambulatory or community-based settings, nursing facilities, or home health care settings. The number of staffed hospital beds declined, as did patient lengths of stay. While the number of hospital admissions declined from the mid-1980s to the mid-1990s, they increased between 1995 and 1999. At the same time, the overall acuity level of patients increased, as those remaining in hospitals were too medically complex to be cared for in another setting. The transfer of less acute patients to nursing homes and community-based care settings created additional job opportunities and increased demand for nurses.
Current evidence suggests emerging shortages of nurses available or willing to fill some vacant positions in hospitals, nursing homes, and home care. Some localities are experiencing greater difficulty than others. National data are not adequate to describe the nature and extent of these potential nurse workforce shortages, nor are data sufficiently sensitive or current to allow a comparison of the adequacy of the nurse workforce size across states, specialties, or provider types. However, total employment of RNs per capita and the national unemployment rate for RNs have declined, and providers from around the country are reporting growing difficulty recruiting and retaining the number of nurses needed in a range of settings. Another indicator that suggests the emergence of shortages is a rise in recent public sector efforts related to nurse workforce issues in many states. The national unemployment rate for RNs is at its lowest level in more than a decade, continuing to decline from 1.5 percent in 1997 to 1.0 percent in 2000. At the same time, total employment of RNs per capita declined 2 percent between 1996 and 2000, reversing steady increases since 1980. Between 1980 and 1996, the number of employed RNs per capita nationwide increased by 44 percent. At the state level, changes in per capita nurse employment from 1996 to 2000 varied widely, from a 16.2 percent increase in Louisiana to a 19.5 percent decrease in Alaska. (See appendix I.) Overall, a decline in per capita nurse employment occurred in 26 states and the District of Columbia between 1996 and 2000. Declining RN employment per capita may be an indicator of a potential shortage. It is an imprecise measure, however, because it does not account for changes in the care needs of the population or how many nurses, relative to other personnel, providers wish to use to meet those needs.
Moreover, total employment includes not only nurses engaged in clinical or patient care activities but also those in administrative and other nondirect care positions. Data on how much nurse employment may have shifted between direct care and other positions are not available. Recent studies suggest that hospitals and other health care providers in many areas of the country are experiencing greater difficulty in recruiting RNs. For example, a recent survey in Maryland conducted by the Association of Maryland Hospitals and Health Systems reported a statewide average vacancy rate for hospitals of 14.7 percent in 2000, up from 3.3 percent in 1997. The association reported that the last time vacancy rates were at this level was during the late 1980s, during the last reported nurse shortage. A survey of providers in Vermont found that hospitals had an RN vacancy rate of 7.8 percent in 2001, up from 4.8 percent in 2000 and 1.2 percent in 1996. For 2000, California reported an average RN vacancy rate of 20 percent, and for 2001, Florida reported nearly 16 percent and Nevada reported an average rate of 13 percent. Concerns about retaining nurses have also become more widespread. A recent survey reported that the national turnover rate among hospital staff nurses was 15 percent in 1999, up from 12 percent in 1996. Another industry survey showed turnover rates for overall hospital nursing department staff rising from 11.7 percent in 1998 to 26.2 percent in 2000. Nursing home and home health care industry surveys indicate that nurse turnover is an issue for them as well. In 1997, an American Health Care Association survey of 13 nursing home chains identified a 51-percent turnover rate for RNs and LPNs. A 2000 national survey of home health care agencies reported a 21-percent turnover rate for RNs. Increased attention by state governments is another indicator of concern about nurse workforce problems.
According to the National Conference of State Legislatures, as of June 2001, legislation to address nurse shortage issues had been introduced in 15 states, and legislation to restrict the use of mandatory overtime for nurses in hospitals and other health care facilities had been introduced in 10 states. A variety of nurse workforce task forces and commissions have recently been established as well. For example, in May 2000, legislation in Maryland created the Statewide Commission on the Crisis in Nursing to determine the current extent and long-term implications of the growing shortage of nurses in the state. Available data on supply and demand for RNs are not adequate to determine the magnitude of any current imbalance between the two with any degree of precision. Both the demand for and supply of RNs are influenced by many factors. Demand for RNs not only depends on the care needs of the population, but also on how providers—hospitals, nursing homes, clinics, and others—decide to use nurses in delivering care. Providers have changed staffing patterns in the past, employing fewer or more nurses relative to other workers such as nurse aides. For example, following the introduction of the Medicare Prospective Payment System (PPS), hospitals increased the share of RNs in their workforces. However, in the early 1990s, in an effort to contain costs, acute care facilities restructured and redesigned staffing patterns, introducing more non-RN caregivers and reducing the percentage of RNs. While the number of RNs employed by hospitals remained relatively unchanged from 1995 to 1997, hospitals reported significant growth in RN employment in 1998 and 1999. Supply depends on the size of the pool of qualified persons and the share of them willing to work. Current participation by licensed nurses in the work force is relatively high. Nationally, 81.7 percent of licensed RNs were employed in nursing in 2000.
Although this represents a slight decline from the high of 82.7 percent reported in 1992 and 1996, this rate of workforce participation remains higher than the 76.6 to 80.0 percent rates reported in the 1980s. Moreover, some RNs are employed in nonclinical settings, such as insurance companies, reducing the number of nurses available to provide direct patient care. Current problems with the recruitment and retention of nurses are related to multiple factors. The nurse workforce is aging, and fewer new nurses are entering the profession to replace those who are retiring or leaving. Furthermore, nurses report unhappiness with many aspects of the work environment, including staffing levels, heavy workloads, increased use of overtime, insufficient support staff, and wages. In many cases this growing dissatisfaction is affecting their decisions to remain in nursing. The decline in younger people, predominantly women, choosing nursing as a career has resulted in a steadily aging RN workforce. Over the last 2 decades, as opportunities for women outside of nursing have expanded, the number of young women entering the RN workforce has declined. A recent study reported that women graduating from high school in the 1990s were 35 percent less likely to become RNs than women who graduated in the 1970s. Reductions in nursing program enrollments within the last decade attest to this narrowing pipeline. According to a 1999 Nursing Executive Center Report, between 1993 and 1996, enrollment in diploma programs dropped 42 percent and enrollment in associate degree programs declined 11 percent. Furthermore, between 1995 and 1998, enrollment in baccalaureate programs declined 19 percent, and enrollment in master’s programs decreased 4 percent. The number of individuals passing the national RN licensing exam declined from 97,679 in 1996 to 74,787 in 2000, a decline of 23 percent.
The large numbers of RNs that entered the labor force in the 1970s are now over the age of 40 and are not being replenished by younger RNs. Between 1983 and 1998, the number of RNs in the workforce under 30 fell by 41 percent, compared to only a 1-percent decline in the number under age 30 in the rest of the U.S. workforce. Over the past 2 decades, the nurse workforce’s average age has climbed steadily. While over half of all RNs were reported to be under age 40 in 1980, fewer than one in three were younger than 40 in 2000. As shown in figure 1, the age distribution of RNs has shifted dramatically upward. The percentage of nurses under age 30 decreased from 26 percent in 1980 to 9 percent in 2000, while the percentage age 40 to 49 grew from 20 to 35 percent. Job dissatisfaction has also been identified as a major factor contributing to the current problems of recruiting and retaining nurses. A recent Federation of Nurses and Health Professionals (FNHP) survey found that half of the currently employed RNs who were surveyed had considered leaving the patient-care field for reasons other than retirement over the past 2 years. Over one-fourth (28 percent) of RNs responding to a 1999 survey by The Nursing Executive Center described themselves as somewhat or very dissatisfied with their jobs, and about half (51 percent) were less or much less satisfied with their jobs than they were 2 years ago. In that same survey, 32 percent of general medical/surgical RNs, who constitute the bulk of hospital RNs, indicated that they were dissatisfied with their current jobs. According to a survey conducted by the American Nurses Association, 54.8 percent of RNs and LPNs responding would not recommend the nursing profession as a career for their children or friends, while 23 percent would actively discourage someone close to them from entering the profession.
Inadequate staffing, heavy workloads, and the increased use of overtime are frequently cited as key areas of job dissatisfaction among nurses. According to the recent FNHP survey, of those RNs responding who had considered leaving the patient-care field for reasons other than retirement over the past 2 years, 56 percent indicated that they wanted a less stressful and less physically demanding job. The same survey found that 55 percent of current RNs were either just somewhat or not satisfied with their facility’s staffing levels, while 43 percent of current RNs surveyed indicated that increased staffing would do the most to improve their jobs. Another survey found that 36 percent of RNs who had been in their current jobs for more than 1 year were very or somewhat dissatisfied with the intensity of their work. Some providers report increased use of overtime for employees. Twenty-two percent of nurses responding to the FNHP survey said they were concerned about schedules and hours. A survey of North Carolina hospitals conducted in 2000 found significant reliance on overtime for staff nurses. Nine percent of rural hospitals reported spending more than 25 percent of their nursing budget on overtime, and, among urban hospitals, 49 percent expected to increase their use of overtime in the coming year. The trend toward increasing use of overtime is currently a major concern of nurse unions and associations. Nurses have also expressed dissatisfaction with a decrease in the amount of support staff available to them over the past few years. More than half the RNs responding to the recent study by the American Hospital Association (AHA) did not feel that their hospitals provided adequate support services. RNs, LPNs, and others responding to a survey by the ANA also pointed to a decrease of needed support services. Current nurse workforce issues are part of a larger health care workforce shortage that includes a shortage of nurse aides.
Some nurses have also expressed dissatisfaction with their wages. While surveys indicate that increased wages might encourage nurses to stay at their jobs, money is not always cited as the primary reason for job dissatisfaction. According to the FNHP survey, of those RNs responding who had considered leaving the patient-care field for reasons other than retirement over the past 2 years, 18 percent wanted more money, versus 56 percent who were concerned about the stress and physical demands of the job. However, the same study reported that 27 percent of current RNs responding cited higher wages or better health care benefits as a way of improving their jobs. Another study indicated that 39 percent of RNs who had been in their current jobs for more than 1 year were dissatisfied with their total compensation, but 48 percent were dissatisfied with the level of recognition they received from their employers. AHA recently reported survey results showing that 57 percent of responding RNs said that their salaries were adequate, compared to 33.4 percent who thought their facility was adequately staffed, and 29.1 percent who said that their hospital administrations listened and responded to their concerns. Wages can have a long-term impact on the size of a workforce pool as well as a short-term effect on people’s willingness to work. After several years of real earnings growth following the last nursing shortage, RN earnings growth lagged behind the rate of inflation from 1994 through 1997. In 2 of the last 3 years (1998 and 2000), however, RN earnings growth exceeded the rate of inflation. The cumulative effects of these changes are such that RN earnings have just kept pace with the rate of inflation from 1989 to 2000, as shown in figure 2. A serious shortage of nurses is expected in the future as pressures are exerted on both demand and supply. The future demand for nurses is expected to increase dramatically when the baby boomers reach their 60s, 70s, and beyond.
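The notion of earnings "just keeping pace" with inflation is a cumulative comparison: year-by-year nominal raises must be compounded against year-by-year price increases over the whole period. The sketch below illustrates the mechanics with hypothetical annual figures (the growth and inflation rates shown are invented for illustration, not the actual data underlying figure 2).

```python
# Hypothetical annual nominal RN earnings growth vs. inflation.
# Real earnings keep pace only if cumulative (compounded) nominal
# growth matches cumulative inflation over the whole period.
nominal_growth = [0.05, 0.04, 0.01, 0.02, 0.02, 0.01, 0.01, 0.02, 0.04, 0.02, 0.04]
inflation      = [0.04, 0.03, 0.03, 0.03, 0.03, 0.02, 0.03, 0.02, 0.02, 0.02, 0.03]

wage_index = price_index = 1.0
for g, i in zip(nominal_growth, inflation):
    wage_index *= 1 + g    # compound nominal earnings
    price_index *= 1 + i   # compound the price level

# Cumulative real change: near zero means earnings "just kept pace."
real_change = wage_index / price_index - 1
print(f"Cumulative real earnings change: {real_change:+.1%}")
```

Note that several years of sub-inflation growth early in a period can offset later above-inflation years, which is how a workforce can see nominal raises most years yet end the period with essentially flat real earnings.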
The population age 65 years and older will double between 2000 and 2030. During that same period the number of women between 25 and 54 years of age, who have traditionally formed the core of the nurse workforce, is expected to remain relatively unchanged. This potential mismatch between future supply of and demand for caregivers is illustrated by the change in the expected ratio of potential care providers to potential care recipients. As shown in figure 3, the ratio of the working-age population, age 18 to 64, to the population over age 85 will decline from 39.5 workers for each person 85 and older in 2000, to 22.1 in 2030, and 14.8 in 2040. The ratio of women age 20 to 54, the cohort most likely to be working either as nurses or nurse aides, to the population age 85 and older will decline from 16.1 in 2000 to 8.5 in 2030, and 5.7 in 2040. Unless more young people choose to go into the nursing profession, the nurse workforce will continue to age. By 2010, approximately 40 percent of the workforce will likely be older than 50. By 2020, the total number of full-time equivalent RNs is projected to have fallen 20 percent below HRSA’s projections of the number of RNs that will be required to meet demand. Providers’ current difficulty recruiting and retaining nurses may worsen as the demand for nurses increases with the aging of the population. Impending demographic changes are widening the gap between the numbers of people needing care and those available to provide it. Moreover, the current high levels of job dissatisfaction among nurses may also play a crucial role in determining the extent of current and future nurse shortages. Efforts undertaken to improve the workplace environment may both reduce the likelihood of nurses leaving the field and encourage more young people to enter the nursing profession.
While state governments and providers have begun to address recruitment and retention issues related to the nurse workforce, more detailed data are needed to assist in planning and targeting corrective efforts. As we agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to interested parties and make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-7119 or Helene Toiv, Assistant Director, at (202) 512-7162. Other major contributors were Eric Anderson, Connie Peebles Barrow, Emily Gamble Gardiner, and Pamela Ruffner.
The nation's hospitals and nursing homes rely heavily on the services of nurses. Concerns have been raised about whether the current and projected supply of nurses will meet the nation's needs. This report reviews (1) whether evidence of a nursing shortage exists, (2) the reasons for current nurse recruitment and retention problems, and (3) what is known about the projected future supply of and demand for nurses. GAO found that national data are not adequate to describe the nature and extent of nurse workforce shortages, nor are data sufficiently sensitive or current to compare nurse workforce availability across states, specialties, or provider types. Multiple factors affect recruitment and retention problems, including the aging of the nurse workforce, as fewer young people are entering the profession to replace those who leave. A serious shortage of nurses is expected in the future as demographic pressures influence both demand and supply.
The National Aeronautics and Space Administration Authorization Act of 2010 directed NASA to, among other things, develop a Space Launch System as a follow-on to the Space Shuttle and as a key component in expanding human presence beyond low-Earth orbit. To that end, NASA plans to incrementally develop three progressively more capable SLS launch vehicles—70-, 105-, and 130-metric ton (mt) variants. When complete, the 130-mt vehicle is expected to have more launch capability than the Saturn V vehicle, which was used for Apollo missions, and be significantly more capable than any recent or current launch vehicle. The act also directed NASA to prioritize the core elements of SLS with the goal of operational capability not later than December 2016. NASA negotiated an extension of that date, to December 2017, based on the agency’s initial assessment of the tasks associated with developing the new launch vehicle, and has subsequently committed to a launch readiness date of November 2018. In 2011, NASA formally established the SLS program. To fulfill the direction of the 2010 act, the agency plans to develop the three SLS launch vehicle capabilities, complemented by Orion, to transport humans and cargo into space. The first version of the SLS that NASA is developing is a 70-mt launch vehicle known as Block I. NASA has committed to conduct two test flights of the Block I vehicle—the first in 2018 and the second in 2021. The vehicle is scheduled to fly an uncrewed Orion some 70,000 kilometers beyond the moon during the first test flight, known as Exploration Mission-1 (EM-1), and to fly a second mission known as Exploration Mission-2 (EM-2) beyond the moon to further test performance with a crewed Orion vehicle. After 2021, NASA intends to build 105- and 130-mt launch vehicles, known respectively as Block IA/B and Block II, which it expects to use as the backbone of manned spaceflight for decades.
NASA anticipates using the Block IA/B vehicles for destinations such as near-Earth asteroids and LaGrange points and the Block II vehicles for eventual Mars missions. Space launch vehicle development efforts are high risk from technical, programmatic, and oversight perspectives. The technical risk is inherent for a variety of reasons, including the environment in which the vehicles must operate, the complexity of their technologies and designs, and the limited room for error in the fabrication and integration process. Managing the development process is complex for reasons that go well beyond technology and design. For instance, at the strategic level, because launch vehicle programs can span many years and be very costly, programs often face difficulties securing and sustaining funding commitments and support. At the program level, if the lines of communication between engineers, managers, and senior leaders are not clear, risks that pose significant threats could go unrecognized and unmitigated. If there are pressures to deliver a capability within a short period of time, programs may be incentivized to overlap development and production activities or delete tests, which could result in late discovery of significant technical problems that require more money and ultimately much more time to address. For these reasons, it is imperative that launch vehicle development efforts adopt disciplined practices and lessons learned from past programs. Best practices for acquisition programs indicate that establishing baselines that match cost and schedule resources to requirements and rationally balancing cost, schedule, and performance is a key step in establishing a successful acquisition program. Our work has also shown that validating this match before committing resources to development helps to mitigate the risks inherent in NASA’s programs.
We have reported that within NASA’s acquisition life cycle, resources should be matched to requirements at key decision point (KDP)-C, the review that commits the program to formal cost and schedule baselines and marks the transition from the formulation phase into the implementation phase, as seen in figure 1 below. The SLS program completed its KDP-C review in August 2014, GSDO completed its KDP-C review in September 2014, and the KDP-C review for Orion is currently scheduled for May 2015. NASA has taken positive steps to address specific concerns we raised in July 2014 regarding aggressive schedules and insufficient funding by establishing the SLS program’s committed launch readiness date as November 2018—almost a year later than originally planned. Specifically, we reported in July 2014 that NASA had yet to establish baselines that matched the SLS program’s cost and schedule resources with the requirement to develop the SLS and launch the first flight test in December 2017 at the required confidence level of 70 percent. NASA policy generally requires a 70 percent joint confidence level—a calculation NASA uses to estimate the probable success of a program meeting its cost and schedule targets—for a program to proceed with final design and fabrication. At the time of our July 2014 report, NASA had delayed its review to formally commit the agency to cost and schedule baselines for SLS from October 2013, as the agency considered future funding plans for the program. At that time, the agency’s funding plan for SLS was insufficient to match requirements to resources for the December 2017 flight test at the 70 percent joint confidence level and the agency’s options for matching resources to requirements were largely limited to increasing program funding, delaying the schedule, or accepting a reduced confidence level for the initial flight test. 
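A joint cost and schedule confidence level of the kind described above is typically produced by simulation: many possible program outcomes are sampled, and the confidence level is the share of outcomes that finish within both the cost budget and the schedule target. The sketch below is a minimal, hypothetical illustration of that idea, not NASA's actual JCL model; all dollar figures, durations, and uncertainty assumptions are invented for illustration.

```python
import random

def joint_confidence_level(cost_budget, schedule_months, trials=100_000, seed=1):
    """Estimate the probability that a program finishes within BOTH its
    cost budget ($B) and its schedule target (months), via Monte Carlo.
    Distributions and parameters here are purely illustrative."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        # Cost and schedule overruns tend to move together, so both
        # outcomes share a common risk factor plus independent noise.
        shared = random.gauss(0, 1)
        cost = 7.0 * (1 + 0.15 * shared + 0.10 * random.gauss(0, 1))
        months = 60 * (1 + 0.10 * shared + 0.08 * random.gauss(0, 1))
        if cost <= cost_budget and months <= schedule_months:
            hits += 1
    return hits / trials

# A later committed date accepts more simulated outcomes, so the joint
# confidence level rises; a tighter internal target date lowers it.
print(joint_confidence_level(cost_budget=8.0, schedule_months=66))
print(joint_confidence_level(cost_budget=8.0, schedule_months=60))
```

This is why moving a committed launch readiness date later (as with the slip from December 2017 to November 2018) can raise a program's confidence level without any change in its funding: the same distribution of simulated outcomes is simply being measured against a looser schedule threshold.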
We have previously reported that it is important for NASA to budget projects to appropriate confidence levels, as past studies have linked cost growth to insufficient reserves, poorly phased funding profiles, and more generally, optimistic estimating practices. We found that NASA’s proposed funding levels had affected the SLS program’s ability to match requirements to resources since its inception. NASA has requested relatively consistent amounts of funding of about $1.4 billion each year since 2012. According to agency officials, the program has taken steps to operate within that flat funding profile, including streamlining program office operations and asking each contractor to identify efficiencies in its production processes. Even so, according to the program’s own analysis, going into the agency review to formally set baselines, SLS’s top risk was that the current planned budget through 2017 would be insufficient to allow the SLS as designed to meet the EM-1 flight date. The SLS program office calculated the risk associated with insufficient funding through 2017 as 90 percent likely to occur; furthermore, it indicated the insufficient budget could push the December 2017 launch date out 6 months and add some $400 million to the overall cost of SLS development. The cost risk was considerably greater than $400 million in the past, but according to program officials they were able to reduce the effect because the program received more funding than requested in fiscal years 2013 and 2014. Similarly, our ongoing work on human exploration programs has found that the Orion program is currently tracking a funding risk that the program could require an additional $560 million to $840 million to meet the December 2017 EM-1 flight date. However, the agency has yet to complete the review that sets formal cost or schedule baselines for the Orion program. At this time, we have not conducted enough in-depth work on the GSDO program to comment on any specific risks the program is tracking.
In our July 2014 report we recommended, among other things, that NASA develop baselines for SLS based on matching cost and schedule resources to requirements that would result in a level of risk commensurate with its policies. NASA concurred with our findings and recommendations. In August 2014, NASA established formal cost and schedule baselines for the SLS program at the 70 percent joint confidence level for a committed launch readiness date of November 2018. Nevertheless, the program plans to continue to pursue an initial capability of SLS by December 2017 as an internal goal and has calculated a joint cost and schedule confidence level of 30 percent associated with that date. As illustrated by table 1 below, the SLS and GSDO programs are pursuing ambitious and varying target dates for the EM-1 test flight. In addition, the Orion program is currently tracking and reporting to December 2017. The agency acknowledges differences in the target dates the programs are pursuing and has indicated that it will develop an integrated target launch date after all three systems hold their individual critical design reviews. The SLS program has assigned a low confidence level—30 percent— associated with meeting the program’s internal target date of December 2017. Even if SLS does meet that goal, however, it is unlikely that both Orion and GSDO will achieve launch readiness by that point. For example, the GSDO program only has a 30 percent confidence level associated with a later June 2018 date. Additionally, the Orion program is currently behind its planned schedule and is facing significant technical risks and officials indicated that the program will not achieve launch readiness by December 2017. The Orion program has submitted a schedule to NASA headquarters that indicates the program is now developing plans for a September 2018 EM-1 launch, though that date is preliminary until the program establishes official cost and schedule baselines now planned for May 2015. 
With the Orion and GSDO programs likely unable to meet the December 2017 date, NASA risks exhausting limited human exploration resources to achieve an aggressive SLS program schedule when those resources may be needed to resolve other issues within the human exploration effort. In other work, we have reported that in pursuing internal schedule goals, some programs have exhausted cost reserves, which has resulted in the need for additional funding to support the agency baseline commitment date once the target date is not achieved. NASA’s urgency to complete development and demonstrate a human launch capability as soon as possible is understandable. The United States has lacked the ability to launch humans into space since the last flight of the Space Shuttle in July 2011 and the initial goal from Congress was that NASA demonstrate a new human launch capability by 2016. Also, the SLS and GSDO programs have already slipped their committed launch readiness dates to November 2018, and Orion appears likely to follow suit. While these delays were appropriate actions on the agency’s part to reduce risk, their compounding effect could have impacts on the first crewed flight—EM-2—currently scheduled for 2021. We reported in July 2014 that NASA’s metrics indicated the SLS program was on track to meet many of its design goals for demonstrating the initial capability of SLS. However, we found that the development of the core stage—SLS’s fuel tank and structural backbone—represents the critical path of activities that must be completed to maintain the program’s schedule as a whole. The core stage development had an aggressive schedule in order to meet the planned December 2017 first test flight. For example, the core stage had threats of nearly 5 months to its schedule due to difficulty acquiring liquid oxygen fuel lines capable of meeting SLS operational requirements. 
The aggressiveness of, and therefore the risk associated with, the core stage schedule were reduced when the agency delayed its commitment for initial capability of SLS until November 2018. With SLS continuing to pursue a target date of December 2017, however, the aggressive core stage schedule remains a risk. Further, we reported that the program faced challenges integrating heritage hardware, which was designed for less stressful operational environments, into the SLS design. We found that these issues were not significant schedule drivers for the program as each had, and continues to have, significant amounts of schedule reserve to both the target and agency baseline commitment dates for launch readiness. The Orion program just completed its first experimental test flight—EFT-1. This flight tested Orion systems critical to crew safety, such as heat shield performance, separation events, avionics and software performance, attitude control and guidance, parachute deployment, and recovery operations. According to NASA, the data gathered during the flight will influence design decisions and validate existing computer models. Data from this flight are required to address several significant risks that the Orion program is currently tracking that must be addressed before humans can be flown on Orion. Specifically, our ongoing work indicates that the Orion program passed its preliminary design review—a review that evaluates the adequacy of cost, schedule, and technical baselines and whether the program is ready to move forward—in August 2014 by meeting the minimum standards for all 10 success criteria. For 7 of the 10 success criteria, however, review officials highlighted known issues that could compromise Orion’s success. Specifically, the review officials noted concerns about several unresolved design risks, including technical challenges with the parachute system and heat shield.
For example, during parachute testing, NASA discovered that when only two of the three main parachutes are deployed, they begin to swing past each other creating a “pendulum” effect. This effect could cause the capsule to increase speed and to hit the water at an angle that may damage the capsule thereby endangering the crew. Further, NASA faces choices between differing design solutions to resolve cracking issues discovered during manufacturing of the heat shield that protects the capsule during re-entry. Program officials plan to make a decision prior to the program’s critical design review, based on additional testing and analysis, about how to resolve these risks with a goal of limiting design changes to the capsule’s structure. Both the parachute and heat shield challenges must be resolved before EM-2 because each represents a significant risk to crew safety. Significant cost and schedule impacts could result if a redesign is required to address any of these unresolved design risks. NASA has yet to address our concerns regarding mission planning or life-cycle cost estimates. NASA has not yet defined specific mission requirements for any variant of the SLS. The two currently scheduled flights are developmental test flights designed to demonstrate and test the capabilities of the 70-mt launch vehicle and the capability of the core stage in particular. Office of Management and Budget guidance indicates that agencies should develop long-range objectives, supported by detailed budgets and plans that identify the agency’s performance gaps and the resources needed to close them. With mission requirements unspecified, NASA has not yet finalized plans for the next step in evolving the SLS and risks investing limited available resources in systems and designs that are not yet needed and missing opportunities to make early investments in developing systems that may be needed in the future.
According to agency officials, beyond the two scheduled test flights, future mission destinations remain uncertain. In the absence of specific mission requirements, officials indicated the SLS program is developing current and future variants based on top-level requirements derived from NASA’s Design Reference Architectures for conducting missions in line with the agency’s strategic plan. NASA’s 2014 strategic plan, for example, identifies sending humans to Mars as one of the agency’s long-term goals; in turn, the agency’s Mars Design Reference Architecture indicates that multiple missions using a vehicle with a lift capability of about 130-mt will be necessary to support that goal. We recommended based on these findings that NASA define a range of possible missions beyond the second test flight and introduce increased competition in the acquisition of hardware needed for future variants to reduce long-term costs. The agency concurred with our recommendations, but has not yet taken specific actions to address our concerns. The long-term affordability of the human exploration programs is also uncertain, as we found in May 2014, because NASA’s cost estimates for the programs do not provide any information about the longer-term, life-cycle costs of developing, manufacturing, and operating the launch vehicles. NASA's cost estimate for SLS does not cover program costs after EM-1 or costs to design, develop, build, and produce the 105- or 130-mt variants. Though the subsequent variants will evolve from the first variant, they each represent substantial, challenging development efforts and will require billions of additional dollars to complete. For example, the 105-mt vehicle will require development of a new upper stage and upper stage engine or the development of advanced boosters, either of which will be significant efforts for the program. If you or your staff have any questions about this testimony, please contact Cristina T.
Chaplain, Director, Acquisition and Sourcing Management at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Shelby S. Oakley, Assistant Director; Jennifer Echard; Laura Greifner; Sylvia Schatz; Ryan Stott; Ozzy Trevino; Kristin Van Wychen; and John S. Warren, Jr. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
NASA is undertaking a trio of closely related programs to continue human space exploration beyond low-Earth orbit: the SLS vehicle; the Orion capsule, which will launch atop the SLS and carry astronauts; and GSDO, the supporting ground systems. As a whole, the efforts represent NASA's largest exploration investment over the next decade, approaching $23 billion, to demonstrate initial capabilities. In May 2014, GAO found that NASA's preliminary life-cycle cost estimates for human exploration were incomplete and recommended that NASA establish life-cycle cost and schedule baselines for each upgraded block of SLS, Orion, and GSDO; NASA partially concurred. In July 2014, GAO issued a report on SLS's progress toward its first test flight and recommended that NASA match SLS's resources to its requirements and define specific missions beyond the second test flight, among other actions. NASA concurred with these recommendations. This testimony is based on GAO's May 2014 report (GAO-14-385), July 2014 report (GAO-14-631), and ongoing audit work related to SLS and Orion. It discusses NASA's efforts to match resources to requirements for the SLS program and developmental challenges facing the SLS and Orion programs. To conduct this work, GAO reviewed relevant design, development, cost, and schedule documents and interviewed program officials. In 2014, GAO reported on a number of issues related to the National Aeronautics and Space Administration's (NASA) human exploration programs: the Space Launch System (SLS) vehicle, the Orion Multi-Purpose Crew Vehicle (Orion), and the Ground Systems Development and Operations (GSDO). For example, in July 2014, GAO found that NASA had not matched resources to requirements for the SLS program and was pursuing an aggressive development schedule—a situation compounded by the agency's reluctance to request funding commensurate with the program's needs.
In August 2014, NASA established formal cost and schedule baselines for the SLS program at the agency-required 70 percent joint cost and schedule confidence level (JCL), which satisfied one recommendation from GAO's July 2014 report. The JCL is a calculation NASA uses to estimate the probable success of a program meeting its cost and schedule targets. To satisfy the 70 percent JCL requirement, the SLS program delayed its committed launch readiness date for its first test flight from December 2017 to November 2018. The program is still pursuing December 2017 as an internal goal, or target date, for the test flight, even though NASA calculated the JCL associated with launching SLS on this date at 30 percent. Moreover, neither the Orion nor GSDO program expects to be ready for the December 2017 launch date. With these programs likely unable to meet the December 2017 date, NASA risks exhausting limited human exploration resources to achieve an accelerated SLS program schedule when those resources may be needed to resolve challenges on other human exploration programs. In addition, GAO's ongoing work has found that the Orion program is facing significant technical and funding issues. Orion just completed its first test flight, and data from this flight are required to address several risks that must be resolved before the second test flight in 2021 because they represent risks to crew safety. For example, during parachute testing, NASA discovered that when only two of the three main parachutes are deployed, they begin to swing past each other creating a “pendulum” effect. This effect could cause the capsule to increase speed and to hit the water at an angle that may damage the capsule, thereby endangering the crew. In addition, data from the test are necessary to inform NASA's design solution to address heat shield cracking issues, which NASA has been working to resolve since August 2013. The heat shield is integral to crew safety during re-entry.
The most familiar part of USPS’s retail network is the post office. In fiscal year 2015, there were approximately 26,600 post offices across the country, largely unchanged from fiscal year 2005 (see fig. 1). Post offices are a key part of USPS’s revenue stream—accounting for about 56 percent of USPS’s total retail revenue of about $19 billion in fiscal year 2015. Prior to the introduction of POStPlan, post offices were each managed by postmasters. USPS also uses other facilities to provide key services, such as selling stamps. Over the past decade, the USPS workforce has declined and changed in composition, but continues to account for almost 80 percent of USPS’s total operating costs ($58 of $74 billion in fiscal year 2015). From fiscal years 2005 to 2015, USPS’s workforce decreased from 803,000 to approximately 622,000 employees, or by about 23 percent (see fig. 2). During this period, career employees decreased (from approximately 704,700 to 491,900, or by about 30 percent), while non-career employees increased (from approximately 98,300 to 130,000, or by about 32 percent). Career positions—which are generally full time but also may be part-time—are eligible for annual and sick leave, health insurance, life insurance, and retirement benefits. Non-career employees supplement the career workforce and receive lower wages. They are not eligible for life insurance or retirement benefits, but some are eligible for specified types of health insurance upon hiring while others are eligible after serving at least 1 year. About 90 percent of USPS’s career employees—and some types of non-career employees, such as Postal Support Employees—are covered by collective bargaining agreements and represented through unions.
APWU, one of USPS’s largest unions, represents over 200,000 USPS employees in the clerk, maintenance, motor vehicle, and support services employee “crafts.” The USPS-APWU 2010-2015 Collective Bargaining Agreement (CBA) contains various provisions that specify rules associated with the performance of bargaining-unit work (such as staffing the retail window and placing mail in customers’ post office boxes) by USPS employees. For example, the agreement specifies that USPS should assign new or revised positions that contain non-supervisory duties to the most appropriate employee craft and that USPS should consult with APWU before doing so. Two associations represent USPS’s postmasters, who are not covered by CBAs: NAPUS and NLPM. USPS is required to consult with these associations on planning, developing, and implementing certain programs and policies—like POStPlan—that affect them. In May 2012, USPS announced the POStPlan initiative. POStPlan sought to right-size USPS’s retail network of—at the time—26,703 post offices. Generally, POStPlan had two elements: reduce retail window service hours at some offices to better match actual customer use, and change the staffing arrangements at those offices to reduce labor costs. According to USPS officials, they informed APWU of POStPlan in May 2012, after announcing the initiative. To evaluate which offices may be appropriate for hour reductions, in December 2011, USPS analyzed the daily workload—as a proxy for customer use—at 17,728 offices. Through this analysis, USPS determined that it could reduce hours at 13,167 of these offices from 8 to 2, 4, or 6 hours of retail service a day. Post offices are classified into “levels” and, under POStPlan, these reduced-hour offices would be classified into a new set of levels that correspond with the number of hours of retail service they would provide per day (i.e., Level 2, Level 4, and Level 6).
USPS also determined that the remaining 4,561 offices it analyzed should continue to provide 8 hours of retail service a day; USPS classified these offices as Level 18 offices. USPS planned for most of the reduced-hour offices to be managed remotely. That is, under POStPlan, Level 2, 4, and 6 offices would be considered “remotely managed post offices” (RMPO) and they would report to a postmaster at a Level 18 or above “administrative” post office. USPS created an exception for offices it considered especially isolated. These offices would not be remotely managed and would, instead, be called “part time post offices” (PTPO); all PTPOs would be Level 6 offices. According to USPS officials, Level 2, 4, and 6 RMPOs and PTPOs are the “POStPlan post offices;” Level 18 or above offices are not considered POStPlan post offices. USPS plans to review workloads at POStPlan RMPOs annually and, based on these reviews, may increase or decrease the number of hours of retail service at these offices. USPS also plans to review the workload at the Level 18 and above offices through USPS’s separate, pre-POStPlan processes, and based on the results, USPS may designate any qualifying office a POStPlan post office and reduce its hours accordingly if its workload justifies a reduction in hours. Regarding the staffing arrangements at these offices, USPS planned to replace career postmasters in the POStPlan post offices with less costly non-career or part-time employees, as shown in fig. 3. Level 18 offices would continue to be staffed by career, full-time postmasters. On July 9, 2012, APWU filed a labor grievance claiming the changes introduced by POStPlan violated provisions of the USPS-APWU 2010-2015 CBA. USPS officials said they had the authority to modify the POStPlan initiative during the grievance procedure but decided to proceed with POStPlan implementation because they believed it was the proper operational decision for its customers, employees, and USPS.
As a result, USPS continued with POStPlan implementation until September 2014, when—as discussed later in this report—an independent arbitrator issued a decision that resolved the grievance. Prior to the issuance of the POStPlan arbitration decision in September 2014, USPS had taken steps to reduce hours at almost three-quarters of POStPlan post offices. After announcing POStPlan in May 2012, USPS began implementation by reviewing its determinations on: (1) which offices would have reduced hours, (2) which were considered especially isolated, (3) which would be reclassified as Level 18, and (4) which would become administrative offices. In July 2012, USPS finalized those decisions and communicated the results to relevant field personnel, who had the opportunity to advise on any potential concerns that could not be identified at the USPS headquarters level. In September 2012, USPS began surveying residents of the affected communities to give them an opportunity to provide input before reducing their office’s hours. The survey asked whether they preferred USPS continue with its plan to reduce hours or whether they preferred USPS close their office and institute alternatives, such as relocating post office box service to a nearby office. In October 2012, USPS began holding meetings in the communities to communicate the survey results and consider feedback. Thereafter, USPS continued to conduct meetings and reduce hours at offices on a rolling basis, with the first reductions occurring in November 2012 and most occurring within the first year of POStPlan’s announcement (see fig. 4). Specifically, from November 2012 through August 2014, USPS reduced hours at 9,159 post offices, or at about 72 percent of the almost 12,800 that would ultimately have hours reduced under POStPlan. 
According to USPS officials, they implemented POStPlan on a rolling basis to make building modifications to some offices (to ensure that customers could maintain access to their post office box even with reduced hours) and to minimize the effect on POStPlan-affected postmasters. For example, implementing POStPlan on a rolling basis allowed affected postmasters more time to find reassignment opportunities, as described below. In addition to reducing hours at over 9,000 of the POStPlan post offices, USPS simultaneously took steps to make the necessary staffing changes and provide options for postmasters to separate from USPS or be reassigned to other positions ahead of a planned “reduction in force” (RIF). USPS announced a $20,000 separation incentive offer for all postmasters in May 2012, followed by a $10,000 offer in July 2014 to those POStPlan-affected postmasters who did not accept the first incentive offer. In May 2012, USPS also began periodically posting vacancies that POStPlan-affected postmasters could apply to, such as positions that became available as postmasters retired through the May 2012 separation incentive. Postmasters in offices set to become Level 6 offices could also opt to remain in their office and accept a demotion to the new, part-time position. According to USPS officials, as postmasters separated from USPS or accepted reassignments, USPS filled the positions according to its new POStPlan staffing arrangements. USPS initially intended to complete POStPlan implementation by September 2014, with any POStPlan-affected postmasters who had not separated from USPS or been reassigned to an alternate position as of this date to be separated via RIF. However, USPS extended this deadline twice during implementation—first to January 2015 and then to February 2015—in order, according to USPS officials, to find reassignment opportunities for as many POStPlan-affected postmasters as possible.
By September 2014, about 4,100 POStPlan-affected postmasters had separated from USPS and about 5,800 had been reassigned to a different position. In July 2012, USPS estimated it would achieve $516 million annually in labor cost savings once POStPlan had been fully implemented for a complete year (that is, once retail hours had been adjusted in all POStPlan post offices). Given that USPS originally intended to complete implementation by September 2014, this means the program would have been implemented for a complete year in September 2015, with full annual cost savings beginning in fiscal year 2016. To develop this estimate, USPS calculated “before POStPlan” and “after POStPlan” labor costs at the approximately 13,000 POStPlan post offices and at the Level 18 offices using average salary and benefits data as of pay period 6 of fiscal year 2012. To arrive at the “before POStPlan” labor cost, USPS multiplied the number of post offices at each applicable, pre-POStPlan office level by the average salary and benefits that career postmasters at those levels earn, then totaled the results. To arrive at the “after POStPlan” labor cost, USPS multiplied the number of offices at each post-POStPlan office level by the projected salary and benefits it expected for employees that would staff those offices (based on the new POStPlan staffing arrangements), then totaled the results. The $516 million represents the difference between these “before” and “after” calculations. In June 2015, USPS revised this original estimate to $518 million in annual labor cost savings based on: (1) the actual savings it estimated it achieved from fiscal years 2012 to 2014, (2) the remaining savings it expected to achieve from offices whose hours had been reduced in the prior year, and (3) the savings it expected to achieve from offices whose hours had not yet been reduced.
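USPS’s before-and-after method amounts to a pair of weighted sums. The sketch below illustrates the arithmetic with entirely hypothetical office counts and compensation figures; the report gives only the $516 million result, not the per-level inputs.

```python
def annual_labor_cost(offices_by_level, comp_by_level):
    """Total annual labor cost: number of offices at each level times the
    average salary-plus-benefits of the employee type staffing that level."""
    return sum(n * comp_by_level[level] for level, n in offices_by_level.items())

# Hypothetical figures, for illustration only.
before = annual_labor_cost(
    {"L11": 6000, "L13": 5000, "L15": 2000},        # pre-POStPlan office levels
    {"L11": 60_000, "L13": 68_000, "L15": 75_000},  # avg. career postmaster comp
)
after = annual_labor_cost(
    {"L2": 3000, "L4": 7000, "L6": 3000},           # POStPlan office levels
    {"L2": 8_000, "L4": 16_000, "L6": 33_000},      # part-time/non-career comp
)
estimated_savings = before - after  # USPS's actual figure was $516 million
```

As PRC’s critique in this report suggests, the method is only as precise as its inputs: using one average compensation figure per level hides the within-level salary spread that can shift the total by hundreds of millions of dollars across thousands of offices.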
On September 5, 2014, an impartial arbitrator resolved APWU’s POStPlan grievance and ruled that the staffing changes introduced by POStPlan violated certain provisions of the USPS-APWU 2010-2015 CBA, and that USPS must reverse several of these changes. The arbitrator agreed with APWU’s argument that, under POStPlan, employees in Level 4 and 6 RMPOs were no longer performing any managerial or supervisory work and also that the work was clerical in nature and should be assigned to bargaining-unit employees. As a result, according to USPS officials, the arbitration decision significantly changed staffing in these offices, which account for about 82 percent of POStPlan post offices as of August 2015, by awarding all non-bargaining-unit positions in them to APWU-represented employees. The arbitrator’s decision on staffing in Level 4 RMPOs also affected the resolution of a separate dispute. Specifically, in the POStPlan arbitration decision, the arbitrator also ruled on a dispute regarding the type of work assignments that staff in Level 18 offices could perform, finding certain Level 18 offices must be staffed by a career employee (see fig. 5). USPS continued to modify hours at POStPlan post offices as these changes were taking place. According to USPS officials, subsequent memorandums of understanding between USPS and APWU mitigated some of what the officials believe could have been potentially negative effects of the arbitration decision. According to USPS officials, as of February 2016, staffing changes related to POStPlan and the arbitration decision are complete. USPS, NAPUS, and NLPM officials told us that managing employee work rules under the post-arbitration staffing arrangements is more complex than under the original POStPlan staffing arrangements. They noted that this is because each employee category has different work rules to manage and there were fewer employee categories under the original POStPlan staffing arrangements.
USPS estimated that, due to the arbitration decision, annual POStPlan cost savings will be lower than originally expected. Specifically, in June 2015, USPS estimated that the decision will reduce estimated annual cost savings by $181 million, which is approximately 35 percent less than the revised estimate of $518 million. As a result, USPS projected that POStPlan will now result in total annual labor cost savings of about $337 million. To develop the estimate of the impact from the arbitration decision, USPS used a slightly different approach than it had used to develop its original cost-savings estimate. Specifically, USPS calculated the difference between the hourly salary and benefit rates for employees in the Level 4 and 6 POStPlan post offices under the original, pre-arbitration POStPlan staffing arrangements and under the post-arbitration POStPlan staffing arrangements. It then multiplied the rate differences by the total hours worked per year at the applicable offices and totaled the results. This resulted in a difference of $181 million. USPS then subtracted the $181 million from the $518 million in annual savings it expected to achieve to arrive at the revised estimated annual savings of $337 million. According to USPS officials, USPS developed this estimate using a different approach from its original POStPlan cost-savings estimate because the arbitration decision resulted in a new labor type and rate and USPS believed this was the most logical method to factor in the arbitrator’s decision. USPS attributes the reduced cost savings to the higher compensation employees receive in the POStPlan post offices under the post-arbitration decision staffing arrangements relative to the compensation these employees would have received under the original, pre-arbitration, staffing arrangements, as shown in fig. 6. 
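USPS’s post-arbitration method described above can be sketched as follows. The hourly rates and annual hours passed to `savings_impact` are hypothetical placeholders (the report does not publish the per-category inputs), while the final subtraction uses USPS’s reported totals.

```python
def savings_impact(rates_pre, rates_post, annual_hours):
    """USPS's post-arbitration method: for each affected office category,
    multiply the hourly salary-and-benefits difference between the
    post-arbitration and pre-arbitration staffing arrangements by total
    hours worked per year, then sum across categories."""
    return sum((post - pre) * hrs
               for pre, post, hrs in zip(rates_pre, rates_post, annual_hours))

# Hypothetical rates ($/hr) and annual hours for Level 4 and Level 6 RMPOs:
impact = savings_impact(rates_pre=[14.87, 21.17],
                        rates_post=[20.00, 26.00],
                        annual_hours=[9_000_000, 5_000_000])

# Applying USPS's reported totals (in $ millions):
revised_savings = 518 - 181  # the $337 million figure USPS reported
```

The $14.87 and $21.17 pre-arbitration rates are the PMR and part-time postmaster figures cited elsewhere in this report; the post-arbitration rates and hours here are invented solely to show the shape of the calculation.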
USPS officials told us that while the arbitration decision reduced the cost savings it expected to achieve, POStPlan was still the correct operational decision for USPS and its stakeholders. We reviewed USPS’s 2012 original POStPlan cost-savings estimate and 2015 estimate of the arbitration decision’s impact on cost savings and found that while POStPlan most likely resulted in some cost savings, the estimates have limitations that affect their reliability. Specifically, the limitations include: (1) imprecise and incomplete labor costs, including errors in the underlying data that affect the accuracy of calculations of actual savings achieved; (2) lack of a sensitivity review; and (3) the exclusion of other factors that would be necessary to consider the net cost savings of the POStPlan initiative, particularly the potential impact of reduced hours on retail revenue. Our guidance on assessing data reliability states that reliable data, which include estimates and projections, can be characterized as being accurate, valid, and complete. For example, accurate data appropriately reflect the actual underlying information, valid data actually represent what is being measured, and complete data appropriately include all relevant information. Data should also be consistent, a subset of accuracy. Consistency can be impaired when there is an inconsistent interpretation of what data should be entered. Internal control standards adopted by USPS also state that program managers and decision makers need complete and accurate data to determine whether they are meeting their goals, and that they should use quality information to make informed decisions and evaluate an entity’s performance in achieving key objectives and addressing risks. These standards also note that the ability to generate quality information begins with the data used. 
While USPS’s original estimate of the savings it expected to achieve from POStPlan clearly states that it accounts for labor costs only, we found that the salary and benefits information that USPS used to calculate these labor costs was imprecise, and this imprecision contributes to inaccuracies in the estimate. For example: When calculating the “before POStPlan” labor costs, USPS used average postmaster salaries and benefits and, when calculating the “after POStPlan” costs, sometimes used the salary and benefits of newly hired postmasters and in other instances used the salary and benefits of incumbent postmasters. In a POStPlan advisory opinion, PRC noted that using an average postmaster salary is imprecise; that salaries at post offices vary, on average, by as much as $20,000 from the lowest to the highest salary; and that these variations can add up considerably when thousands of offices are considered. Although USPS used average postmaster salaries and benefits for the “before POStPlan” labor costs, approximately 3,100 of the post offices included in the calculation were not being staffed by postmasters. These offices were being staffed by other types of employees, such as non-postmasters designated as “Officers in Charge,” whose salaries were generally lower. In the POStPlan advisory opinion, PRC estimated that if it assumed salaries at these offices were at a level more representative of these other types of employees, the annual cost savings would be $386 million, not $516 million. In addition, when calculating certain “after POStPlan” labor costs, USPS assumed salaries above the minimum salary for that grade, a difference of as much as $25,000. In the POStPlan advisory opinion, PRC explained that this may have overstated these costs and estimated that if these assumptions were corrected, the annual cost savings would be $704 million, not $516 million.
USPS included about 100 post offices that were actually closed or suspended in its calculation of labor costs despite stating that suspended offices were not part of POStPlan, that it would not re-visit closed offices’ status, and that there were no plans to reopen these offices. In its POStPlan advisory opinion, PRC estimated that the cost savings would be $513 million, not $516 million, if USPS excluded these offices. Similar to the original POStPlan cost-savings estimate, USPS’s estimate of the arbitration decision’s impact on cost savings has limitations related to imprecise labor costs, which, as noted above, contribute to inaccuracies. For example: USPS used a single, proxy employee category and hourly rate to represent all employees under the pre-arbitration POStPlan staffing arrangements, rather than the actual different rates these employees would have received, as described above. USPS used this proxy although it had the actual rates, and none of the actual rates matched the proxy rate. USPS included all Level 6 post offices and their associated positions’ labor costs in its estimate. However, the arbitration decision did not affect the Level 6 PTPOs. This is inconsistent with how USPS treated Level 2 RMPOs in the estimate. These RMPOs were also not affected by the arbitration decision. Removing the Level 6 PTPOs from the estimate reduces the impact from about $181 million to about $170 million, meaning the revised savings would have been $348 million, not $337 million. USPS’s post-arbitration decision estimate of $337 million in expected annual cost savings relies, in part, on USPS calculations of actual savings achieved due to POStPlan, but the accuracy of these actual savings calculations may be limited by errors in the underlying salaries and benefits data used to develop them. As described above, to arrive at $337 million, USPS subtracted the $181-million impact it calculated from the revised estimate of $518 million it developed in June 2015.
Also as noted above, USPS developed that $518 million estimate in part by considering the actual savings it achieved from fiscal years 2012 to 2014. However, we found errors in USPS’s salaries and benefits data that, according to USPS officials as of March 2016, may have been caused by employees’ workhours being incorrectly recorded when employees worked in more than one office. We found that these errors would result in some offices’ salaries and benefits being understated, and others being overstated. While understated and overstated costs at individual offices would likely offset each other in aggregate (i.e., when costs at all offices, either POStPlan or non-POStPlan, were considered), they do not offset when analyzing costs at just POStPlan post offices. Given that, according to USPS, its calculations of actual savings achieved consider costs at POStPlan—but not non-POStPlan—offices, the calculations may be limited by these errors. Additionally, according to USPS as of October 2015, thus far it has saved $306 million in labor costs from fiscal year 2012 to June 2015 as a result of POStPlan. Although POStPlan most likely resulted in cost savings because of the overall reduction in work hours at thousands of post offices, the accuracy of these calculated savings may also be limited by these errors. USPS’s calculation of labor costs in both its original and post-arbitration decision estimates was also incomplete. A full estimate of labor costs might have included additional labor cost elements. For example: USPS’s original estimate did not include costs associated with the addition of supervisors at the Level 18 or above offices that remotely manage the POStPlan post offices due to their increased supervisory workload. Specifically, according to USPS officials, USPS added about 320 such positions, though not all as a result of POStPlan, and the average hourly pay for supervisors as of August 2015 was $48.73.
USPS’s original estimate did not include one-time labor costs associated with separation incentives USPS offered to postmasters. According to USPS officials, acceptance of these separation incentives by POStPlan-affected postmasters cost USPS about $69 million. USPS’s estimate of the arbitration decision’s impact on cost savings excluded the potential cost impact of staffing changes in Level 18 post offices. Although USPS officials have stated that Level 18 offices are not part of POStPlan, the arbitration decision and a September 2014 memorandum of understanding that further implemented it required that a certain type of position staffing Level 18 offices be changed to a bargaining-unit clerk position. Our cost-estimating best practices state that sensitivity analysis should generally be conducted when estimating costs, especially if changes in key assumptions would likely have a significant effect on the estimate. Sensitivity analyses identify a range of possible cost estimates by varying major assumptions, parameters, and inputs to enable an understanding of the impact altered assumptions have on estimated costs. This can also help managers and decision makers identify risk areas and relevant program alternatives. Since uncertainty cannot be avoided, it is necessary to identify the elements that represent the most risk, which can be done through sensitivity analysis. In developing its estimates, USPS did not conduct a sensitivity analysis to determine what would happen to estimated costs and savings should key assumptions it was making under POStPlan vary. For example, USPS officials told us that they recognized the possibility that APWU would challenge the planned staffing arrangements at POStPlan post offices.
Despite this statement, in its original cost-savings estimate, USPS did not analyze the sensitivity of POStPlan labor costs to alternative staffing arrangements that might have been more in line with APWU’s views on the staffing provisions specified in the USPS-APWU 2010-2015 CBA. USPS officials explained that they believed that savings associated with reduced hours at POStPlan post offices would significantly outweigh any reduction in savings should an arbitrator rule in APWU’s favor. Similarly, USPS did not analyze the sensitivity of its estimated savings to possible changes in the benefits offered to USPS employees. For example, when calculating the salary and benefits of Postmaster Reliefs (PMR)—the employees expected to staff Level 2 and 4 RMPOs—USPS assumed that the only benefit they were eligible for was 1 hour of annual leave for every 20 hours worked. However, in 2014, USPS began providing health coverage for PMRs who meet the requirements of the 2010 Affordable Care Act. Additionally, in both its estimates, USPS did not consider that staffing at offices may continue to change based on the workload re-evaluations it plans to conduct. For example, under the original POStPlan staffing arrangements, a Level 4 RMPO staffed by a PMR earning $14.87 per hour could become a Level 6 RMPO staffed by a part-time postmaster earning $21.17 per hour if, after a re-evaluation of the office’s workload, USPS determines that the office’s workload has increased enough to justify a Level 6 classification. Thus, the number of offices at each level might continue to increase or decrease year after year. This also means that although USPS refers to its estimates as estimates of the “annual” savings it will achieve upon full POStPlan implementation, only a single-year estimate of savings can be produced at any given time, unless and until estimates of potential staffing changes in future years can be made.
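A sensitivity analysis of the kind GAO’s best practices describe can be as simple as recomputing labor costs under an alternative staffing assumption and examining the swing. In the sketch below, the two wage rates ($14.87 per hour for a PMR and $21.17 per hour for a part-time postmaster) come from this report; the office count and annual hours per office are hypothetical.

```python
def annual_cost(hourly_rate, offices, hours_per_office):
    """Annual labor cost for a group of offices staffed at one hourly rate."""
    return hourly_rate * offices * hours_per_office

# Level 4 RMPOs under two staffing assumptions (7,000 offices and
# 1,040 hours per office per year are hypothetical inputs):
baseline = annual_cost(14.87, offices=7000, hours_per_office=1040)  # PMR staffing
upside = annual_cost(21.17, offices=7000, hours_per_office=1040)    # part-time postmaster

# The swing is the amount by which the savings estimate could erode if
# the staffing assumption proved wrong, e.g., via arbitration or
# workload re-evaluations that reclassify offices upward.
swing = upside - baseline
```

Repeating this for each uncertain input (staffing category, benefit eligibility, office-level mix) would have produced the range of possible savings that a sensitivity analysis is meant to expose.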
OMB cost-estimating guidance states that agencies should determine whether an activity’s benefits (savings) also take into account the costs incurred to implement it. That is, the guidance suggests that it is the net benefit, or in this case, the net cost savings that should be considered. However, USPS’s estimate did not include certain factors that could affect the net cost savings of the POStPlan initiative. In particular, USPS’s original estimate did not include an analysis of the extent to which reduced hours at POStPlan post offices could affect revenue at those offices and across USPS. That is, it did not fully consider any offsetting financial losses that should be weighed against estimated savings. In July 2012, USPS testified to PRC that it did not anticipate losing revenue due to POStPlan, though it had not conducted a financial analysis to support this statement. Specifically, as described below, USPS expected any revenue lost at POStPlan post offices to be absorbed elsewhere. Despite this assumption, in its POStPlan advisory opinion, PRC stated that it was concerned that reduced retail hours may lead to reduced revenue and recommended that USPS undertake a post-implementation review of POStPlan to measure changes in revenue at POStPlan post offices. In September 2015, we asked USPS what, if any, steps it had taken to address PRC’s recommendation. At that time, USPS had not yet taken steps to analyze changes in revenue at POStPlan post offices, though in January 2014—in response to a request from PRC—USPS submitted data to PRC on the fiscal year 2013 revenue earned in POStPlan post offices and in the Level 18 and above administrative post offices. USPS officials told us that they planned to conduct a revenue analysis annually, comparing fiscal year over fiscal year, and later provided us with a preliminary analysis of changes from fiscal years 2014 to 2015.
USPS’s preliminary POStPlan revenue analysis has limitations that may affect its representation of changes in revenue at POStPlan post offices and across USPS. This analysis showed that walk-in revenue declined by about 4 percent at POStPlan post offices, as well as at non-POStPlan offices, and at all offices in general. However, we found that USPS’s calculation of revenue in POStPlan post offices was inconsistent with its definition of what constitutes POStPlan post offices. Specifically, USPS included revenue from the Level 18 or above administrative offices, though USPS does not define these as POStPlan post offices. Additionally, according to USPS officials, those are the offices most likely to absorb customers who are looking for nearby alternatives in the face of reduced hours at their local office. USPS also excluded the Level 6 PTPOs from its analysis although it considers these to be POStPlan post offices. After we inquired about the Level 6 PTPOs, USPS provided us with a revised analysis but, in this revision, USPS included the Level 18 and above administrative offices as POStPlan post offices. When we re-sorted the offices in USPS’s analysis to exclude the Level 18 and above administrative offices from the “POStPlan post offices” category and include the Level 6 PTPOs in the “POStPlan post offices” category, we found that revenue declined by about 10 percent, not 4 percent, in POStPlan post offices and by about 4 percent in non-POStPlan post offices. To obtain a more comprehensive picture of how POStPlan may have affected revenue in the reduced-hour offices, we also analyzed the walk-in revenue earned at POStPlan post offices, by office level, for the most recent fiscal year (2015) compared to the most recent fiscal year in which no POStPlan implementation activities had begun to occur (2011).
We found that revenue at RMPOs in fiscal year 2015 was 29 percent lower than revenue, adjusted for inflation, in fiscal year 2011, with over a 50 percent decline in Level 2 RMPOs. See table 1. While our analysis shows that revenue at the POStPlan RMPOs declined by 29 percent, this revenue constituted a small portion of the total revenue from all of USPS’s post offices. In January and February of 2016, USPS conducted additional analysis comparing fiscal years 2011 and 2015 post office walk-in revenue. According to this analysis, revenue from RMPOs in fiscal year 2011 accounted for just 4.5 percent of approximately $11.9 billion in total revenue earned from post offices that year and, in fiscal year 2015, 3.7 percent of approximately $10.8 billion in total revenue. Additionally, USPS’s analysis showed that the Level 18 or above administrative offices experienced less of a decline in revenue than the RMPOs they remotely manage. Specifically, revenue at these offices in fiscal year 2011 was about $2.32 billion (adjusted for inflation) and, in fiscal year 2015, about $2.06 billion, a decline of about 11.2 percent. In its analysis, USPS also reported total revenue from all non-POStPlan offices. However, USPS’s reported total again included the Level 6 PTPOs in this category. Overall, revenue at all post offices declined by about 14.6 percent from fiscal years 2011 to 2015 when fiscal year 2011 revenue is adjusted for inflation. While both our and USPS’s analyses comparing fiscal year 2011 and 2015, and USPS’s analysis of changes from fiscal years 2014 to 2015 help to illustrate the potential effects of POStPlan on revenue, they do not fully measure it. In particular, analyzing the extent of revenue reductions that are independently due to POStPlan would require a more complex analysis that takes into account a variety of factors, and the USPS data available to us were not adequate to conduct such an analysis. 
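The year-over-year comparisons above reduce to percent changes computed on inflation-adjusted figures. In the sketch below, the administrative-office totals ($2.32 billion and $2.06 billion) are the constant-dollar figures reported here; the 1.059 inflation factor applied to the RMPO revenue share is an assumed CPI adjustment for fiscal years 2011 to 2015, not a number from this report.

```python
def real_pct_change(base_nominal, new_nominal, inflation_factor):
    """Percent change after restating the base-year figure in
    constant dollars via the given inflation factor."""
    base_real = base_nominal * inflation_factor
    return (new_nominal - base_real) / base_real * 100

# Level 18+ administrative offices, already in constant dollars ($ billions):
admin = real_pct_change(2.32, 2.06, inflation_factor=1.0)  # roughly -11.2 percent

# RMPO share of total post office revenue (4.5% of $11.9B in FY2011 vs.
# 3.7% of $10.8B in FY2015, per the report); the 1.059 factor is assumed:
rmpo = real_pct_change(0.045 * 11.9, 0.037 * 10.8, inflation_factor=1.059)
```

Under the assumed inflation factor, the RMPO calculation lands near the roughly 29 percent real decline described above, which illustrates how sensitive these comparisons are to the deflator chosen.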
For example, in addition to considering changes in revenue at POStPlan post offices by level, other factors also need to be considered, such as revenue changes in non-POStPlan offices and other retail channels within a reasonable distance of POStPlan offices, as well as at offices and channels not near POStPlan offices. Such an analysis would also need to consider other factors that may influence retail revenue over time. These factors could include, for example, the state of the general economy, the adoption of technology substitutes to traditional mail (such as e-mail, e-retail, and electronic bill payments), and relevant demographic characteristics that might affect mail volume, such as population density and household income. Such an analysis would also need to consider the movement of customer traffic to alternate ways of accessing postal services. For instance, in fiscal year 2015, about 46 percent of USPS’s total retail revenue of about $19 billion was generated through these alternate access channels, which include usps.com, self-service kiosks, and third-party retail partners. In the case of POStPlan, USPS officials explained that since revenue from POStPlan post offices accounts for a small portion of total post office revenue and cost reductions due to POStPlan were expected to be much larger, cost savings due to POStPlan would likely outweigh lost revenue. However, analyzing the extent of revenue reductions that are independently due to POStPlan through a more complex analysis could be helpful in evaluating the overall impact of POStPlan if USPS expanded the initiative to additional post offices, as may occur due to the workload re-evaluations that USPS plans to conduct.
Overall, USPS officials have acknowledged that their original POStPlan cost-savings estimate was not sophisticated—characterizing it as a rough estimate that used a “quick and dirty” approach—and have also acknowledged the limitations of their estimate of the arbitration decision's impact on cost savings. Prior to making any changes (like POStPlan) in the nature of postal services that are at least substantially nationwide in scope, USPS must request an advisory opinion from PRC on the change. USPS officials explained that this process entails a review of the proposed initiative by PRC and that when making their case before PRC, USPS’s legal counsel makes recommendations on strategy for the proceeding in consultation with other USPS staff. They further noted that in order to make an informed business decision prior to undertaking an initiative such as POStPlan, USPS undertakes reasonable efforts to appropriately assess the expected cost savings to determine whether the initiative is worth pursuing. The officials added that the nature and extent of this assessment varies by the specific circumstances, particularly, the financial circumstances facing USPS, the need for expedited implementation of an initiative, and USPS’s overall confidence that an initiative will prudently reduce costs. USPS officials stated that in cases such as POStPlan, there is no strict guidance or thresholds that govern when cost-savings estimates should be rigorous versus when it is sufficient to use a less rigorous approach to gain a rough approximation, and there is no legal requirement to produce cost-savings estimates or to use a particular methodology. Instead, USPS officials said these are judgmental decisions. Regarding USPS’s calculations of actual savings achieved, USPS officials have also acknowledged the limitations of the underlying salaries and benefits data. 
For example, USPS officials acknowledged that the errors we found in these data would result in some offices’ salaries and benefits being understated, and others being overstated. In February 2016, USPS officials told us that they were not previously aware of this issue and that they have begun to take steps to further understand the scope of the errors and how and why they occurred. As of March 14, 2016, USPS officials were continuing to assess this issue, but USPS’s time frame for identifying the scope and resolving the issue remains unclear, and it is also unclear if USPS subsequently intends to update its calculations of actual savings achieved. Regarding its analysis of changes in revenue from fiscal year 2014 to 2015, after reviewing our analysis of revenue at POStPlan post offices, USPS has also acknowledged that some PTPOs should have been included in its analysis and provided details on why it included these offices and the Level 18 and above administrative offices in the categories that it did. In particular, USPS officials told us that they agreed that some of the PTPOs should have been included in their analysis as POStPlan post offices and explained that they had included these offices in their analysis as non-POStPlan offices because this type of office existed prior to POStPlan. They also noted that they included the Level 18 and above administrative offices as POStPlan post offices because, as noted above, those would be the offices most likely to absorb customers who are looking for nearby alternatives in the face of reduced hours at their local post office. USPS officials also said that it is important to note that revenue declines at POStPlan post offices may not be fully lost to USPS because customers may use other nearby retail channels (e.g., the Level 18 or above offices, usps.com, etc.) instead. 
While we agree that, ultimately, the revenue lost to USPS as a whole is what matters most, it is still important to accurately represent the changes in revenue at the reduced-hour offices to fully understand the effects of POStPlan on these offices and the trade-offs necessary between costs and benefits, and to provide relevant information for program evaluation and future decision making. We have long reported that USPS needs to restructure its operations to better reflect customers’ changing use of the mail and to align its costs with revenues. Toward this end, USPS has proposed or started a number of initiatives, such as POStPlan, to increase efficiency and reduce costs as it seeks to improve its financial viability. Having reliable data and quality methods for calculating the potential savings USPS expects to achieve through these initiatives, the actual savings they achieve, and the potential effects they have on revenue are critical. Such rigor can help ensure that USPS officials and oversight bodies, such as PRC and Congress, have accurate and relevant information to help USPS strike the right balance between the costs and benefits of the various initiatives. Although POStPlan was an initiative that affected about 66 percent of USPS’s post offices and postmasters, USPS did not produce cost-savings estimates with the level of rigor that an initiative with such a large footprint may have warranted. Having reliable estimates of expected cost savings when initially making decisions could help ensure that USPS is achieving its goals, yet USPS’s estimates of expected savings had limitations. For example, by not conducting a sensitivity analysis, as recommended by our cost-estimating guidance, USPS may have missed an opportunity to test how vulnerable its expected cost savings were to program changes. 
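A sensitivity analysis of the kind described here can be sketched minimally: recompute the savings estimate under alternative staffing-cost assumptions and compare. All rates and hour totals below are hypothetical illustrations, not USPS's actual figures.

```python
def annual_savings(hours_cut, old_rate, new_rate, hours_kept):
    """Estimated annual labor savings: eliminated hours valued at the old
    (postmaster) rate, plus the rate differential on the remaining hours."""
    return hours_cut * old_rate + hours_kept * (old_rate - new_rate)

# Baseline assumption: remaining hours staffed by low-cost non-career labor.
baseline = annual_savings(hours_cut=10e6, old_rate=30.0, new_rate=12.0, hours_kept=15e6)

# Alternative scenario: higher-cost bargaining-unit staff, as the
# arbitration decision later required for many offices.
alternative = annual_savings(hours_cut=10e6, old_rate=30.0, new_rate=22.0, hours_kept=15e6)

# The gap between scenarios shows how vulnerable the estimate is to the
# staffing assumption, before the program is announced.
print(baseline - alternative)
```

Testing each major assumption this way, before announcing a program, is what the cost-estimating guidance recommends.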
For instance, USPS may have been able to test how its expected savings would change should any of its assumptions change, as some later did because of the arbitration decision, which affected staffing arrangements at the majority of POStPlan post offices. If USPS had noticed significant differences in its projected labor costs and savings through a sensitivity analysis, it might have taken steps to address these vulnerabilities prior to announcing POStPlan. USPS believes that, given likely savings and the realities of postal operations, moving forward with POStPlan was the correct operational decision. However, for future initiatives like POStPlan, having guidance that clarifies when USPS should develop cost-savings estimates using a rigorous approach could help ensure that USPS produces estimates that thoroughly consider the scope of a program’s implications, effects, and alternatives. Such an approach is particularly relevant given that USPS has projected unsustainable losses through fiscal year 2020 and beyond, may continue to develop efficiency and cost-savings initiatives, and will need quality information on the potential savings and effects associated with these initiatives. Further, according to USPS as of October 2015, it has saved $306 million in labor costs from fiscal year 2012 to June 2015 as a result of POStPlan. While we recognize that POStPlan most likely resulted in some cost savings, the accuracy of USPS’s calculation of savings may be limited by errors we found in USPS’s salaries and benefits data, and thus, it is unclear whether USPS may have actually saved more or less. USPS’s time frames for assessing and resolving this issue—and whether it intends to, subsequently, update its calculations of actual savings achieved—are also unclear. Finally, in its estimates of expected savings, USPS did not initially consider the effect that reduced retail hours may have on revenue and thus did not calculate an estimate of net cost savings. 
This means USPS had an incomplete picture of the effects of POStPlan. Even the preliminary analysis of changes in revenue that USPS later conducted was limited because it was not consistent with USPS’s definition of what constitutes POStPlan post offices. Improving the quality of future POStPlan revenue analyses, especially as the program potentially expands to additional offices, could help USPS better understand the implications of POStPlan and inform future decision-making as USPS conducts workload re-evaluations of post offices.

The Postmaster General should direct executive leaders to:

- establish guidance that clarifies when USPS should develop cost-savings estimates using a rigorous approach that includes, for example, a sensitivity analysis and consideration of other factors that could affect net costs and savings, versus when it is sufficient to develop a rough estimate;
- continue to take steps to assess and resolve the salaries and benefits data errors and, subsequently, update calculations of actual cost savings achieved due to POStPlan as appropriate; and
- verify that calculations of changes in revenue at POStPlan post offices in USPS’s revenue analyses are consistent with USPS’s definition of POStPlan post offices and take steps to consider when it may be appropriate to develop an approach for these analyses that will allow USPS to more fully consider the effects of POStPlan on retail revenue across USPS.

We provided a draft of this report to PRC and USPS for their review and comment. PRC provided comments in an e-mail and stated that it found the report accurately reflects PRC’s advisory opinion and actions regarding POStPlan. USPS provided a written response, which is reproduced in appendix II of this report. 
In the written response, USPS disagreed with the overall tone and title of our report, provided observations on our recommendations but did not state whether it agreed or disagreed with them, and disagreed with some of the specific examples we use in our report. Regarding the tone and title of our report, in its response USPS reported that it does not see a basis for any conclusion other than that, with POStPlan, it is saving substantial amounts from the reduction in work hours and the use of lower cost labor. It further stated that POStPlan was a reasonable initiative in light of declining mail transactions and the need to right-size its infrastructure to support the retail needs of the country. Finally, USPS said that it believes POStPlan was and remains a prudent business decision. Our report does not comment directly on the reasonableness of the POStPlan initiative or whether it was a prudent business decision, but we note in our report that USPS believed POStPlan was a proper operational decision for USPS and its stakeholders. Instead, our report focuses on USPS’s estimates of savings due to POStPlan. We do not disagree that POStPlan most likely resulted in some savings due to reduced work hours and have clarified our report to state such. However, as we mention in the report, USPS’s calculations of the actual savings achieved may be limited by errors in USPS’s salaries and benefits data, and thus, USPS may have understated or overstated the amount it has saved. We also revised the title of the report in response to USPS’s concern. 
Regarding our first recommendation that USPS establish guidance that clarifies when USPS should develop cost-savings estimates using a rigorous approach versus when it is sufficient to develop a rough estimate, USPS said that it performed the level of analysis necessary to support the decision to move forward with POStPlan and that there is not a concrete set of business rules that determine the level of analysis that should be conducted. Instead, USPS noted that its management intends to be guided by a variety of factors, on a case-by-case basis. These factors include: (1) the cost associated with the development of rigorous financial information, (2) whether savings are the sole factor motivating the decision, and (3) the amount of time that must be committed to performing detailed analysis, among other things. USPS added that decisions based on more complex operational changes and risk may require more detailed analysis. While we appreciate that there is value to considering the types of analyses to perform on a case-by-case basis, the factors that USPS lists in its written response are precisely the type of factors that could be included (or expanded upon) in guidance that clarifies how to make those case-by-case decisions. Additionally, as we note in our report, we believe such guidance will be helpful to USPS and its oversight bodies as it considers future initiatives. As such, we continue to believe our recommendation is appropriate. Regarding our second recommendation that USPS continue to assess and resolve errors in its salaries and benefits data and, as appropriate, update its calculations of actual savings achieved due to POStPlan, USPS said that it did not rely on this type of data in its original estimate of expected cost savings. We recognize that USPS did not rely on these data in that estimate. 
Instead, our report mentions that such data affected USPS’s post-arbitration decision estimate of expected savings and were used to calculate actual savings achieved thus far. Regarding the latter, USPS noted in its written response that due to system limitations, it cannot change past, existing data, but that it will continue to identify and rectify the causes of the data anomalies. USPS also noted that as more detailed information may be necessary in the future, it is reviewing possible future system or process improvement opportunities. These are positive steps to ensure that USPS is addressing these data issues and reviewing opportunities for future improvements. Regarding our third recommendation that USPS (1) verify that calculations of changes in revenue at POStPlan post offices in its revenue analyses are consistent with USPS’s definition of POStPlan post offices and (2) take steps to consider when it may be appropriate to develop an approach that more fully considers the effects of POStPlan on revenue across USPS, USPS did not directly address either part of this recommendation. Instead, USPS provided information on revenue at POStPlan post offices in 2011 and 2015 (such as the portion of total walk-in revenue these offices constituted), much of which is included in our report. USPS also reiterated that it expected revenue would shift from POStPlan post offices to the Level 18 and above offices that remotely manage the POStPlan offices, and noted that USPS’s revenue analysis supports that assumption. The intent of our recommendation is not to disagree with this assumption. Rather, it is to help ensure that USPS and its oversight bodies have quality information on the changes in revenue at POStPlan post offices in order to fully understand the effects of POStPlan. 
Key to having such information is ensuring that the calculations of changes in revenue are consistent with USPS’s definition of what constitutes a “POStPlan post office.” As such, we continue to believe that verifying the accuracy of its calculations is important. Additionally, our report acknowledges the small portion of total walk-in revenue that POStPlan post offices constitute, and notes that a more complex analysis could be helpful if USPS expanded the initiative to additional offices, as may occur due to the workload re-evaluations that USPS plans to conduct. We therefore continue to believe that USPS should take steps to consider at what point such an analysis may be warranted. Finally, USPS disagreed with some of the specific examples we use in our report. In particular, USPS disagreed with an example showing that its original cost-savings estimate was incomplete due to the omission of costs associated with separation incentives offered to postmasters, noting that “annualized savings” estimates are generally not reduced by such start-up costs. We do not disagree that annualized savings are one way to measure cost savings. However, as we note in our report, OMB cost-estimating guidance states that agencies should also take into account the costs incurred to implement an activity, suggesting that it is the net cost savings that should be considered. As such, a fully complete cost-savings estimate would consider such start-up costs. Similarly, USPS disagreed with another example showing that the saved salary USPS authorized to postmasters contributed to the incompleteness of its original estimate, noting that these salary payments were not planned at the inception of the program. We have updated our report to reflect that these payments were not planned. 
Finally, USPS disagreed with statements showing that the change made to staffing in Level 18 post offices as a result of the POStPlan arbitration decision is tied to POStPlan, noting that this change was related to a separate grievance and that this separate grievance was specifically identified in a footnote in the POStPlan arbitration decision. We do not disagree with the idea that this change was a resolution of a separate grievance and that the footnote USPS refers to cites this separate grievance. However, we disagree that the change was not at all tied to POStPlan. The connection to POStPlan is clear in the arbitration decision’s wording. Specifically, in the arbitration decision, the arbitrator ruled that Level 4 RMPOs should be staffed by PSEs. When stating its ruling regarding the staffing change in Level 18 offices, the arbitration decision clearly states, “In view of the increased use of PSEs in Level 4 RMPOs …. I further order that all Level 18 post offices that are currently staffed by PSEs with the designation code 81-8 will now be staffed with a career employee.” Therefore, it is clear that changes in staffing at Level 4 RMPOs (which were part of POStPlan) also affected the resolution of this separate dispute. We are sending copies of this report to the appropriate congressional committees, the Postmaster General, the Acting Chairman of PRC, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) the actions the U.S. 
Postal Service (USPS) took to implement the Post Office Structure Plan (POStPlan) before the September 2014 arbitration decision and the savings USPS estimated POStPlan would achieve, (2) the effect USPS determined the arbitration decision had on POStPlan staffing and cost savings, and (3) whether USPS’s POStPlan cost-savings estimates are reliable and any limitations of the estimates. To describe the POStPlan initiative, determine the actions USPS took to implement it before the September 2014 arbitration decision and identify the effects USPS determined the decision had on POStPlan staffing, we reviewed relevant laws, regulations, documentation and data, and conducted interviews. Specifically, we reviewed USPS guidance, policies, procedures, and other documents related to POStPlan planning and implementation, such as fact sheets, employee notification letters, and information submitted during the Postal Regulatory Commission’s (PRC) 2012 POStPlan proceeding. We reviewed USPS’s 2014 and 2015 annual reports to Congress and 2013 Five-Year Business Plan. We also reviewed documentation related to the arbitration in particular, such as the arbitration decision, subsequent memorandums of understanding between USPS and the American Postal Workers Union (APWU) that further implemented the decision, and the 2010-2015 collective bargaining agreement between USPS and APWU. We obtained written responses and data from USPS officials on the arbitration decision and POStPlan implementation from 2012 to 2015, such as data on the number of post offices where USPS reduced hours from 2012 to 2015 and postmasters affected by POStPlan. We assessed the reliability of these data by comparing them to other information obtained from USPS and asking USPS questions about data sources, quality, and timeliness. We found these data reliable for the purpose of describing the progress and status of POStPlan before and after the arbitration decision. 
We also reviewed prior GAO reports and documentation from USPS stakeholders, including PRC and USPS’s two postmaster associations—the National Association of Postmasters of the United States (NAPUS) and the National League of Postmasters of the United States (NLPM). For example, we reviewed PRC’s advisory opinion on POStPlan and the transcript of PRC’s POStPlan hearing, which it held on July 11, 2012. We selected NAPUS and NLPM due to their role as management associations that USPS must consult with and because they represent POStPlan-affected postmasters. We selected PRC due to its oversight role over USPS. We interviewed USPS officials and NAPUS, NLPM, and PRC officials to obtain additional information, views, and context on POStPlan. We also contacted APWU, but APWU officials did not accept our invitation for a meeting. To determine the cost savings USPS originally estimated it would achieve through POStPlan, the effect it estimated the arbitration decision had on savings, and the reliability and limitations of these estimates, we reviewed USPS’s POStPlan cost-savings estimates and compared the estimates to relevant criteria. Specifically, we reviewed USPS’s 2012 estimate of the savings it expected to achieve through POStPlan and its 2015 estimate of the arbitration decision’s impact on expected cost savings. We obtained USPS documentation and written responses related to POStPlan cost savings, interviewed USPS officials, and obtained documentation and interviewed officials from NAPUS, NLPM, and PRC to determine how USPS developed its estimates, the assumptions it used, the potential sources of uncertainty, the types of inputs included and omitted, and these stakeholders’ views. 
We then assessed the reliability and soundness of these estimates using guidance on assessing the reliability of data (which are defined as including estimates—such as estimates of cost savings—and projections), cost estimating guidance, and internal controls standards adopted by USPS to determine the extent to which the estimates comported with these criteria. We reviewed these standards and guidance and then selected those practices that, in our professional judgment, were most applicable given that POStPlan is an efficiency and cost-savings initiative and given USPS’s financial condition. In particular, we assessed the estimates’ accuracy, validity, completeness, and consistency; any use of sensitivity analyses; and consideration of net cost-savings factors. We discuss the limitations of the estimates in this report. We also obtained USPS data on actual cost savings achieved from fiscal year 2012 to June 2015 (the most recent data available at the time of our review) due to POStPlan, and hourly pay rates in POStPlan post offices under the pre- and post-arbitration decision POStPlan staffing arrangements. We assessed the reliability of these data by comparing them to other information obtained from USPS and asking USPS officials questions about data sources, quality, and timeliness, and, for the actual savings data, reviewing how consistently USPS’s data files followed the methodology USPS officials described to us. Regarding the actual savings data, we found that USPS’s data files when USPS first began tracking savings did not always follow the methodology USPS described to us. While USPS officials did not provide explanations for these inconsistencies, USPS updated its methodology for tracking POStPlan cost savings beginning in fiscal year 2015. However, we also found errors in the salaries and benefits data USPS used to calculate actual savings achieved; we discuss the limitations in this report. 
Regarding the hourly pay-rate data, we found these data reliable for the purpose of describing hourly pay rates in POStPlan post offices according to USPS. It was beyond the scope of our review to assess whether POStPlan was a prudent business decision. Finally, to better understand the potential effects of POStPlan and the arbitration decision, we analyzed (1) salaries and benefits paid, and (2) the walk-in revenue earned at POStPlan post offices, by post office level, for periods before and after POStPlan implementation. We used data provided by USPS, as follows:

Salaries and benefits data: USPS provided us data on the salaries and benefits it paid to POStPlan employees in POStPlan post offices in the third quarter of fiscal year 2011 (i.e., April, May, and June 2011). According to USPS officials, these data represented all salaries and benefits paid to all relevant employees during that period. USPS provided us the same information for the third quarter of fiscal year 2015. To make the fiscal year 2011 data comparable to the fiscal year 2015 data, we adjusted the fiscal year 2011 salaries and benefits using adjustment factors provided by USPS officials.

Revenue data: USPS provided us data on the revenue in POStPlan post offices in fiscal years 2011 and 2015. We adjusted fiscal year 2011 dollars using the Gross Domestic Product deflator so that they would be stated in 2015 dollars.

Office level classification data: USPS provided us data on the level at which each POStPlan post office was classified as of October 2015 (i.e., whether it is a Level 2, 4, or 6 remotely managed post office (RMPO) or part-time post office (PTPO)). Although USPS officials stated that the data provided included all POStPlan post offices, we found that they did not always include information for the same set of offices, and when providing these data, USPS officials did not provide explanations for why the number of POStPlan post offices differed. 
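The deflator adjustment described above can be sketched as follows; the index values are illustrative stand-ins, not the actual Gross Domestic Product deflator series.

```python
def to_2015_dollars(amount_2011, deflator_2011, deflator_2015):
    """Restate a fiscal year 2011 amount in 2015 dollars using the
    ratio of price-index (deflator) values for the two years."""
    return amount_2011 * (deflator_2015 / deflator_2011)

# Illustrative index values implying about 6 percent cumulative inflation:
# $1.0 million of FY2011 revenue becomes about $1.06 million in 2015 dollars.
print(to_2015_dollars(1_000_000.0, 100.0, 106.0))
```

Restating both years in the same dollars in this way is what makes the fiscal year 2011 and 2015 revenue figures comparable.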
As such, regarding our revenue analysis, we excluded offices as necessary to obtain as complete a set of information as possible for as many offices as possible given what USPS provided. Specifically, of those offices for which we had level information, we excluded those for which we did not have revenue data for both periods. In particular, USPS’s data did not include complete information on revenue in both periods at the majority of the about 400 Level 6 PTPOs. Thus, we excluded the Level 6 PTPOs from our results. We also excluded one Level 6 RMPO for this reason. Additionally, we excluded four offices that had multiple level classifications. Of those four, three were classified as both Levels 4 and 18, and one was classified as both Levels 6 and 18. Despite these exclusions, we found these data reliable for the purpose of describing changes in revenue at POStPlan post offices. Regarding our salaries and benefits analysis, we found that these data were not reliable due to errors in how USPS recorded the hours its employees worked.

In addition to the individual named above, key contributors to this report were Derrick Collins (Assistant Director), Amy Abramowitz, Lilia Chaidez, William Colwell, Marcia Fernandez, SaraAnn Moessbauer, Nalylee Padilla, Malika Rice, Michelle Weathers, and Crystal Wesco.
USPS continues to experience a financial crisis and has undertaken many initiatives to reduce costs. In May 2012, USPS announced POStPlan, which aimed to reduce retail hours at post offices and use less costly labor. However, an arbitrator ruled in September 2014 that USPS must reverse several of these staffing changes. GAO was asked to review the arbitration decision's effects on POStPlan staffing and cost savings. GAO examined: (1) USPS's actions to implement POStPlan before the decision and expected savings, (2) the decision's effects on POStPlan's staffing and savings, and (3) whether USPS's POStPlan cost-savings estimates are reliable. GAO reviewed relevant POStPlan documentation and data; compared USPS's POStPlan cost-savings estimating process to GAO's data reliability and cost-estimating guidance and internal control standards adopted by USPS; and interviewed officials from USPS, its regulatory body, and postmaster associations.

The U.S. Postal Service (USPS) had largely completed the Post Office Structure Plan's (POStPlan) implementation prior to a 2014 POStPlan arbitration decision and expected millions in cost savings. Specifically, under POStPlan, USPS planned to reduce hours at about 13,000 post offices (from 8 to 2, 4, or 6 hours of retail service a day) and to staff them with employees less costly than postmasters. Prior to the arbitration decision, USPS had reduced hours at most of these offices and taken steps to make the staffing changes. For example, it replaced many career postmasters with non-career or part-time employees by offering separation incentives or reassignments. In July 2012, USPS estimated POStPlan would result in about $500 million in annual cost savings. USPS determined that, while the 2014 arbitration decision significantly affected planned staffing at POStPlan post offices and estimated savings, POStPlan was the correct operational decision for USPS and its stakeholders. 
The arbitrator ruled that many offices be staffed by bargaining-unit employees, such as clerks, rather than the generally less costly employees USPS had planned to use. As a result, USPS estimated in June 2015 that POStPlan would now result in annual savings of about $337 million, or 35 percent less than the about $500 million it expected. USPS's original and post-arbitration decision estimates of expected POStPlan cost savings have limitations that affect their reliability. USPS officials noted that they do not have strict guidance on when a rough savings estimate is adequate versus when a more rigorous analysis is appropriate. Specific limitations include: imprecise and incomplete labor costs, including errors in underlying data; the lack of a sensitivity analysis; and the exclusion of other factors that affect net cost savings, particularly the potential impact of reduced retail hours on revenue. For example, USPS's post-arbitration-decision estimate relies, in part, on its calculations of actual savings achieved due to POStPlan. While POStPlan most likely resulted in some savings, GAO found errors in the underlying salaries and benefits data used that may understate or overstate the amount of savings achieved. Additionally, while USPS later (i.e., after it developed its savings estimates) conducted analyses of changes in revenue, GAO found these analyses were limited because USPS's calculations of changes in revenue at POStPlan and non-POStPlan post offices were inconsistent with its definition of what constitutes a POStPlan office. As of March 2016, USPS was taking steps to understand the scope and origin of the errors in its salaries and benefits data, but its time frame for resolving the issue remains unclear, as does whether USPS subsequently intends to update its calculations of actual savings achieved. Internal control standards state that program managers and decision makers need quality data and information to determine whether they are meeting their goals. 
Without reliable data and quality methods for calculating the potential savings USPS expects to achieve through its initiatives, the actual savings they achieve, and the effects on revenue, USPS officials and oversight bodies may lack accurate and relevant information with which to make informed decisions regarding future cost-saving efforts in a time of constrained resources. To ensure that USPS has quality information regarding POStPlan, GAO recommends that USPS establish guidance that clarifies when to develop savings estimates using a rigorous approach; resolve errors in labor data and, as appropriate, recalculate actual savings achieved; and take steps to improve revenue analyses. USPS disagreed with some of GAO's findings but neither agreed nor disagreed with the recommendations. GAO continues to believe its recommendations are valid as discussed further in this report.
While the majority of businesses pay the taxes withheld from employees’ salaries as well as the employer’s matching amounts, a significant number of businesses do not. Our review of IRS tax records showed that over 1.6 million businesses owed over $58 billion in unpaid payroll taxes to the federal government as of September 30, 2007, and over 100,000 businesses currently owe payroll taxes for more than 2 years (8 quarters). This total includes amounts earned by employees that were withheld from their salaries to satisfy their tax obligations, as well as the employer’s matching amounts, but which the business diverted for other purposes. Many of these businesses repeatedly failed to remit amounts withheld from employees’ salaries. For example, 70 percent of all unpaid payroll taxes are owed by businesses with more than a year (4 tax quarters) of unpaid payroll taxes, and over a quarter of unpaid payroll taxes are owed by businesses that have tax debt for more than 3 years (12 tax quarters). Figure 1 shows the total dollar amount of payroll tax debt summarized by the number of unpaid payroll tax quarters outstanding. Using IRS’s database of unpaid taxes, we were able to identify many of the industry types associated with businesses owing payroll taxes. The top industries with unpaid payroll tax debt included construction ($8.6 billion), professional services ($4.4 billion), and healthcare ($4 billion). When businesses fail to remit taxes withheld from employees’ salaries, the payroll tax receipts are less than the payroll taxes due, and the Social Security and Hospital Insurance Trust Funds have fewer financial resources available to cover current and future benefit payments. However, the trust funds are funded based on wage estimates and not actual payroll tax collections. 
Therefore, the General Fund transfers to the trust funds amounts that should be collected but are not necessarily collected, resulting in the General Fund subsidizing the trust funds for amounts IRS is unable to collect in payroll taxes from employers. As of November 1, 2007, IRS estimated that the amount of unpaid taxes and interest attributable to Social Security and Hospital Insurance taxes in IRS’s $282 billion unpaid assessments balance was approximately $44 billion. This estimate represents a snapshot of the amount that needed to be provided to the Social Security and Hospital Insurance Trust Funds based on the outstanding payroll tax debt on IRS’s books at the time. It does not include an estimate for tax debts that have been written off of IRS’s tax records in previous years because of the expiration of the statutory collection period. Recent IRS data indicate that the cumulative shortfall increases by an additional $2 billion to $4 billion annually because of uncollected payroll taxes. Although IRS has taken a number of steps to improve collections by prioritizing cases with better potential for collectibility, the collection of payroll taxes remains a significant problem for IRS. From 1998, when we performed our last in-depth review of payroll taxes, to September 2007, we found that while the number of businesses with payroll tax debt decreased from 1.8 million to 1.6 million, the balance of outstanding payroll taxes in IRS’s inventory of tax debt increased from about $49 billion to $58 billion. Our analysis of the unpaid payroll tax inventory shows that the number of businesses with more than 20 quarters of tax debt (5 years of unpaid payroll tax debt) almost doubled between 1998 and 2007. The number of businesses that had not paid payroll taxes for over 40 quarters (10 years or more) also almost doubled, from 86 businesses to 169 businesses. These figures are shown in table 1. 
Of the $58 billion in unpaid payroll taxes as of September 30, 2007, IRS categorized about $4 billion (7 percent) as going through IRS’s initial notification process. Because IRS has made the collection of payroll taxes one of its highest priorities, once a case completes the notification process, it is generally sent to IRS’s field collections staff for face-to-face collection action. However, IRS does not have sufficient resources to immediately begin collection actions against all of its high-priority cases. As a result, IRS holds a large number of cases in a queue awaiting assignment to a revenue officer in the field. About $7 billion (12 percent) of the unpaid payroll tax amount was being worked on by IRS revenue officers for collection, and about $9 billion (16 percent) was in a queue awaiting assignment for collection action. Most of the unpaid payroll tax inventory—$30 billion (52 percent)—was classified by IRS as currently not collectible. IRS classifies tax debt cases as currently not collectible for several reasons, including (1) the business owing the taxes is defunct, (2) the business is insolvent after bankruptcy, or (3) the business is experiencing financial hardship. Of those unpaid payroll tax cases IRS has classified as currently not collectible, almost 70 percent were as a result of a business being defunct. Much of the unpaid payroll tax debt has been outstanding for several years. As reflected in figure 2, our analysis of IRS records shows that over 60 percent of the unpaid payroll taxes was owed for tax periods from 2002 and prior years. Prompt collection action is vital because, as our previous work has shown, as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. Further, the continued accrual of interest and penalties on the outstanding federal taxes can, over time, eclipse the original tax obligation. 
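The point that accrued interest and penalties can eventually eclipse the original tax obligation can be illustrated with a simplified accrual model. The rates below are assumptions chosen for illustration only: a 0.5-percent-per-month failure-to-pay penalty capped at 25 percent of the tax, and 8 percent annual interest compounded monthly. Actual IRS interest rates are set quarterly and compound daily, so this sketch understates or overstates any particular case.

```python
# Illustrative sketch (assumed rates): growth of an unpaid payroll tax balance
# as penalties and interest accrue over time.
def balance_after(tax, months,
                  penalty_rate=0.005,    # assumed: 0.5% of tax per month
                  penalty_cap=0.25,      # assumed: penalty capped at 25% of tax
                  annual_interest=0.08): # assumed: 8% per year, monthly compounding
    """Unpaid balance after `months`, under the simplified assumptions above."""
    monthly_interest = annual_interest / 12
    penalty = 0.0
    interest = 0.0
    for m in range(1, months + 1):
        # Penalty accrues on the original tax until it hits the cap.
        penalty = min(penalty_rate * m, penalty_cap) * tax
        # Interest compounds on the tax, accumulated penalty, and prior interest.
        interest += (tax + penalty + interest) * monthly_interest
    return tax + penalty + interest

original = 100_000.0
for years in (1, 5, 10):
    print(f"after {years:2d} years: ${balance_after(original, years * 12):,.0f}")
```

Even under these modest assumed rates, a debt left outstanding for a decade more than doubles, which is consistent with the report’s observation that aged balances can compound beyond a business’s ability to pay.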
Additionally, as discussed previously, IRS is statutorily limited in the length of time it has to collect unpaid taxes—generally 10 years from the date the tax debt is assessed. Once that statutory period expires, IRS can no longer attempt to collect the tax. IRS records indicate that over $4 billion of unpaid payroll taxes will expire in each of the next several years because of the expiration of the statutory collection period. Our audit of payroll tax cases identified several issues that adversely affect IRS’s ability to prevent the accumulation of unpaid payroll taxes and to collect these taxes. Foremost is that IRS’s approach focuses on getting businesses—even those with dozens of quarters of payroll tax debt—to voluntarily comply. We found that IRS often either did not use certain collection tools, such as liens or TFRPs, or did not use them timely, and that IRS’s approach does not treat the business’s unpaid payroll taxes and responsible party’s penalty assessments as a single collection effort. Additionally, although the collection of unpaid payroll taxes is one of its top priorities, IRS did not have performance measures to evaluate the collection of unpaid payroll taxes or the related TFRP assessments. Finally, we found that some state revenue agencies are using tools to collect or prevent the further accumulation of unpaid taxes that IRS is either legally precluded from using or that it has not yet developed. We have previously reported that IRS subordinates the use of some of its collection tools in order to seek voluntary compliance and that IRS’s repeated attempts to gain voluntary compliance often result in minimal or no actual collections. Our audit of businesses with payroll tax debt and our analysis of businesses with multiple quarters of unpaid payroll taxes again found revenue officers continuing to work with a business to gain voluntary compliance while the business continued to accumulate unpaid payroll taxes. 
For example, our analysis of IRS’s inventory of unpaid payroll taxes found that over 10,000 businesses owed payroll taxes for 20 or more quarters—5 years or more. Failing to take more aggressive collection actions against businesses that repeatedly fail to remit payroll taxes has an impact that extends beyond just a single business. If left to accumulate unpaid payroll taxes, businesses can gain an unfair business advantage over their competitors at the expense of the government. As we have found previously, in at least one of our case study businesses, IRS determined that the non-compliant business obtained contracts through its ability to undercut competitors in part because of the business’s reduced costs associated with its non-payment of payroll taxes. Similarly, in another case the revenue officer noted that the business was underbidding on contracts and was using unpaid payroll taxes to offset the business’s losses. Failure to take prompt actions to prevent the further accumulation of unpaid payroll taxes can also have a detrimental impact on the business and the associated owners/officers. As we have reported in the past, non-compliant businesses can accumulate substantial unpaid taxes as well as associated interest and penalties. Over time, these unpaid balances may compound beyond the business’s ability to pay—ultimately placing the business and responsible officers in greater financial jeopardy. IRS is legally precluded from taking collection actions during certain periods, such as when a tax debtor is involved in bankruptcy proceedings. During those periods, even though IRS may not be able to take collection actions, tax debtors may continue to accumulate additional tax debt. However, IRS’s focus on voluntary compliance has negatively affected IRS’s collection efforts for years. Our current findings on IRS’s focus on voluntary compliance are similar to those of a study performed by the Treasury Inspector General for Tax Administration (TIGTA) 8 years ago. 
In that study, TIGTA found that revenue officers were focused on IRS’s customer service goals and therefore were reluctant to take enforcement actions. In another study performed 3 years ago, TIGTA reported that IRS allowed tax debtors to continue to delay taking action on their tax debt by failing to take aggressive collection actions. TIGTA found that IRS did not take timely follow-up action in half of the cases for which tax debtors missed specific deadlines. One official from a state taxing authority told us that the state benefited from IRS’s approach because it allowed the state to collect its unpaid taxes from business tax debtors before IRS. In one of our case study businesses, although IRS successfully levied some financial assets, a mortgage holder and state and local officials seized the business’s assets to satisfy the business’s debts. IRS has recently strengthened its procedures to include some specific steps for dealing with businesses that repeatedly fail to remit payroll taxes and to stress the importance of preventing the further accumulation of such payroll taxes. We found that for payroll tax debt, one of IRS’s highest collection priorities, IRS does not always file tax liens to protect the government’s interest in property, and when IRS does so, it does not always do so timely. Our analysis of IRS’s inventory of unpaid payroll taxes as of September 30, 2007, found that IRS had not filed liens on over one-third of all businesses with payroll tax debt cases assigned to the field for collection efforts—over 140,000 businesses. IRS guidance states that filing a lien is extremely important to protect the interests of the federal government, creditors, and taxpayers in general, and that the failure to file and properly record a federal tax lien may jeopardize the federal government’s priority right against other creditors. 
A 2005 IRS study of TFRP cases found that cases where a lien had been filed had higher average payments—about a third more—than cases where a lien had not been filed. Failure to file a lien can have a negative impact on tax collections. For example, IRS assessed the business owner in one of our case studies a TFRP to hold the owner personally liable for the withheld payroll taxes owed by the business. However, IRS did not assign the assessment to a revenue officer for collection and thus did not file a lien on the owner’s property. Because there was no lien filed, the owner was able to sell a vacation home in Florida, and IRS did not collect any of the unpaid taxes from the proceeds of the sale. As in the case above, IRS’s case assignment policy can delay the filing of liens for payroll tax cases. Because payroll tax cases are one of IRS’s top collection priorities, once the notification process is complete, IRS routes these cases to revenue officers for face-to-face collection action instead of to the Automated Collection System (ACS) for telephone contact. However, IRS generally places cases in a queue of cases awaiting assignment until a revenue officer is available to work the cases. Cases can be in the queue for extended periods of time awaiting assignment to a revenue officer. While a case is in the queue, no revenue officer is assigned to file liens or take other collection actions. Our analysis found that for all payroll tax cases in the queue awaiting assignment as of September 30, 2007, over 80 percent did not have a lien filed. As a result, lower priority tax cases that go through the ACS process may have liens filed faster than the higher priority payroll tax cases. IRS has a powerful tool to hold responsible owners and officers personally liable for unpaid payroll taxes through assessing a TFRP. 
However, we found that IRS often takes a long time to determine whether to hold the owners/officers of businesses personally liable and, once the decision is made, to actually assess penalties against them for the taxes. In reviewing a sample of TFRP assessments selected as part of our audit of IRS’s fiscal year 2007 financial statements, we found that from the time the tax debt was assessed against the business, IRS took over 2 years, on average, to assess a TFRP against the business owners/officers. We found that revenue officers, once assigned to a payroll tax case, took an average of over 40 weeks to decide whether to pursue a TFRP against business owners/officers and an additional 40 weeks on average to formally assess the TFRP. For 5 of the 76 sampled cases, we found that IRS took over 4 years to assess the TFRP. We did not attempt to identify how frequently IRS assesses a TFRP against responsible owners/officers. However, in TIGTA’s 2005 report on its review of IRS’s collection field function, it noted that revenue officers did not begin the TFRP process in over a quarter of the cases it reviewed. The timely assessment of TFRPs is an important tool in IRS’s ability to prevent the continued accumulation of unpaid payroll taxes and to collect these taxes. Once a TFRP is assessed, IRS can take action against both the owners/officers and the business to collect the withheld taxes. For egregious cases, such as some of those in our case studies, taking strong collection actions against the owners’ personal assets may be the best way to either get the business to become tax compliant or to convince the owners to close the non-compliant business, thus preventing the further accumulation of unpaid taxes. Failure to timely assess a TFRP can result in businesses continuing to accumulate unpaid payroll taxes and lost opportunities to collect these taxes from the owners/officers of the businesses. 
For example, one business we reviewed had tax debt from 2000, but IRS did not assess a TFRP against the business’s owner until the end of 2004. In the meantime, the owner was drawing an annual salary of about $300,000 and had sold property valued at over $800,000. Within 1 month of IRS’s assessing the TFRP, the owner closed the business, which by then had accumulated about $3 million in unpaid taxes. In September 2007, IRS implemented new requirements to address the timeliness of TFRP assessments. Under the new policy, IRS is now requiring revenue officers to make the determination on whether to pursue a TFRP within 120 days of the case’s being assigned and to complete the assessment within 120 days of the determination. However, the revised policy maintains a provision that allows the revenue officer to delay the TFRP determination. Additionally, the policy does not include a requirement for IRS to monitor the new standards for assessing TFRPs. IRS assigns a higher priority to collection efforts against the business with unpaid payroll taxes than against the business’s responsible owners/officers. Further, it treats the TFRP assessments as a separate collection effort unrelated to the business tax debt, even though the business payroll tax liabilities and the TFRP assessments are essentially the same tax debt. As a result, once the revenue officer assigned to the business payroll tax case decides to pursue a TFRP against the responsible owners/officers, the TFRP case does not automatically remain with this revenue officer. Accordingly, IRS often does not assign the TFRP assessment to a revenue officer for collection, and when it does, it may not assign it to the same revenue officer who is responsible for collecting unpaid taxes from the business. 
In reviewing the sample of TFRP assessments selected as part of our audit of IRS’s fiscal year 2007 financial statements, we found that half of the TFRP assessments had not been assigned to a revenue officer by the time of our audit. Of those that had been assigned, over half of the TFRP assessments had not been assigned to the same revenue officer who was working the related business case. Assigning the collection efforts against the business and the TFRP assessments to different revenue officers can result in the responsible owners/officers being able to continue to use the business to fund a personal lifestyle while not remitting payroll taxes. For example, in one of our case studies the owner was assessed a TFRP, but continued to draw a six-figure income while not remitting amounts withheld from the salaries of the business’s employees. For egregious cases, taking strong collection actions against the owner’s personal assets may be a more effective means of either getting the business to be compliant or convincing the owner to close the non-compliant business to prevent the further accumulation of unpaid payroll taxes. IRS collection officials stated that attempting to assign the same revenue officer both the TFRP assessments and the business payroll tax case for collection would overload the revenue officers with work and result in fewer high-priority payroll tax cases being worked. This view, however, stems from separating the collection efforts of the business and the individual and not considering the business’s unpaid payroll taxes and the TFRP assessment as a single case. In essence, the TFRP assessment is the same tax debt as the business’s payroll tax debt; the assessment is merely another means through which IRS can attempt to collect the monies withheld from a business’s employees for income, Social Security, and Hospital Insurance taxes that were not remitted to the government. 
This view that the payroll tax debt and the TFRP assessment are essentially the same tax debt is reinforced by IRS’s practice of crediting all related parties’ accounts whenever a collection is made against either assessment. Prior studies have found that IRS’s practice of assigning TFRP assessments a lower priority than business cases has not been very successful in collecting the unpaid taxes. In its own 2005 study of TFRP cases, IRS reported that it had assessed over $11.9 billion in TFRP assessments (including interest) between 1996 and 2004, yet had collected only 8 percent of those assessments. IRS policies have not resulted in effective steps being taken against egregious businesses to prevent the further accumulation of unpaid payroll taxes. Our audit found thousands of businesses that had accumulated more than a dozen tax quarters of unpaid payroll tax debt. IRS policies state that revenue officers must stop businesses from accumulating payroll tax debt and instruct revenue officers to use all appropriate remedies to bring the tax debtor into compliance and to immediately stop any further accumulation of unpaid taxes. IRS policies further state that if routine case actions have not stopped the continued accumulation of unpaid payroll taxes, revenue officers should consider seizing the business’s assets or pursuing a TFRP against the responsible parties. However, IRS successfully pursued fewer than 700 seizure actions in fiscal year 2007. We were unable to determine how many of those seizure actions were taken against payroll tax debtors. Regarding TFRPs, as discussed previously, IRS does not always assess the TFRPs timely, and IRS does not prioritize the TFRP assessment against the owner as highly as it does the unpaid payroll taxes of the business. This can result in little collection action being taken against the parties responsible for the failure to remit the withheld payroll taxes. 
When a business repeatedly fails to comply after attempts to collect, IRS policies state that the business should be considered an egregious offender and IRS should take aggressive collection actions, including threats of legal action that can culminate in court-ordered injunctions for the business to stop accumulating unpaid payroll taxes or face closure. However, IRS obtained fewer than 10 injunctions in fiscal year 2007 to stop businesses from accumulating additional payroll taxes. Revenue officers we spoke to believe the injunctive relief process is too cumbersome to use effectively in its present form. One revenue officer stated that because of the difficulty in carrying out the administrative and judicial process to close a business through injunctive relief, he had not attempted to take such action in over a decade. IRS is taking some action to attempt to address this issue by piloting a Streamline Injunctive Relief Team to identify cases and develop procedures to quickly move a case from administrative procedures to judicial actions. These procedures will be used for the most egregious taxpayers when the revenue officer can establish that additional administrative procedures would be futile. Similar to IRS, all of the state tax collection officials we contacted told us that their revenue department’s primary goal was to prevent businesses from continuing to flout tax laws and to stop them from accumulating additional tax debt. These officials said that after a business had been given a period of time to comply with its current tax obligations and begin paying past taxes, state tax collection officials changed their focus to one of “stopping the bleeding.” As such, some have made the policy decision to seek to close non-compliant businesses. 
To the extent IRS is not taking effective steps to deal with egregious payroll tax offenders that repeatedly fail to comply with the tax laws, businesses may continue to withhold taxes from employees’ salaries but divert the funds for other purposes. Although IRS has made the collection of unpaid payroll taxes one of its top priorities, IRS has not established goals or measures to assess its progress in collecting or preventing the accumulation of payroll tax debt. Performance measurement and monitoring, however, support resource allocation and other policy decisions to improve an agency’s operations and the effectiveness of its approach. Performance monitoring can also help an agency by measuring the level of activity (process), the number of actions taken (outputs), or the results of the actions taken (outcomes). Although IRS does have a broad array of operational management information available to it, we did not identify any specific performance measures associated with payroll taxes or TFRP assessments. While IRS has caseload and other workload reports for local managers (to measure process and outputs), these localized reports are not rolled up to a national level to allow IRS managers to monitor the effectiveness or efficiency of its collection and enforcement efforts. These operational reports do contain information about unpaid payroll and TFRP case assignments, but they are used primarily to monitor workload issues, not program effectiveness. For example, IRS has developed some reports that identify “over-aged” cases (those that have not been resolved within a certain length of time) and that identify businesses that continue to accrue additional payroll tax debt, but these reports are designed for workload management. 
To report on its outcomes or the effectiveness of its operations, IRS reports on overall collection statistics and presents that information in the Management Discussion and Analysis section of its annual financial statement and in its IRS Data Book. However, IRS does not specifically address unpaid payroll taxes as a part of this reporting. IRS officials stated that they do not have specific lower-level performance measures that target collection actions or collection results for unpaid payroll taxes or TFRP assessments. Such performance measures could be useful to serve as an early warning system to management or as a vehicle for improving IRS’s approach or actions. In our discussions with IRS revenue officers concerning some of the egregious payroll tax offenders included in our case studies, the officers noted that having certain additional tools available to them could allow them to more effectively deal with recalcitrant businesses. In discussions with a number of state tax collection officials, we found that several states had already developed and were effectively using the types of tools IRS revenue officers said would be beneficial to them. For example, while the Internal Revenue Code prohibits IRS from publicly disclosing federal tax information without taxpayer consent, an increasing number of states—at least 19, including New Jersey, Connecticut, Indiana, Louisiana, and California—are seeking to increase tax collections by publicizing the names of those with delinquent tax bills. In California, a recent law requires the state to annually publish the names of the top 250 personal and corporate state tax debtors with at least $100,000 in state tax debt. Public disclosure of tax debtors can be very effective. Just threatening to publish the names of tax offenders can bring some into compliance, while actually appearing on a tax offender list can bring about societal pressure to comply. 
In California, 26 tax debtors threatened with public disclosure stepped forward to settle their tax debts and thus avoided appearing on the list; in Connecticut, the state claims the public disclosure of tax debtors has resulted in over $100 million in collections from the first 4 years of the program. The potential public disclosure of tax debtors may also encourage greater tax compliance among the general population of taxpayers to avoid potentially being on the list. As another example, while IRS has the authority to levy a tax debtor’s income and assets when there is a demand for payment and there has been a refusal or an inability to pay by the taxpayer subject to the levy, IRS officials stated that they often have difficulty using levies to collect unpaid payroll taxes. They noted that the levy may be made against funds in a bank account at a certain point in time when little or no funds are available. They also noted, and in our case studies we found, that IRS sometimes has difficulty identifying which banks or financial institutions a tax debtor is using. This is the case because tax debtors will often change financial institutions to avoid IRS levies. However, several states use legal authorities to assist in identifying levy sources. States such as Kentucky, Maryland, Massachusetts, Indiana, and New Jersey have enacted legislation for matching programs or entered into agreements with financial institutions to participate in matching bank account information against state tax debts. This matching allows states to more easily identify potential levy sources and simplifies the financial institution’s obligations to respond to multiple levies. IRS is working with at least one state to investigate the potential for this type of matching; however, IRS collection officials told us that IRS has not sought legislation or agreements with financial institutions. 
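At its core, the bank-account matching some states have adopted is a join between the tax agency’s debtor list and account records supplied by a participating financial institution, surfacing potential levy sources. The following is a hypothetical sketch of that matching step; all identifiers and balances are invented for illustration and do not represent any state’s actual program.

```python
# Hypothetical sketch: matching a tax agency's debtor list against account
# records reported by a participating financial institution to identify
# potential levy sources. All data below are invented.
debtors = {
    "12-3456789": 250_000,  # taxpayer ID -> unpaid tax balance
    "98-7654321": 40_000,
}

bank_accounts = [
    {"taxpayer_id": "12-3456789", "account": "chk-001", "balance": 18_500},
    {"taxpayer_id": "55-5555555", "account": "sav-044", "balance": 9_000},
]

# A levy candidate is any reported account whose holder appears on the
# debtor list and that holds funds that could be levied.
levy_candidates = [
    acct for acct in bank_accounts
    if acct["taxpayer_id"] in debtors and acct["balance"] > 0
]

for acct in levy_candidates:
    owed = debtors[acct["taxpayer_id"]]
    print(f"levy candidate: {acct['account']} "
          f"(balance ${acct['balance']:,}, owed ${owed:,})")
```

The value of such a program lies less in the matching logic, which is simple, than in the legal authority or agreements that compel institutions to report the account data in the first place, which is what the states named above have enacted and IRS has not.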
Our analysis of unpaid payroll tax debt found substantial evidence of abusive and potentially criminal activity related to the federal tax system by businesses and their owners or officers. We identified tens of thousands of businesses that filed 10 or more tax returns acknowledging that the business owed payroll taxes, yet failed to remit those taxes to the government. While much of the tax debt may be owed by those with little ability to pay, some abuse the tax system, willfully diverting amounts withheld from their employees’ salaries to fund their business operations or their own personal lifestyle. In addition to owing payroll taxes for multiple tax periods and accumulating tax debt for years, many of the owners and officers of these businesses are repeat offenders. We identified owners who were involved in multiple businesses, all of which failed to remit payroll taxes as required. In total, IRS records indicate that over 1,500 owners/officers had been found by IRS to be responsible for non-payment of payroll taxes at 3 or more businesses and that 18 business owners/officers were found by IRS to be responsible for not paying the payroll taxes for over 12 separate businesses. It should be noted that these numbers represent only those responsible individuals who IRS found acted willfully in the non-payment of the businesses’ payroll taxes and who were assessed TFRPs—these figures do not represent the total number of repeat offenders with respect to non-payment of payroll taxes. Table 2 shows the number of individuals with TFRPs for two or more businesses. Our audits and investigations of 50 case study businesses with tax debt found substantial evidence of abuse and potential criminal activity related to the tax system. All of the case studies involved businesses that had withheld taxes from their employees’ paychecks and diverted the money to fund business operations or for personal gain. Table 3 shows the results of 12 of the case studies we performed. 
Businesses that withhold money from their employees’ salaries are required to hold those funds in trust for the federal government. Willful failure to remit these funds is a breach of that fiduciary responsibility and is a felony offense. A business’s repeated failure to remit payroll taxes to the government over long periods of time affects far more than the collection of the unpaid taxes. First, allowing businesses to continue to not remit payroll taxes affects the general public’s perception regarding the fairness of the tax system, a perception that may result in lower overall compliance. Second, because of businesses’ failure to remit payroll taxes, the burden of funding the nation’s commitments, including Social Security and Hospital Insurance Trust Fund payments, falls more heavily on taxpayers who willingly and fully pay their taxes. Third, the failure to remit payroll taxes can give the non-compliant business an unfair competitive advantage because that business can use those funds that should have been remitted for taxes to either lower overall business costs or increase profits. Businesses that fail to remit payroll taxes may also underbid tax-compliant businesses, causing them to lose business and encouraging them to also become non-compliant. Fourth, allowing businesses to continue accumulating unpaid payroll taxes has the effect of subsidizing their business operations, thus enriching tax abusers or prolonging the demise of a failing business. Fifth and last, in an era of growing federal deficits and amidst reports of an increasingly gloomy fiscal outlook, the federal government cannot afford to allow businesses to continue to accumulate unpaid payroll tax debt with little consequence. 
For these reasons, it is vital that IRS use the full range of its collection tools against businesses with significant payroll tax debt and have performance measures in place to monitor the effectiveness of IRS’s actions to collect and prevent the further accumulation of unpaid payroll taxes. Businesses that continue to accumulate unpaid payroll tax debt despite efforts by IRS to work with them are demonstrating that they are either unwilling or unable to comply with the tax laws. In such cases, because the decision to not file or remit payroll taxes is made by the owners or responsible officers of a business, IRS should consider strong collection action against both the business and the responsible owners or officers to prevent the further accumulation of unpaid payroll taxes and to collect those taxes for which the business and owners have a legal and fiduciary obligation to pay. IRS faces difficult challenges in balancing the use of aggressive collection actions against taxpayer rights and individuals’ livelihoods. However, to the extent IRS does not pursue aggressive collection actions against businesses with multiple quarters of unpaid payroll taxes, there is a significant concern as to whether IRS is acting in the best interests of the federal government, the employees of the businesses involved, the perceived fairness of the tax system, or overall compliance with the tax laws. Therefore, it is incumbent upon IRS to revise its approach and develop performance measures that include the appropriate use of the full range of available enforcement tools against egregious offenders to prevent their businesses from accumulating tax debt. It is also incumbent upon IRS to proactively seek out and appropriately implement other tools (particularly those with demonstrated success at the state level) to enhance IRS’s ability to prevent the further accumulation of unpaid payroll taxes and to collect those taxes that are owed. 
Although IRS does need to work with businesses to try to gain voluntary tax compliance, for businesses with demonstrated histories of egregious abuse of the tax system, IRS needs to alter its approach to focus on stopping the accumulation of additional unpaid payroll tax debt. Our companion report being released today contains six recommendations to IRS to address issues regarding its ability to prevent the further accumulation of unpaid payroll taxes and collect such taxes. The recommendations include (1) developing a process and performance measures to monitor collection actions taken by revenue officers against egregious payroll tax offenders and (2) developing procedures to file notices of federal tax lien against egregious businesses more promptly and to assess penalties to hold responsible parties personally liable for not remitting withheld payroll taxes. Mr. Chairmen and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee and subcommittee have at this time. For future contacts regarding this testimony, please contact Steven J. Sebastian at (202) 512-3406 or sebastians@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO previously reported that federal contractors abuse the tax system with little consequence. While performing those audits, GAO noted that much of the tax abuse involved contractors not remitting to the government payroll taxes that were withheld from salaries. As a result, GAO was asked to review the Internal Revenue Service's (IRS) processes and procedures to prevent and collect unpaid payroll taxes and determine (1) the magnitude of unpaid federal payroll tax debt, (2) the factors affecting IRS's ability to enforce compliance or pursue collections, and (3) whether some businesses with unpaid payroll taxes are engaged in abusive or potentially criminal activities with regard to the federal tax system. To address these objectives, GAO analyzed IRS's tax database, performed case study analyses of payroll tax offenders, and interviewed collection officials from IRS and several states. IRS records show that, as of September 30, 2007, over 1.6 million businesses owed over $58 billion in unpaid federal payroll taxes, including interest and penalties. Some of these businesses took advantage of the existing tax enforcement and administration system to avoid fulfilling or paying federal tax obligations--thus abusing the federal tax system. Over a quarter of these unpaid payroll taxes are owed by businesses with more than 3 years (12 tax quarters) of unpaid payroll taxes. Some of these business owners repeatedly accumulated tax debt from multiple businesses. For example, IRS found over 1,500 individuals to be responsible for nonpayment of payroll taxes at three or more businesses, and 18 were responsible for not remitting payroll taxes for a dozen different businesses. Although IRS has powerful tools at its disposal to prevent the further accumulation of unpaid payroll taxes and to collect the taxes that are owed, IRS's current approach does not provide for their full, effective use. 
IRS's overall approach to collection focuses primarily on gaining voluntary compliance--even for egregious payroll tax offenders--a practice that can result in minimal or no actual collections for these offenders. Additionally, IRS has not always promptly filed liens against businesses to protect the government's interests and has not always taken timely action to hold responsible parties personally liable for unpaid payroll taxes. GAO selected 50 businesses with payroll tax debt as case studies and found extensive evidence of abuse and potential criminal activity in relation to the federal tax system. The business owners or officers in our case studies diverted payroll tax funds for their own benefit or to help fund business operations.
In general, the term “gatekeeping” refers to the responsibilities and activities that entities—VA, Education, and Labor—undertake to determine whether postsecondary educational and training programs and institutions meet federal requirements. Although the standards, procedures, and methods used by the entities may differ, the overriding purpose of gatekeeping remains the same regardless of the programs or agencies involved. To assess the overlap that occurs, it is important to first understand each of the three agencies’ particular gatekeeping approaches. VA administers a number of programs designed to assist individuals in gaining access to postsecondary education or training for a specific occupation. VA generally provides its assistance in the form of payments to veterans, service persons, reservists, and certain spouses and dependents. Before an individual entitled to VA education assistance can obtain money for an education or training program, the program must be approved by an SAA, or by VA in those cases in which an SAA has not been contracted to perform the gatekeeping work. In all, 61 SAAs existed in the 50 states, the District of Columbia, and Puerto Rico during 1994. SAAs are responsible both for determining which courses should be approved and for ensuring that schools comply with their established standards relating to the approved courses. According to a VA official, SAAs are generally expected to make an annual supervisory visit to each school with enrolled education beneficiaries. In fiscal year 1994, about 95 percent of SAA staff performed these primary functions for academic and vocational schools, with the remaining 5 percent covering apprenticeship and other OJT training programs. Contract costs paid to each SAA by VA primarily represent reimbursements to the state for salaries and travel and an allowance for administrative expenses. 
For budgetary purposes, costs are allocated using formula-driven guidelines and are largely dependent on such factors as projected school or training program workloads, state employee salary schedules, and the distances SAA officials must travel to inspect or supervise schools or training programs. SAA contracts have been the focus of cost-cutting activity in recent years. VA officials said that before fiscal year 1988, VA was spending about $17 million to $18 million annually for SAA contracts. Starting in fiscal year 1988, the Congress set an annual funding cap of $12 million. For fiscal year 1994, the 61 SAAs requested VA funding totaling $14.4 million but received $12 million. These requests were to support a total of 164 professional staff in SAAs whose staffing ranged from 12.3 positions to less than 0.5 position. For fiscal year 1995, the Congress increased the cap to $13 million. Most of the aid associated with Education’s programs is provided in the form of grants and guaranteed student loans under title IV of the Higher Education Act of 1965, as amended. In fiscal year 1994, postsecondary student aid administered by Education totaled more than $32 billion, with more than 6.6 million students receiving some form of assistance. Education’s approach involves activities conducted by a gatekeeping “triad” composed of accrediting agencies, state licensing agencies, and Education itself. In order for students attending a school to receive title IV financial aid, the school must be (1) accredited by an entity recognized for that purpose by the Secretary of Education, (2) licensed or otherwise legally authorized to provide postsecondary education in the state in which it is located, and (3) certified to participate in federal student aid programs by Education. Each part of the gatekeeping triad has its own responsibilities. 
Although specific responsibilities differ, parts of the triad may be evaluating similar areas, such as aspects of a school’s curriculum, students’ progress, or the school’s financial capability to participate in title IV programs. Accreditation is an essential step in Education’s gatekeeping process, in that unaccredited schools or programs are ineligible to participate in title IV programs. The process of accreditation is a nongovernmental peer evaluation that is performed by more than 90 accrediting associations of regional or national scope. Each accrediting body applies a relevant set of standards to the institution, department, or program under review. Those that meet the standards become accredited. To participate in title IV programs, each educational institution must also have legal authority to operate in the state in which it is located. At the state level, licensing or other approval is conducted by a state agency. Each of the states has its own agency structure, and each state can choose its own set of standards. Education’s own responsibilities include determining the administrative and financial capacity of schools to participate in title IV programs and monitoring the performance of accrediting and licensing bodies. In all, more than 7,500 postsecondary institutions were certified to participate in title IV student aid programs by Education in 1994. Apprenticeship programs are a focus of Labor’s gatekeeping activities. Under the National Apprenticeship Act of 1937, Labor establishes and promotes labor standards to safeguard the welfare of apprentices. Eligibility for various federal programs, including VA education assistance to veterans attending apprenticeship programs, is conditioned upon conformance to these standards. 
The standards require, for example, that an apprenticeship program (1) provide for periodic review and evaluation of the apprentice’s progress in job performance and related instruction and (2) prepare appropriate progress records documenting such reviews. Labor’s Bureau of Apprenticeship and Training determines whether a program conforms to Labor’s standards. If the program is found to be in conformance, it can be “registered,” either by Labor or by a state apprenticeship agency or council that Labor has recognized. After examining gatekeepers’ activities, comparing their assessment standards, and conducting other analyses, we determined that most SAA activity overlapped work done by others. More specifically, an estimated 87 percent of SAA staff time, costing about $10.5 million of the $12 million spent by VA in fiscal year 1994, was spent reviewing and approving courses at academic and vocational schools that were also accredited by Education-approved agencies (see fig. 1). An estimated 3 percent of SAA staff time, costing about $400,000, was spent assessing apprenticeships, but we could not readily determine whether this activity overlapped Labor’s efforts. The remaining portion of SAA staff time, costing about $1.1 million, was spent on gatekeeping functions that did not overlap the efforts of other entities. Most SAA activity occurred at academic and vocational schools that had been accredited by nationally recognized accrediting agencies—part of the activity of Education’s gatekeeping triad. In fiscal year 1994, SAAs reviewed and approved 6,294 academic and vocational schools that had been accredited by accrediting agencies. These schools were also potentially subject to the two other parts of Education’s gatekeeping triad. We examined how likely it was that these schools had also been certified by Education itself. We selected a judgmental sample of five states (Mississippi, Vermont, Washington, West Virginia, and Wyoming) and the District of Columbia. 
For these six jurisdictions, we obtained (1) a list from VA of 273 SAA-approved vocational and academic schools that had also been accredited and (2) a list from Education of all schools that were Education-certified. In all, 255 (93 percent) of the schools on the VA list were also Education-certified. While SAA reviews may differ somewhat from those conducted by Education gatekeepers, SAAs and Education use similar standards for approving education and training programs. Both VA and Education base their standards for approving or certifying schools and courses on federal laws and regulations. We identified 15 key standards in the law and regulations that academic and vocational schools must meet to be approved by SAAs (see app. IV). We compared these key standards with those used by accrediting bodies, states, and Education and found them to be similar (see app. V). Examples follow. A school seeking SAA approval must have a policy that gives veterans appropriate credit for previous education and training. Of the seven accrediting agencies whose standards we reviewed, five required schools to have such a policy, and the policies were similar. Schools seeking SAA approval must also demonstrate that they have sufficient financial resources to ensure their proper operation and to fulfill their commitment to provide quality education for their students. Both Education and accreditation agencies had similar requirements concerning financial resources. The possibility exists that SAA reviews of apprenticeship programs also overlap Labor’s gatekeeping efforts. The law requires SAA approval of an apprenticeship if a student in the program is to receive VA educational assistance. Before approving such a program, an SAA must determine that the training establishment and its apprentice courses are in conformance with Labor’s standards of apprenticeship. However, VA regulations do not require that an SAA-approved apprenticeship program be registered by Labor. 
While the potential for overlap exists, we were unable to determine if it actually occurred because data were not available to determine whether SAA-approved programs were also registered by Labor. About 9 percent of SAAs’ staff effort did not overlap other gatekeeping efforts. This portion of SAA activity fell into two categories: approval of unaccredited schools and programs, and approval of OJT programs other than apprenticeships. Unaccredited institutions. Under the law, SAAs may approve courses of study at unaccredited institutions, thereby making veterans eligible to receive assistance for attending. By contrast, Education’s regulations generally require schools to be accredited before they are certified, thereby making students eligible for title IV programs. As of September 30, 1994, SAAs had approved courses of study for veterans at 534 unaccredited academic and vocational schools. The SAA staff that reviewed and approved these schools—about 7 percent of SAA staff—did not duplicate Education’s efforts. Other OJT programs. SAAs also review and approve other OJT programs that do not qualify as apprenticeship programs and that are not subject to review and registration by Labor. SAAs’ efforts to assess other OJT programs thus did not overlap Labor’s gatekeeping efforts. We estimate that for fiscal year 1994, these approvals took about 2 percent of SAA staff time. The substantial amount of overlap that occurred between SAA and other gatekeepers’ efforts raises questions about whether SAA efforts should continue at their current level. We estimated that 87 percent of the approval effort expended by SAAs related to schools and programs also subject to accreditation by Education-approved entities. Also, in our review of six jurisdictions, 93 percent of the accredited schools were also certified by Education to participate in title IV student aid programs. School certification involves applying standards that are similar to those used by SAAs. 
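The overlap estimates above come down to simple arithmetic on the figures reported for fiscal year 1994. As a minimal sketch (the variable names are illustrative, not GAO's), the dollar amounts and percentages can be recomputed as follows:

```python
# Back-of-the-envelope check of the overlap figures reported for fiscal year 1994.
# Inputs are taken from the report; variable names are illustrative, not GAO's.

saa_funding = 12_000_000     # annual funding cap for the 61 SAA contracts
overlap_share = 0.87         # share of SAA staff time spent on accredited schools

overlapping_cost = saa_funding * overlap_share
print(f"Overlapping effort: ~${overlapping_cost / 1e6:.1f} million")  # ~$10.4 million,
# which the report rounds to about $10.5 million

# Certification overlap in the six sampled jurisdictions
education_certified = 255
saa_approved_sample = 273
print(f"Also Education-certified: {education_certified / saa_approved_sample:.0%}")  # 93%
```

The report's remaining categories (about $400,000 for apprenticeship assessments and about $1.1 million of non-overlapping effort) are the residual shares of the same $12 million total.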
On its face, an SAA review of courses of study at an Education-certified school would appear to add only marginal value. The same may be true for SAA reviews of apprenticeship programs, though the lack of information precludes us from determining if overlap exists with Labor’s oversight. We believe an opportunity exists for reducing federal expenditures by over $10 million annually through the elimination of overlapping SAA gatekeeping efforts. VA and SAA efforts would be better focused on such activities as reviewing courses offered by unaccredited schools, for which no other form of federal oversight currently exists. The Congress may wish to consider whether it is necessary for VA to continue contracting with SAAs to review and approve educational programs at schools that have already been reviewed and certified by Education. We requested comments on a draft of this report from the Secretaries of Education and Veterans Affairs. Education provided several clarifying and technical suggestions, which we incorporated where appropriate. In general, VA said that it has reservations about relying upon Education’s gatekeeping system to ensure the integrity and quality of education and training programs made available to VA education program beneficiaries. VA’s two principal comments were that (1) the draft report did not elaborate on the specific mechanisms or organizational elements within Education that are in place to ensure that the requirements of title 38 of the U.S. Code are met and (2) it is questionable whether accreditation, in the absence of funding for the state postsecondary review entities (SPRE) program, will accomplish the approval, monitoring, and supervisory requirements of the laws governing VA education programs. In the report, we do discuss Education’s gatekeeping triad composed of accrediting agencies, state licensing agencies, and Education itself, which performs the same basic function as SAAs for many of the same schools. 
Under title 38, the essential responsibility of SAAs is to determine which courses should be approved and to ensure that schools are complying with their established standards relating to the courses that have been approved before an individual entitled to VA education assistance can obtain money for an education or training program. Education’s gatekeeping triad does similar work: assessing whether schools and training programs offer education of sufficient quality for students to receive federal financial assistance under title IV of the Higher Education Act, as amended. In fiscal year 1994, the Department of Education provided more than $32 billion in financial aid to 6.6 million students. The SPRE program has never been fully operational, and only nine states’ SPREs had been approved by Education as of September 30, 1995. Thus, the elimination of SPRE funding should have little impact on the operation of the gatekeeping triad. In addition, before the SPRE program was initiated, the majority of education and training programs approved by SAAs were offered by schools that were also accredited and certified by Education’s gatekeeping system. And, as illustrated in this report, we found that both VA and Education gatekeepers apply similar standards in determining educational program acceptability at the same schools. VA also said that the role states and SAAs perform in approving education and training programs should continue and that it believes that such a function should not be centralized at the federal level. However, as noted in our report, just as the SAA functions are not totally centralized at the federal level, neither are the gatekeeping efforts of Education’s triad, which relies on the nonfederal work of accrediting entities and state licensing bodies to perform an important portion of the school approval work. The full text of VA’s comments appears in appendix VI of this report. 
Copies of this report are being sent to the Chairman and Ranking Minority Member, House Committee on Veterans’ Affairs; the Secretaries of Veterans Affairs, Education, and Labor; appropriate congressional committees; and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions regarding this report. Major contributors include Joseph J. Eglin, Jr., Assistant Director; Charles M. Novak; Daniel C. Jacobsen; and Robert B. Miller. To determine the functions of SAAs, we reviewed various VA and SAA documents, including regulations, policies, procedures, contracts, budget submissions, training manuals, and congressional testimony. We also held discussions with VA, SAA, and National Association of State Approving Agencies officials. On the basis of these efforts and additional discussions with officials from Education and Labor, we confirmed that the work of Education and Labor gatekeepers would be most appropriate to compare with SAA gatekeeping work. As an indicator of overlapping or duplicative functions, we analyzed SAAs’ gatekeeping activities for fiscal year 1994 to determine the extent that schools with SAA-approved courses of study were also reviewed as part of Education’s gatekeeping system. Since much of the SAA data we needed for analysis were not centrally available from VA, the VA central office gathered the information we requested from its regional offices and provided it to us. We did not verify the accuracy of this information. VA was unable to readily provide a listing of SAA-approved apprenticeship programs or to determine whether such approved programs were also registered by Labor. Therefore, we had no basis on which to determine the existence or the extent of overlapping functions between SAAs and Labor for apprenticeship programs. 
As an indicator of the similarities between Education and VA gatekeeping work, we identified, from the law and VA regulations, key standards used by SAAs in reviewing schools and educational courses and compared them with standards used by Education in evaluating schools for participation in title IV programs. The focus of our review was overlapping and duplicative functions between SAAs and other entities; we were not asked to analyze the effectiveness of these functions. SAAs administer VA’s largest education benefits programs: the Montgomery G.I. Bill, the Post-Vietnam Era Veterans’ Educational Assistance, and the Survivors’ and Dependents’ Educational Assistance programs. In fiscal year 1994, these programs served 453,973 trainees at an estimated cost of about $1 billion (see table II.1), an average of $2,223 per trainee. The Montgomery G.I. Bill, which covers veterans, military personnel, and selected reservists, is the largest program and accounts for over 85 percent of the total funds expended. VA categorizes the types of training allowed under its educational programs as academic—degree and certain professional programs at institutions of higher learning; vocational—noncollege degree, vocational, or technical diploma or certificate programs; apprenticeship—OJT typically requiring a minimum of 2,000 hours’ work experience supplemented by related classroom instruction, leading to journeyman status in a skilled trade; and other OJT—typically requiring supervised job instruction for a period of not less than 6 months and not more than 2 years, leading to a particular occupation. During fiscal year 1994, over 91 percent of VA education beneficiaries received academic training at institutions of higher learning (see fig. II.1). The focus of accrediting bodies is to determine the quality of education or training provided by the institutions or programs they accredit. 
In general, institutions of higher education are permitted to operate with considerable independence and autonomy. As a consequence, American educational institutions can vary widely in the character and quality of their programs. To ensure a basic level of quality, the practice of accreditation arose in the United States as a means of conducting nongovernmental peer evaluation of educational institutions and programs. Private educational associations of regional or national scope have adopted standards reflecting the qualities of a sound educational program and have developed procedures for evaluating institutions or programs to determine whether they are operating at basic levels of quality. Educational accreditation can be institutional or specialized. Institutional accreditation involves assessing the educational quality of an entire institution; this type of accreditation is used when each of an institution’s parts is seen as contributing to the achievement of the institution’s objectives. At the end of fiscal year 1994, the Secretary of Education recognized nine institutional accrediting commissions or agencies, covering six geographical regions of the country, as qualified to perform accreditation. In addition, eight national institutional accrediting commissions or agencies were recognized by the Secretary. Specialized, or programmatic, accreditation usually applies to particular programs, departments, or schools. Most of the specialized accrediting agencies review units within higher education institutions that have been institutionally accredited. At the end of fiscal year 1994, 74 specialized accrediting agencies were also recognized by the Secretary as qualified to perform accreditation throughout the nation. State licensing agencies authorize educational institutions to operate within their borders. Schools must be licensed by each state in order to participate in the title IV program. 
In addition to licensing agencies, several states have created SPREs under the Higher Education Amendments of 1992, in part, to reduce program fraud and abuse. Under the 1992 amendments, the federal government provided funding for states that choose to create SPREs to produce a more active and consistent state role in the gatekeeping structure. SPREs are charged with developing review standards, in consultation with institutions in the state, for approval by the Secretary of Education. SPREs then use these standards as criteria for reviewing educational institutions referred to them by the Secretary. Those institutions that do not satisfy SPRE review standards may be required to comply or cease participating in title IV programs. The future of SPREs is in doubt because their funding was rescinded by the 104th Congress (P.L. 104-19). As the federal representative in the gatekeeping triad, the role of Education is varied. First, Education is responsible for determining the administrative and financial capacity of institutions to participate in title IV programs. It also determines whether each applicant school has met all eligibility requirements (including accreditation and state licensing) before it certifies the school for participation in title IV programs. Finally, Education monitors and oversees the responsibilities of the other two triad members by (1) recognizing and publishing a list of those accrediting agencies the Secretary believes are reliable authorities as to the quality of education or training offered by institutions of higher education and ensuring that these agencies have appropriate standards for conducting their accreditation work and (2) evaluating and approving (or disapproving) each SPRE’s review standards and referring specific educational institutions to a SPRE for review. We identified from the law and regulations the following key standards that VA and SAAs used in reviewing education and training programs at participating schools.
1. Information in school catalogs is to cover such things as enrollment requirements; student progress (that is, grading and absences) and conduct; refunds; schedule of charges; course outlines; faculty; and school calendar.
2. Schools are to maintain adequate records of and enforce policies on student progress and conduct, including attendance records for nondegree programs.
3. Schools are to maintain records of and proper credit for students’ previous education.
4. Schools or courses are to be accredited by a nationally recognized agency. Alternatively, course quality, content, and length are to be consistent with similar courses of other schools, with recognized accepted standards.
5. Course credit is to be awarded in standard semester or quarter hours or by college degree, or courses are to lead to a vocational objective and certificate of completion.
6. Space, equipment, facilities, and instructional material should be adequate.
7. Schools should have a sufficient number of adequately educated and experienced personnel.
8. Schools’ personnel are to be of good reputation and character.
9. Schools are to be financially sound.
10. Schools should maintain a pro rata refund policy for student tuition and charges.
11. Schools’ advertising, sales, and enrollment practices should not be erroneous, deceptive, or misleading.
12. Schools must comply with various government safety codes and regulations.
13. Schools’ courses of study must have had a 2-year period of operation prior to enrollment of students receiving VA program benefits (except training establishment courses).
14. A school is precluded from approval when more than 85 percent of its enrolled students are having their costs paid in part by the school or VA.
15. Under certain conditions, courses offered at a school branch or extension may be approved in combination with courses offered at the parent facility. 
We reviewed the standards of seven accrediting bodies as representative of the 91 accreditors that were recognized nationally by the Secretary of Education at the end of fiscal year 1994. Four accrediting bodies were specialized program accreditors covering the entire nation, and three were institutional accreditors covering various regions of the country. The seven accrediting bodies’ standards we reviewed follow. The Accrediting Bureau of Health Education Schools’ Manual for Allied Health Education Schools, 5th edition, 1989. The Bureau accredits private and proprietary postsecondary health education institutions and specialized programs (primarily certificate or associate degree) for medical assistant and medical laboratory technician. The American Assembly of Collegiate Schools of Business’ Achieving Quality and Continuous Improvement Through Self-Evaluation and Peer Review: Standards for Accreditation in Business Administration and Accounting, April 1994. The Assembly accredits any institutionally accredited collegiate institution offering degrees in business administration and accounting. The American Culinary Federation Educational Institute Accrediting Commission’s Policies, Procedures, and Standards, April 1994. The Commission accredits programs that award postsecondary certificates or associate degrees in the culinary arts or food service management areas at accredited institutions or to nationally registered apprenticeship programs. The Computer Science Accreditation Commission of the Computing Sciences Accreditation Board’s Criteria for Accrediting Programs in Computer Science in the United States, June 1992. The Board accredits 4-year baccalaureate programs in computer science. The Middle States Association of Colleges and Schools Commission on Higher Education’s Characteristics of Excellence in Higher Education: Standards for Accreditation, February 1994 (five states and the District of Columbia, Puerto Rico, and the Virgin Islands). 
The Commission accredits degree-granting institutions of higher education. The North Central Association of Colleges and Schools Commission on Institutions of Higher Education’s Handbook of Accreditation, September 1994 (19 states). The Commission accredits degree-granting institutions of higher education. The Northwest Association of Schools and Colleges Commission on Colleges’ Accreditation Handbook, 1994 edition (seven states). The Commission accredits institutions, rather than specific programs, whose principal programs lead to formal degrees, associate and higher. We reviewed the state review standards for SPREs that are provided in federal regulation 34 C.F.R., part 667, subpart C. The standards we reviewed included the following rules and procedures that Education uses. To determine whether an educational institution qualifies in whole or in part as an eligible higher education institution under the Higher Education Act: 34 C.F.R., part 600. To determine a higher education institution’s financial responsibility: 34 C.F.R. 668.15, and to determine its administrative capability: 34 C.F.R. 668.16. To ensure that accrediting agencies are, for the Higher Education Act and other federal purposes, reliable authorities as to the quality of education or training offered by the higher education institutions or programs they accredit: 34 C.F.R., part 602. 
Pursuant to a congressional request, GAO determined the extent to which state approving agency (SAA) assessment activities overlap the efforts of other agencies. GAO found that: (1) $10.5 million of the $12 million paid to SAAs in 1994 was spent to conduct assessments already performed by the Department of Education; (2) these assessments involved reviews of accredited academic and vocational schools; (3) the remaining SAA assessment activities did not overlap the activities of other agencies, since they involved on-the-job training programs and unaccredited schools; and (4) although SAAs use evaluation standards that differ from those of other reviewing agencies, SAA activity should be reduced to schools and programs not subject to Department of Education approval.
Congress authorized State’s ATA program in 1983 through the Foreign Assistance Act. According to the legislation, and as noted above, the purpose of ATA is “(1) to enhance the antiterrorism skills of friendly countries by providing training and equipment to deter and counter terrorism; (2) to strengthen the bilateral ties of the United States with friendly governments by offering concrete assistance in this area of great mutual concern; and (3) to increase respect for human rights by sharing with foreign civil authorities modern, humane, and effective antiterrorism techniques.” ATA offers a wide range of counterterrorism assistance to partner nations, but most assistance consists of (1) training courses on tactical and strategic counterterrorism issues and (2) grants of counterterrorism equipment, such as small arms, bomb detection equipment, vehicles, and computers. DS/T/ATA also provides specialized consultations to partner nations on specific counterterrorism issues on an as-needed basis. ATA curricula and training focus on enhancing critical counterterrorism capabilities, which cover issues such as crisis management and response, cyberterrorism, dignitary protection, bomb detection, airport security, border control, kidnap intervention and hostage negotiation and rescue, response to incidents involving weapons of mass destruction, countering terrorist finance, and interdiction of terrorist organizations. According to DS/T/ATA, all of its courses emphasize law enforcement under the rule of law and sound human rights practices. DS/T/ATA provides training primarily through contract employees and interagency agreements with other U.S. law enforcement agencies. DS/T/ATA selects, oversees, and evaluates all contracted instructors. According to DS/T/ATA, most instructors are retired law enforcement or military personnel who have expertise specific to the ATA curricula. 
DS/T/ATA provides training both onsite in the partner nation and at facilities in the United States, depending on the nature of the course and the availability of special equipment and necessary facilities. However, in fiscal year 2007, DS/T/ATA delivered nearly 90 percent of all training overseas due, in part, to the lack of domestic facilities in the United States during a transition in contracting for U.S.-based facilities. ATA has delivered an increasing share of its assistance overseas over the past several years. An S/CT official noted that the trend reflects a recognition that training is generally more effectively delivered in the partner nation. DS/T/ATA has provided most overseas assistance by sending instructors to the partner nation to conduct a specific course. The partner nation and the U.S. embassy provide support in designating a facility or training site and assisting DS/T/ATA headquarters staff with other logistical issues. DS/T/ATA has established an in-country training presence through bilateral arrangements with six priority partner nations: Afghanistan, Colombia, Indonesia, Kenya, Pakistan, and the Philippines. These countries were the largest recipients of program assistance from fiscal year 2002 through fiscal year 2007. In general, these programs included permanent training facilities such as classrooms, computer labs, and shooting and demolition ranges, which DS/T/ATA used to provide training on an ongoing basis. Each of the in-country programs has a permanently posted in-country ATA program manager, along with other ATA staff at the U.S. post in the host nation—in some cases, in-country staff included trainers and course instructors. (See fig. 1.) ATA is State’s largest counterterrorism program and receives appropriations under the Nonproliferation, Anti-Terrorism, Demining, and Related Programs account. Fiscal year 2002 appropriations for ATA increased to about $158 million, over six times the level of funding appropriated in fiscal year 2000.
Appropriations for the program have fluctuated since fiscal year 2002 and increased to over $175 million in fiscal year 2007, including supplemental appropriations. (See fig. 2.) From fiscal years 2002 to 2007, program assistance for the top 10 recipients of ATA allocations ranged from about $11 million to about $78 million. The top 10 recipients represented about 57 percent of ATA funding allocated for training and training-related activities over the 6-year period. ATA funding for the other 89 partner nations that received assistance during this period ranged from $9,000 to about $10.7 million. (See app. II for additional information on ATA funding for specific partner nations.) The Coordinator for Counterterrorism, the head of S/CT, is statutorily charged with the overall supervision (including policy oversight of resources) and coordination of the U.S. government’s counterterrorism activities. The broadly mandated role of the Assistant Secretary for Diplomatic Security, the head of the Bureau of Diplomatic Security, includes implementing security programs to protect diplomatic personnel and advising chiefs of mission on security matters. Specific roles and responsibilities for S/CT and DS/T/ATA regarding ATA are described in a 1991 internal policy guidance memorandum and the Omnibus Diplomatic Security Act of 1986, and are incorporated into State’s Foreign Affairs Manual. Table 1 provides a summary of key responsibilities described in the guidance. As shown in table 1, S/CT is responsible for leading the initial assessment of a partner nation’s counterterrorism needs, and DS/T/ATA is responsible for developing annual, country-specific plans. Under current program operations, DS/T/ATA conducts an initial assessment of a new participant nation’s counterterrorism capabilities, and conducts subsequent assessments—referred to as program reviews—every 2 to 3 years thereafter.
In general, the needs assessments include input from the embassy teams, but the assessments themselves are conducted by technical experts contracted by DS/T/ATA. According to DS/T/ATA, the purpose of the needs assessment and program review process is to determine the forms of assistance for a partner nation to detect, deter, deny, and defeat terrorism, and to evaluate program effectiveness. ATA lacks guidance beyond a tiered list of priority countries, and assistance is not systematically aligned with counterterrorism needs. S/CT provides minimal policy guidance to help determine ATA priorities and ensure that assistance provided supports broader U.S. policy goals. In addition, S/CT and DS/T/ATA did not systematically use country-specific needs assessments and program reviews to plan what types of assistance to provide partner nations in accordance with State policy guidance. The assessments we reviewed had weaknesses and inconsistencies. In accordance with the 1991 State policy guidance memorandum, S/CT prepares a tiered list of countries to help prioritize and determine where to provide ATA assistance. However, S/CT provides little additional guidance to DS/T/ATA regarding program priorities and how to allocate program funding. Additionally, other factors besides those reflected in the tiered list influence which countries receive assistance. According to State officials, S/CT places countries on the tiered list in one of four priority categories based on criteria that address several factors, including country-specific threats and the level and depth of diplomatic and political engagement in a country. State officials indicated that other factors also may be considered in determining whether and where a country is placed on the list, such as the presence of a U.S. military base or a planned international sporting or cultural event with U.S. participation.
Since 2006, S/CT has reviewed and discussed the tiered list—including changes, additions, or deletions—with DS/T/ATA during quarterly meetings. DS/T/ATA officials stated that DS/T/ATA was able to provide more substantial input and suggestions for the latest version of the tiered list because S/CT provided a draft list to DS/T/ATA for comment for the first time prior to the August 2007 meeting. As of August 2007, over 70 countries were on the list, with 12 to 24 countries in each of the four categories. However, countries were not ranked or prioritized within each category. In addition to the quarterly meetings, S/CT officials told us that they had established a series of regional roundtable discussions in 2006 between S/CT regional subject experts and DS/T/ATA counterparts. According to an S/CT official, the roundtables are intended as a means of identifying priority countries and their counterterrorism needs for purposes of developing budget requests. S/CT provides little guidance to DS/T/ATA beyond the tiered list, although the 1991 State policy guidance memorandum states that S/CT’s written policy guidance for the program should include suggested country training priorities. State’s Office of Inspector General previously reported that earlier versions of S/CT’s tiered list included additional guidance, such as the rationale for support and suggested areas for training. However, S/CT began providing increasingly abbreviated guidance as its responsibilities beyond ATA grew after September 11, 2001. While S/CT provides some additional guidance to DS/T/ATA during quarterly meetings and on other occasions, DS/T/ATA officials in headquarters and the field stated they received little or no guidance from S/CT beyond the tiered list. Officials responsible for the ATA in-country program in Colombia stated they had minimal interaction with S/CT. As a result, neither S/CT nor DS/T/ATA can ensure that program assistance provided to specific countries supports broader U.S.
antiterrorism policy goals. Other factors beyond S/CT’s tiered list of countries, such as unforeseen events or new governmental initiatives, also influence which countries receive program assistance. We found that 10 countries on the tiered list did not receive ATA assistance in fiscal year 2007, while 13 countries not on the tiered list received approximately $3.2 million. S/CT and DS/T/ATA officials stated that assistance does not always align with the tiered list because U.S. foreign policy objectives sometimes cause State, in consultation with the President’s National Security Council, to provide assistance to a non-tiered-list country. According to the 1991 State policy guidance memorandum and DS/T/ATA standard operations procedures, ATA country-specific needs assessments and program reviews are intended to guide program management and planning. However, S/CT and DS/T/ATA did not systematically use the assessments to determine what types of assistance to provide to partner nations or develop ATA country-specific plans. In addition, the assessments we reviewed had several weaknesses and inconsistencies. Although the 1991 State policy memorandum states that S/CT should lead the assessment efforts, a senior S/CT official stated that S/CT lacks the capacity to do so. As a result, DS/T/ATA has led interagency assessment teams in recent years, but the assessments and recommendations for types of assistance to be provided may not fully reflect S/CT policy guidance concerning overall U.S. counterterrorism priorities. DS/T/ATA officials responsible for five of the top six recipients of ATA support—Colombia, Kenya, Indonesia, Pakistan, and the Philippines—did not consistently use ATA country needs assessments and program reviews in making program decisions or to create annual country assistance plans. DS/T/ATA officials responsible for the in-country programs in four of these countries had not seen the latest assessments for their respective countries. 
While some officials responsible for three of these five in-country programs stated they had reviewed at least one of the assessments conducted for their countries since 2000, the officials said that the assessments were either not useful or that they were used for informational purposes only.

The Regional Security Officer, Deputy Regional Security Officer, and DS/T/ATA Program Manager for Kenya had not seen any of the assessments that had been conducted for the country since 2000. Although the in-country program manager for Kenya was familiar with the assessments from her work in a previous position with DS/T/ATA, she stated that in general, the assessments were not very useful for determining what type of assistance to provide. She said that the initial needs assessment for Kenya failed to adequately consider local needs and capacity.

The Regional Security Officer and Assistant Regional Security Officer for Indonesia stated they had not seen the latest assessment for the country. The DS/T/ATA program manager for Indonesia said that he recalled using one of the assessments as a “frame of reference” in making program and resource decisions. The in-country program manager also recalled seeing one of the assessments, but stated that he did not find the assessment useful given the changing terrorist landscape; therefore, he did not share it with his staff.

The DS/T/ATA Program Manager for Pakistan stated that decisions on the types of assistance to provide in Pakistan were based primarily on the knowledge and experience of in-country staff regarding partner nation needs, rather than the needs assessments or program reviews. He added that he did not find the assessments useful, as the issues identified in the latest (2004) assessment for the country were already outdated.
We reviewed 12 of the 21 ATA country-specific needs assessments and program reviews that, according to ATA annual reports, DS/T/ATA conducted between 2000 and 2007 for five of the six in-country programs. The assessments and reviews generally included a range of recommendations for counterterrorism assistance, but did not prioritize assistance to be provided or include specific timeframes for implementation. Consequently, the assessments do not consistently provide a basis for targeting program assistance to the areas of a partner nation’s greatest counterterrorism assistance need. Only two of the assessments—a 2000 needs assessment for Indonesia and a 2003 assessment for Kenya—prioritized the recommendations, although a 2004 assessment for Pakistan and a 2005 assessment for the Philippines listed one or two recommendations as priority ATA efforts. In addition, the information included in the assessments was not consistent and varied in linking recommendations to capabilities. Of the 12 assessments we reviewed:

• Nine included narrative on a range of counterterrorism capabilities, such as border security and explosives detection, but the number of capabilities assessed ranged from 5 to 25. The 2001 needs assessment for Colombia included narrative on the government’s antikidnapping capability and equipment needs, but did not assess any counterterrorism capabilities. The 2002 assessment for Indonesia provided narrative on ATA assistance provided, but did not include an assessment of any counterterrorism capabilities.

• Only four of the assessments that assessed more than one capability linked the recommendations provided to the relevant capabilities. Most of the recommendations in the assessments we reviewed were for ATA assistance, although some recommended host government actions to improve counterterrorism capability, or other U.S. government assistance.

• Six included capability ratings, but the types of ratings used varied. A 2003 assessment for Colombia rated eight capabilities on a scale of 1 through 5 with definitions for each rating level; the 2004 assessment for Colombia rated 24 capabilities as poor, low, fair, or good, without any definitions.

• Two used a format that DS/T/ATA began implementing in 2001. The assessments following the new format generally included consistent types of information and clearly linked recommendations to an assessment of 25 counterterrorism capabilities. However, they did not prioritize recommendations or include specific timeframes for implementing them.

Although the 1991 State policy memorandum states that DS/T/ATA should create annual country assistance plans that specify training objectives and assistance to be provided based upon the needs assessments and program reviews, we found that S/CT and DS/T/ATA did not systematically use the assessments to create annual plans for the five in-country programs. DS/T/ATA officials we interviewed regarding the five in-country programs stated that in lieu of relying on the assessments or country assistance plans, program and resource decisions were primarily made by DS/T/ATA officials in the field based on their knowledge and experience regarding partner nation needs. Some DS/T/ATA officials said they did not find the country assistance plans useful. The program manager for Pakistan stated that he used the country assistance plan as a guide, but found that it did not respond to changing needs in the country. The ATA program manager for Kenya said that he had not seen a country assistance plan for that country. We requested ATA country assistance plans prepared during fiscal years 2000-2006 for the five in-country programs included in our review, but S/CT and DS/T/ATA only provided three plans completed for three of the five countries.
Specifically, S/CT and DS/T/ATA provided a 2006 ATA country assistance plan for Colombia, a 2007 plan for Pakistan, and a plan covering fiscal years 2006-2008 for the Philippines. DS/T/ATA officials stated that they were able to locate only draft and informal planning documents for Indonesia and Kenya, and that S/CT and DS/T/ATA did not develop plans for any programs prior to 2006. Of the three ATA country assistance plans DS/T/ATA provided, we found that the plans did not link planned activities to recommendations provided in the needs assessments and program reviews. The current plan for the Philippines included a brief reference to a 2005 needs assessment, but the plan did not identify which recommendations from the 2005 assessment were intended to be addressed by current or planned efforts. The plan for Pakistan did not mention any of the assessments conducted for that country. As a part of its responsibility, S/CT has established mechanisms to coordinate the ATA program with other U.S. government international counterterrorism training assistance and to help avoid duplication of efforts. S/CT chairs biweekly interagency working group meetings of the Counterterrorism Security Group’s Training Assistance Subgroup to provide a forum for high-level information sharing and discussion among U.S. agencies implementing international counterterrorism efforts. The Training Assistance Subgroup includes representatives from the Departments of State, Defense, Justice, Homeland Security, Treasury, and other agencies. S/CT also established the Regional Strategic Initiative in 2006 to coordinate regional counterterrorism efforts and strategy. S/CT described the Regional Strategic Initiative as a series of regionally based, interagency meetings hosted by U.S. embassies to identify key regional counterterrorism issues and develop a strategic approach to addressing them, among other goals. 
A senior S/CT official stated that the meetings have generated new regional training priorities for ATA. As of November 2007, Regional Strategic Initiative meetings have been held for the East Africa, Eastern Mediterranean, Iraq and Neighbors, Latin America, Southeast Asia, South Asia, Trans-Sahara, and Western Mediterranean regions. Based on our review of program documents, interviews, and meetings with officials in the four countries we visited, we did not find any significant duplication or overlap among U.S. agencies’ country-specific training programs aimed at combating terrorism. Officials we met with in each of these countries noted that they participated in various embassy working group meetings, such as Counterterrorism Working Group and Law Enforcement Working Group meetings, during which relevant agencies shared information regarding operations and activities at post. DS/T/ATA officials also coordinated ATA with other counterterrorism efforts through daily informal communication among cognizant officials in the countries we visited. In response to concerns that ATA lacked elements of adequate strategic planning and performance measurement, State recently took action to define goals and measures related to the program’s mandated objectives. S/CT and DS/T/ATA, however, do not systematically assess sustainability—that is, the extent to which assistance has enabled partner nations to achieve and sustain advanced counterterrorism capabilities. S/CT and DS/T/ATA lack clear measures and processes for assessing sustainability, and program managers do not consistently include sustainability in ATA planning. State did not have measurable performance goals and outcomes related to the mandated objectives for ATA prior to fiscal year 2003, but has recently made some progress in addressing the deficiency.
State’s Office of Inspector General recommended in 2001, 2005, and 2006 reports that S/CT and DS/T/ATA take steps to establish measurable long-term goals and evaluations of program performance. Similarly, State responded to issues raised in a 2003 Office of Management and Budget assessment of ATA by developing specific goals and measures for each of the program’s mandated objectives. Since fiscal year 2006, State planning documents, including department and bureau-level performance plans, have listed enabling partner nations to achieve advanced and sustainable counterterrorism capabilities as a key program outcome. S/CT and DS/T/ATA officials further confirmed that sustainability is the principal intended outcome and focus of program assistance. In support of these efforts, DS/T/ATA appointed a Sustainment Manager in November 2006. The Sustainment Manager’s broadly defined responsibilities include coordinating with other DS/T/ATA divisions to develop recommendations and plans to assist partner nations in developing sustainable counterterrorism capabilities. Despite progress towards establishing goals and intended outcomes, State has not developed clear measures and a process for assessing sustainability and has not integrated the concept into program planning. The Government Performance and Results Act of 1993 (GPRA) requires agencies in charge of U.S. government programs and activities to identify goals and report on the degree to which goals are met. S/CT and DS/T/ATA officials noted the difficulty in developing direct quantitative measures of ATA outcomes related to partner nations’ counterterrorism capabilities. However, GPRA and best practices cited by the Office of Management and Budget, us, and others provide flexible guidelines for agency and program managers to develop adequate measures of program effectiveness. 
Our past work also has stressed the importance of establishing program goals, objectives, priorities, milestones, and measures to use in monitoring performance and assessing outcomes as critical elements of program management and effective resource allocation. We found that the measure for ATA’s principal intended program outcome of sustainability is not clear. In its fiscal year 2007 Joint Performance Summary, State reported results and future year targets for the number of countries that had achieved an advanced, sustainable level of counterterrorism capability. According to the document, partner nations that achieve a sustainable level of counterterrorism would graduate from the program and no longer receive program assistance. However, program officials in S/CT and DS/T/ATA directly responsible for overseeing ATA were not aware that the Joint Performance Summary listed numerical targets and past results for the number of partner nations that had achieved sustainability, and could not provide an explanation of how State assessed the results. DS/T/ATA’s Sustainment Manager also could not explain how State established and assessed the numerical targets in the reports. The Sustainment Manager further noted that, to his knowledge, S/CT and DS/T/ATA had not yet developed systematic measures of sustainability. DS/T/ATA’s current mechanism for evaluating partner nation capabilities does not include guidance or specific measures to assess sustainability. According to program guidance and DS/T/ATA officials, needs assessments and program reviews are intended to establish a baseline of a partner nation’s counterterrorism capabilities and quantify progress through subsequent reviews. DS/T/ATA officials also asserted that the process is intended to measure the results of program assistance. However, the process does not explicitly address sustainability, and provides no specific information or instruction regarding how reviewers are to assess sustainability. 
Moreover, the process focuses on assessing a partner nation’s overall counterterrorism capabilities, but does not specifically measure the results of program assistance. The assessment and review process also does not provide S/CT and DS/T/ATA a means for determining whether a partner nation’s capabilities changed because of program assistance, the country’s own efforts, or through assistance provided by other U.S. agencies or third countries. The head of DS/T/ATA’s Assessment, Review, and Evaluations Unit told us that he had not received guidance to assess progress toward sustainability, and had only limited interaction with the Sustainment Manager on integrating sustainability into the assessment and review process. DS/T/ATA has not systematically integrated sustainability into country- specific assistance plans, and we found a lack of consensus among program officials about how to address the issue. In-country program managers, embassy officials, instructors, and partner nation officials we interviewed held disparate views on how to define sustainability across all ATA participant countries, and many were not aware that sustainability was the intended outcome for the program. Several program officials stated that graduating a country and withdrawing or significantly reducing program assistance could result in a rapid decline in the partner nation’s counterterrorism capabilities, and could undermine achieving other program objectives, such as improving bilateral relations. Further, although State has listed sustainability in State-level planning documents since 2006, S/CT and DS/T/ATA have not issued guidance on incorporating sustainability into country-specific planning, and none of the country assistance plans we reviewed consistently addressed the outcome. As a result, the plans did not include measurable annual objectives or planned activities targeted at enabling the partner nation to achieve sustainability. 
For example, Colombia’s assistance plan listed transferring responsibility for the antikidnapping training to the Colombian government and described planned activities to achieve that goal. However, the plan did not include measurable objectives to determine whether activities achieve intended results. Although the plan for the Philippines stated that the country program goal for fiscal year 2007 was to “maximize sustainment,” it did not include measures of sustainability or describe how planned activities would contribute to the intended outcome. Since 1996, State has not complied with a congressional mandate to report to Congress on U.S. international counterterrorism assistance. Additionally, State’s annual reports on ATA have contained inaccurate data regarding basic program information, do not provide systematic assessments of program results, and lack other information necessary to evaluate program effectiveness. The Foreign Assistance Act requires the Secretary of State to report annually on the amount and nature of all assistance provided by the U.S. government related to international terrorism. Since 1996, State has submitted ATA annual reports rather than the report required by the statute. The legislation that authorized ATA in 1983 required annual presentations to Congress of aggregate information on all countries that received program assistance. In 1985, Congress added a new, broader reporting obligation, requiring the Secretary of State to report on all assistance related to international terrorism provided by the U.S. government during the preceding fiscal year. Although the original ATA-specific 1983 reporting provision was repealed in 1996, the requirement for the broader report remains. S/CT is responsible for preparing the reports on U.S. international counterterrorism assistance. 
The S/CT official directly responsible for ATA told us that he only recently became aware of the reporting requirement and noted confusion within State over what the statute required. He also asserted that the ATA annual report, which is prepared by DS/T/ATA, and State’s annual “Patterns of Global Terrorism” report were sufficiently responsive to congressional needs. He further noted that, in his view, it would be extremely difficult for State to compile and report on all U.S. government terrorism assistance activities, especially given the significant growth of agencies’ programs since 2001. Officials in State’s Bureau of Legislative Affairs indicated that, to their knowledge, they had never received an inquiry from congressional staff about the missing reports. DS/T/ATA officials told us DS/T/ATA has continued to produce the ATA annual report to Congress even after the reporting requirement was removed in 1996. However, State has not issued DS/T/ATA’s annual report to Congress on ATA for fiscal year 2006 that was planned for release in 2007. DS/T/ATA officials noted that they did, however, complete and circulate the final report within State. Recent ATA annual reports have contained inaccurate data relating to basic program information on numbers of students trained and courses offered. For example, DS/T/ATA reported inaccurate data on program operations in ATA’s two top-funded partner nations—Afghanistan and Pakistan. Afghanistan. ATA annual reports for fiscal years 2002 to 2005 contain narrative passages describing various ATA training and training-related assistance activities for the Afghan in-country ATA program. According to these reports, 15 students were trained as part of a single training event over the 4-year period. DS/T/ATA subsequently provided us data for fiscal year 2005 training activity in Afghanistan, which corrected the participation total in that year from 15 participants in 1 training event to 1,516 participants in 12 training events. 
DS/T/ATA officials acknowledged the report disparities. Pakistan. According to the fiscal year 2005 ATA annual report, ATA delivered 17 courses to 335 participants in Pakistan that year. Supporting tables in the same report listed 13 courses provided to 283 participants. Further, a summary report provided to us from the DS/T/ATA internal database produced a third set of numbers describing 13 courses provided to 250 course participants during fiscal year 2005. DS/T/ATA officials acknowledged this inconsistency, but they were unable to identify which set of figures was correct. DS/T/ATA officials noted that similar inaccuracies could be presumed for prior years and for other partner nations. Significantly, the officials indicated that inaccuracies and omissions in reports of the training participants and events were due to a lack of internal policies and procedures for recording and reporting program data. In the absence of documented policies and procedures, staff developed various individual processes for collecting the information that resulted in flawed data reporting. Additionally, DS/T/ATA officials told us that its inadequate information management system and a lack of consistent data collection procedures also contributed to inaccurate reporting. DS/T/ATA’s annual reports to Congress on ATA from fiscal year 1997 to 2005 did not contain systematic assessments of program results. Further, the reports did not consistently include information on key aspects of the program, such as program activities, spending, and management initiatives that would be helpful to Congress and State in evaluating ATA. GPRA, Office of Management and Budget guidance, and our previous work provide a basis and rationale for the types of information that are useful in assessing program performance. 
According to this guidance, key elements of program reporting include clearly defined objectives and goals, comparisons of actual and projected performance that include at least 4 years of annual data, explanations and plans for addressing unmet goals, and reliable information on the program’s activities and financial activity. We reviewed ATA annual reports for fiscal years 1997 through 2005, and found that the reports varied widely in terms of content, scope, and format. Moreover, the annual reports did not contain systematic assessments of program performance or consistent information on program activity, such as number and type of courses delivered, types of equipment provided, and budget activity associated with program operations. In general, the reports contained varying levels of detail on program activity, and provided only anecdotal examples of program successes, from a variety of sources, including U.S. embassy officials, ATA instructors, and partner nation officials. DS/T/ATA program officials charged with compiling the annual reports for the past 3 fiscal years noted that DS/T/ATA does not have guidance on the scope, content, or format for the reports. Although ATA plays a central role in State’s broader effort to fight international terrorism, deficiencies in how the program is guided, managed, implemented, and assessed could limit the program’s effectiveness. Specifically, minimal guidance from S/CT makes it difficult to determine the extent to which program assistance directly supports broader U.S. counterterrorism policy goals. Additionally, deficiencies with DS/T/ATA’s needs assessments and program reviews may limit their utility as a tool for planning assistance and prioritizing among several partner nations’ counterterrorism needs. As a result, the assessments and reviews are not systematically linked to resource allocation decisions, which may limit the program’s ability to improve partner nations’ counterterrorism capabilities.
Although State has made some progress in attempting to evaluate and quantitatively measure program performance, ATA still lacks a clearly defined, systematic assessment and reporting of outcomes, which makes it difficult to determine the overall effectiveness of the program. This deficiency, along with State’s noncompliance with mandated reporting requirements, has resulted in Congress having limited and incomplete information on U.S. international counterterrorism assistance and ATA efforts. Such information is necessary to determine the most effective types of assistance the U.S. government can provide to partner nations in support of the U.S. national security goal of countering terrorism abroad. Congress should reconsider the requirement that the Secretary of State provide an annual report on the nature and amount of U.S. government counterterrorism assistance provided abroad given the broad changes in the scope and nature of U.S. counterterrorism assistance abroad, in conjunction with the fact that the report has not been submitted since 1996. We recommend that the Secretary of State take the following four actions: 1. Revisit and revise internal guidance (the 1991 State policy memorandum and Foreign Affairs Manual, in particular) to ensure that the roles and responsibilities for S/CT and DS/T/ATA are still relevant and better enable State to determine which countries should receive assistance and what type, and allocate limited ATA resources. 2. Ensure that needs assessments and program reviews are both useful and linked to ATA resource decisions and development of country-specific assistance plans. 3. Establish clearer measures of sustainability, and refocus the process for assessing the sustainability of partner nations’ counterterrorism capabilities. The revised evaluation process should include not only an overall assessment of partner nation counterterrorism capabilities, but also provide guidance for assessing the specific outcomes of ATA. 4.
Comply with the congressional mandate to report to Congress on U.S. international counterterrorism assistance. State provided us oral and written comments (see app. III) on a draft of this report. State also provided technical comments which we have incorporated throughout the report, as appropriate. Overall, State agreed with our principal findings and recommendations to improve its ATA program guidance, the needs assessment and program review process, and its assessments of ATA program outcomes. State noted that the report highlights the difficulties in assessing the benefits of developing and improving long-term antiterrorism and law enforcement relationships with foreign governments. State also outlined a number of ongoing and planned initiatives to address our recommendations. Some of these initiatives were underway during the course of our review and we refer to them in the report. We will follow up with State to ensure that these initiatives have been completed, as planned. However, although State supported the matter we suggest for congressional consideration, it did not specifically address our recommendation that it comply with the congressional mandate to report on U.S. counterterrorism assistance. As agreed with your office, unless you publicly announce the contents of the report earlier, we plan no further distribution until 30 days after the report date. At that time, we will send copies of the report to interested congressional committees and to the Secretary of State. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Charles Michael Johnson, Jr. (202) 512-7331, e-mail johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Other GAO contact and staff acknowledgments are listed in appendix IV. To assess State’s guidance for determining country recipients, aligning program assistance with partner nation needs, and coordinating Antiterrorism Assistance (ATA) with other U.S. government counterterrorism programs, we Interviewed cognizant officials from the Office of the Coordinator for Counterterrorism (S/CT) and the Bureau of Diplomatic Security, Office of Antiterrorism Assistance (DS/T/ATA) in Washington, D.C., including senior officials responsible for overseeing and managing ATA and ATA program managers responsible for each of the six in-country programs: Afghanistan, Colombia, Indonesia, Kenya, Pakistan, and the Philippines. Reviewed and analyzed State planning, funding, and reporting documents concerning ATA, including relevant reports from State’s Office of Inspector General on the management and implementation of ATA; S/CT’s fiscal year 2007 tiered lists of priority countries for ATA assistance and S/CT criteria for establishing the tier list; DS/T/ATA budget information for fiscal years 2000 to 2008; a 1991 State policy memorandum delineating S/CT’s and DS/T/ATA’s roles and responsibilities for ATA; relevant sections of State’s Foreign Affairs Manual summarizing roles and responsibilities for ATA; DS/T/ATA internal policy and procedure documents, including DS/T/ATA’s Assessment, Review and Evaluations Unit’s most current (2004) standard operating procedures; State documents and U.S. embassy cables regarding the Regional Strategic Initiative; and DS/T/ATA’s Annual Reports to Congress on the ATA for fiscal years 1997 to 2005.
Reviewed and analyzed available country-specific program documents for five of the in-country programs—Colombia, Indonesia, Kenya, Pakistan, and the Philippines—including country-specific needs assessments conducted for each of these partner nations; country assistance plans; data on the number of ATA courses provided and personnel trained in these countries; and memoranda of intent between the U.S. government and host country governments regarding ATA in these countries for fiscal years 2000 to 2007. These five countries were among the largest six recipients of program assistance for fiscal years 2002 to 2007, and each country received a range of ATA training and other assistance during the period we reviewed. DS/T/ATA was unable to provide four of the needs assessments that, according to annual reports, were conducted for two of these countries in that time, and was able to provide only three ATA country assistance plans that were completed for three of the five countries for fiscal years 2006 to 2008. Conducted fieldwork between July and September 2007 in four countries where ATA provides a range of assistance through an in-country presence: Colombia, Indonesia, Kenya, and the Philippines. These four programs represented about 55 percent of ATA allocations for training and training-related activities in fiscal year 2006, and about 43 percent of funding in fiscal year 2007. As this was not a generalizable sample, our observations in these four countries may not be representative of all programs. In these countries, we interviewed ATA in-country program managers, course instructors, and other contractors; U.S. embassy officials responsible for managing counterterrorism assistance and activities; and partner nation government officials. We also observed various types of ATA training and examined equipment that was provided to partner nation security units.
Additionally, to assess the extent to which State establishes clear ATA goals and measures sustainability of program outcomes, and State’s reporting on U.S. international counterterrorism assistance, we Interviewed cognizant officials from S/CT and DS/T/ATA in Washington, D.C., including senior officials responsible for overseeing and managing ATA and ATA program managers responsible for each of the six in-country programs: Afghanistan, Colombia, Indonesia, Kenya, Pakistan, and the Philippines. Additionally, we interviewed cognizant officials in DS/T/ATA’s Assessment, Review, and Evaluations Unit, Training Curriculum Division, Training Delivery Division, and Training Management Division, including the Sustainment Manager. Reviewed and analyzed State strategic planning and performance reporting documents related to ATA for fiscal years 2001 to 2007, including State budget justifications, State Performance Plans; State Performance Summaries; Bureau Performance Plans; Mission Performance Plans for Afghanistan, Colombia, Indonesia, Kenya, Pakistan, and the Philippines; and DS/T/ATA annual reports to Congress on ATA as noted above. We also reviewed Office of Management and Budget’s fiscal year 2003 review of ATA and relevant State Office of Inspector General reports relating to performance measurement issues for ATA. Additionally, we reviewed all available S/CT and DS/T/ATA guidance related to assessing program performance, including internal standard operating procedure documents and course evaluation instruments, as well as ATA authorizing legislation and related revisions. To further assess State’s reporting on international counterterrorism assistance, we reviewed DS/T/ATA’s annual reports on ATA for consistency and accuracy. As noted earlier, we found some errors with these reports, and have concerns about the data on training and nontraining activities. Although we describe the errors, we did not use these data in our analyses. 
To assess the reliability of the data on funding to recipient countries, we interviewed ATA officials and performed some cross-checks with other sources. We determined the data on funding were sufficiently reliable for the purposes of this report. As shown in table 2, program assistance for the top 10 recipients of ATA funding from fiscal years 2002 to 2007 ranged from about $11 million to about $78 million. The top 10 funding recipients received about 57 percent of ATA funding allocated for training and training-related activities over the 6-year period. ATA has established an in-country presence in each of the top six partner nations, including in-country program staff and permanent training facilities such as classrooms, computer labs, and shooting and demolition ranges. Afghanistan received the most funding over the 6-year period. According to DS/T/ATA officials, the scope of the in-country program in Afghanistan is more narrowly defined than those of other ATA programs; it focuses principally on training and monitoring a Presidential Protective Service. In addition to the individual named above, Albert H. Huntington, III, and David C. Maurer, Assistant Directors; Karen A. Deans; Matthew E. Helm; Elisabeth R. Helmer; Grace Lui; and Emily T. Rachman made key contributions to this report.
The Department of State's (State) Antiterrorism Assistance (ATA) program's objectives are to provide partner nations with counterterrorism training and equipment, improve bilateral ties, and increase respect for human rights. State's Office of the Coordinator for Counterterrorism (S/CT) provides policy guidance, and its Bureau of Diplomatic Security, Office of Antiterrorism Assistance (DS/T/ATA) manages program operations. GAO assessed (1) State's guidance for determining ATA priorities, (2) how State coordinates ATA with other counterterrorism programs, (3) the extent to which State established ATA program goals and measures, and (4) State's reporting on U.S. international counterterrorism assistance. To address these objectives, GAO reviewed State documents and met with cognizant officials in Washington, D.C., and four ATA program partner nations. S/CT provides minimal guidance to help prioritize ATA program recipients, and S/CT and DS/T/ATA do not systematically align ATA assistance with U.S. assessments of foreign partner counterterrorism needs. S/CT provides policy guidance to DS/T/ATA through quarterly meetings and a tiered list of priority countries, but the list does not provide guidance on country-specific counterterrorism program goals, objectives, or training priorities. S/CT and DS/T/ATA also did not consistently use country-specific needs assessments and program reviews to plan assistance. S/CT has established mechanisms to coordinate the ATA program with other U.S. international efforts to combat terrorism. S/CT holds interagency meetings with representatives from the Departments of State, Defense, Justice, and Treasury and other agencies, as well as ambassador-level regional strategic coordinating meetings. GAO did not find any significant duplication or overlap among the various U.S. international counterterrorism efforts.
State has made progress in establishing goals and intended outcomes for the ATA program, but S/CT and DS/T/ATA do not systematically assess the outcomes and, as a result, cannot determine the effectiveness of program assistance. For example, although sustainability is a principal focus, S/CT and DS/T/ATA have not set clear measures of sustainability or integrated sustainability into program planning. State reporting on U.S. counterterrorism assistance abroad has been incomplete and inaccurate. S/CT has not provided a congressionally mandated annual report to Congress on U.S. government-wide assistance related to combating international terrorism since 1996. Since 1996, S/CT has submitted to Congress only annual reports on the ATA program. However, these reports contained inaccurate program information, such as the number of students trained and courses offered. Additionally, the reports lacked comprehensive information on the results of program assistance that would be useful to Congress.
CMS’s method of adjusting payments to MA plans to reflect beneficiary health status has changed over time. Prior to 2000, CMS adjusted MA payments based only on beneficiary demographic data. From 2000 to 2003, CMS adjusted MA payments using a model that was based on a beneficiary’s demographic characteristics and principal inpatient diagnosis. In 2004, CMS began adjusting payments to MA plans based on the CMS-HCC model. HCCs, which represent major medical conditions, are groups of medical diagnoses where related groups of diagnoses are ranked based on disease severity and cost. The CMS-HCC model adjusts MA payments more accurately than previous models because it includes more comprehensive information on beneficiaries’ health status. The CMS-HCC risk adjustment model uses enrollment and claims data from Medicare FFS. The model uses beneficiary characteristic and diagnostic data from a base year to calculate each beneficiary’s risk scores for the following year. For example, CMS used MA beneficiary demographic and diagnostic data for 2007 to determine the risk scores used to adjust payments to MA plans in 2008. CMS estimated that 3.41 percent of 2010 MA beneficiary risk scores was attributable to differences in diagnostic coding between MA and Medicare FFS since 2007. To calculate this percentage, CMS estimated the annual difference in disease score growth between MA and Medicare FFS beneficiaries for three different groups of beneficiaries who were either enrolled in the same MA plan or in Medicare FFS from 2004 to 2005, 2005 to 2006, and 2006 to 2007. CMS accounted for differences in age and mortality when estimating the difference in disease score growth between MA and Medicare FFS beneficiaries for each period.
Then, CMS calculated the average of the three estimates. To apply this estimate to 2010 MA beneficiaries, CMS multiplied the average annual difference in risk score growth by its estimate of the average length of time that 2010 MA beneficiaries had been continuously enrolled in MA plans over the previous 3 years, and CMS multiplied this result by 81.8 percent, its estimate of the percentage of 2010 MA beneficiaries who were enrolled in an MA plan in 2009 and therefore were exposed to MA coding practices. CMS implemented this same adjustment of 3.41 percent in 2011 and has announced it will implement this same adjustment in 2012. We found that diagnostic coding differences exist between MA plans and Medicare FFS and that these differences had a substantial effect on payments to MA plans. We estimated that risk score growth due to coding differences over the previous 3 years was equivalent to $3.9 billion to $5.8 billion in payments to MA plans in 2010 before CMS’s adjustment for coding differences. Before CMS reduced 2010 MA beneficiary risk scores, we found that these scores were at least 4.8 percent, and perhaps as much as 7.1 percent, higher than they likely would have been if the same beneficiaries had been continuously enrolled in FFS, as a result of diagnostic coding differences (see fig. 1). Our estimates suggest that, after accounting for CMS’s 3.4 percent reduction to MA risk scores in 2010, MA risk scores were still too high by at least 1.4 percent, and perhaps as much as 3.7 percent, equivalent to between $1.2 billion and $3.1 billion in payments to MA plans. Our two estimates were based on different assumptions about the impact of coding differences over time. We found that the annual impact of coding differences for our study population increased from 2005 to 2008.
Based on this trend, we projected risk score growth for the period 2008 to 2010 and obtained the higher estimate, 7.1 percent, of the cumulative impact of differences in diagnostic coding between MA and FFS. However, coding differences may reach an upper bound when MA plans code diagnoses as comprehensively as possible, so we produced the lower estimate of 4.8 percent by assuming that the impact of coding differences on risk scores remained constant and was the same from 2008 to 2010 as it was from 2007 to 2008. Plans with networks may have greater potential to influence the diagnostic coding of their providers, relative to plans without networks. Specifically, when we restricted our analysis to MA beneficiaries in plans with provider networks (HMOs, PPOs, and plans offered by PSOs), our estimates of the cumulative effect of differences in diagnostic coding between MA and FFS increased to an average of 5.5 or 7.8 percent of MA beneficiary risk scores in 2010, depending on the projection assumption for 2008 to 2010. Altering the year by which MA coding patterns had “caught up” to FFS coding patterns, from our original assumption of 2007 to 2005, had little effect on our results. Specifically, we estimated the cumulative impact of coding differences from 2005 to 2010 and found that our estimates for all MA plans increased slightly to 5.3 or 7.6 percent, depending on the projection assumption from 2008 to 2010. Our analysis estimating the cumulative impact of coding differences on 2010 MA risk scores suggests that this cumulative impact is increasing. Specifically, we found that from 2005 to 2008, the impact of coding differences on MA risk scores increased over time (see app. I, table 1). Furthermore, CMS also found that the impact of coding differences increased from 2004 to 2008. While we did not have more recent data, the trend of coding differences through 2008 suggests that the impact of coding differences in 2011 and 2012 could be larger than in 2010.
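The mechanics of the two projection assumptions behind the lower and higher estimates can be sketched as follows. The annual impact figures, expressed as fractions of a risk score, are hypothetical placeholders rather than the actual values reported in table 1; only the projection logic follows the text.

```python
# Sketch of the two projection assumptions described in the text. The annual
# coding-difference impacts below are hypothetical placeholders, not the
# actual values reported in table 1.
annual_impact = {2006: 0.010, 2007: 0.013, 2008: 0.017}  # by payment year

# Lower-bound assumption: the 2007-to-2008 impact simply repeats in 2009-2010.
constant_projection = {2009: annual_impact[2008], 2010: annual_impact[2008]}

# Upper-bound assumption: the upward trend observed through 2008 continues
# linearly, adding the average yearly increase to each projected year.
slope = (annual_impact[2008] - annual_impact[2006]) / 2
trend_projection = {2009: annual_impact[2008] + slope,
                    2010: annual_impact[2008] + 2 * slope}

print(constant_projection, trend_projection)
```

The constant assumption yields the lower cumulative estimate; the continued-trend assumption yields the higher one.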
A CMS analysis provided to us showed annual risk score growth due to coding differences of 0.015 from 2004 to 2005, 0.015 from 2005 to 2006, 0.026 from 2006 to 2007, and 0.038 from 2007 to 2008. CMS’s estimate of the impact of coding differences on 2010 MA risk scores was smaller than our estimate due to the collective impact of three methodological differences described below. For its 2011 and 2012 adjustments, the agency continued to use the same estimate of the impact of coding differences it used in 2010, which likely resulted in excess payments to MA plans. Three major differences between our and CMS’s methodologies account for the differences in our 2010 estimates. First, CMS did not include data from 2008. CMS initially announced the adjustment for coding differences in its advance notice for 2010 payment before 2008 data were available. While 2008 data became available prior to the final announcement of the coding adjustment, CMS decided not to incorporate 2008 data into its final adjustment. In its announcement for 2010 payment, CMS explained that it took a conservative approach for the first year that it implemented the MA coding adjustment. Incorporating 2008 data would have increased the size of CMS’s final adjustment. Second, CMS did not take into account the increasing impact of coding differences over time. However, without 2008 data, the increasing trend of the annual impact of coding differences is less apparent, which supports the agency’s decision to use the average annual impact from 2004 to 2007 as a proxy for the annual impact from 2007 to 2010. Third, CMS only accounted for differences in age and mortality between the MA and FFS study populations. We found that accounting for additional beneficiary characteristics explained more variation in disease score growth, and consequently improved the accuracy of our risk score growth estimate. CMS did not update its estimate in 2011 and 2012 with more current data, even though data were available.
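Using the annual growth figures above, CMS's 2010 adjustment arithmetic can be sketched as follows. The mean enrollment duration and the average MA risk score are illustrative assumptions (the report does not give CMS's actual values), so the result only approximates, rather than reproduces, the 3.41 percent adjustment.

```python
# Sketch of CMS's 2010 coding-intensity adjustment as described in the text.
# The annual growth differences for 2004-05, 2005-06, and 2006-07 are from the
# CMS analysis quoted above; mean_years_enrolled and avg_ma_risk_score are
# illustrative assumptions only.
annual_differences = [0.015, 0.015, 0.026]    # MA-minus-FFS disease score growth
avg_annual_difference = sum(annual_differences) / len(annual_differences)

mean_years_enrolled = 2.2   # assumed mean continuous MA enrollment, 2007-2009
share_exposed = 0.818       # CMS: share of 2010 enrollees also in MA in 2009
avg_ma_risk_score = 1.08    # assumed average 2010 MA risk score

reduction = avg_annual_difference * mean_years_enrolled * share_exposed
adjustment_pct = 100 * reduction / avg_ma_risk_score
print(f"Approximate adjustment: {adjustment_pct:.2f}% of the average risk score")
```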
CMS did not include 2008 data in its 2010 estimate due to its desire to take a conservative approach for the first year it implemented a coding adjustment, and the agency did not update its estimate for 2011 or 2012 due to concerns about the many MA payment changes taking place. While maintaining the same level of adjustment for 2011 and 2012 maintains stability and predictability in MA payment rates, it also allows the accuracy of the adjustment to diminish in each year. Including more recent data would have improved the accuracy of CMS’s 2011 and 2012 estimates because more recent data are likely to be more representative of the year in which an adjustment was made. By not updating its estimate with more current data, CMS also did not account for the additional years of cumulative coding differences in its estimate: 4 years for 2011 (2007 to 2011) and 5 years for 2012 (2007 to 2012). While CMS stated in its announcement for 2011 payment that it would consider accounting for additional years of coding differences, CMS officials told us they were concerned about incorporating additional years using a linear methodology because it would ignore the possibility that MA plans may reach a limit at which they could no longer code diagnoses more comprehensively. We think it is unlikely that this limit has been reached. Given the financial incentives that MA plans have to ensure that all relevant diagnoses are coded, the fact that CMS’s 3.41 percent estimate is below our low estimate of 4.8 percent, and the increasing use of electronic health records to capture and maintain diagnostic information, the upper bound on coding growth is likely to lie beyond the 3 years of coding differences that CMS accounted for in its 2011 and 2012 estimates. In addition to not including more recent data, CMS did not incorporate the impact of the upward trend in coding differences on risk scores into its estimates for 2011 and 2012.
Based on the trend of increasing impact of coding differences through 2008, shown in both CMS’s and our analyses, we believe that the impact of coding differences on 2011 and 2012 MA risk scores is likely to be larger than it was on 2010 MA risk scores. In addition, less than 1.4 percent of MA enrollees in 2011 were enrolled in a plan without a network, suggesting that our slightly larger estimates based only on MA plans with networks are more accurate estimates of the impact of coding differences in 2011 and 2012. Because CMS continued to implement the same 3.41 percent adjustment for coding differences in 2011 and 2012, we believe the agency likely substantially underestimated the impact of coding differences in those years, resulting in excess payments to MA plans. Risk adjustment is important to ensure that payments to MA plans adequately account for differences in beneficiaries’ health status and to maintain plans’ financial incentive to enroll and care for beneficiaries regardless of their health status or the resources they are likely to consume. For CMS’s risk adjustment model to adjust payments to MA plans appropriately, diagnostic coding patterns must be similar among MA plans and Medicare FFS. We confirmed CMS’s finding that differences in diagnostic coding caused risk scores for MA beneficiaries to be higher than those for comparable Medicare FFS beneficiaries in 2010. This finding underscores the importance of continuing to adjust MA risk scores to account for coding differences and ensuring that these adjustments are as accurate as possible. If an adjustment for coding differences is too low, CMS would pay MA plans more than it would pay providers in Medicare FFS to provide health care for the same beneficiaries. We found that CMS’s 3.41 percent adjustment for coding differences in 2010 was too low, resulting in $1.2 billion to $3.1 billion in excess payments to MA plans for coding differences.
By not updating its methodology in 2011 or in 2012, CMS likely underestimated the impact of coding differences on MA risk scores to a greater extent in these years, resulting in excess payments to MA plans. If CMS does not update its methodology, excess payments due to differences in coding practices are likely to increase. To help ensure appropriate payments to MA plans, the Administrator of CMS should take steps to improve the accuracy of the adjustment made for differences in diagnostic coding practices between MA and Medicare FFS. Such steps could include, for example, accounting for additional beneficiary characteristics, including the most current data available, identifying and accounting for all years of coding differences that could affect the payment year for which an adjustment is made, and incorporating the trend of the impact of coding differences on risk scores. CMS provided written comments on a draft of this report, which are reprinted in appendix II. In its comments, CMS stated that it found our methodological approach and findings informative and suggested that we provide some additional information about how the coding differences between MA and FFS were calculated. In response, we added additional details to appendix I about the regression models used, the calculations used to generate our cumulative impact estimates, and the trend line used to generate our high estimate. CMS did not comment on our recommendation for executive action. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS, interested congressional committees, and others. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This appendix explains the scope and methodology that we used to address our objective of determining the extent to which differences, if any, in diagnostic coding between Medicare Advantage (MA) plans and Medicare fee-for-service (FFS) affected risk scores and payments to MA plans in 2010. To determine the extent to which differences, if any, in diagnostic coding between MA plans and Medicare FFS affected MA risk scores in 2010, we used Centers for Medicare & Medicaid Services (CMS) enrollment and risk score data from 2004 to 2008, the most current data available at the time of our analysis, and projected the estimated impact to 2010. For three periods (2005 to 2006, 2006 to 2007, and 2007 to 2008), we compared actual risk score growth for beneficiaries in our MA study population with the estimated risk score growth the beneficiaries would have had if they were enrolled in Medicare FFS. Risk scores for a given calendar year are based on beneficiaries’ diagnoses in the previous year, so we identified our study population based on enrollment data for 2004 through 2007 and analyzed risk scores for that population for 2005 through 2008. Our MA study population consisted of a retrospective cohort of MA beneficiaries. We included MA beneficiaries who were enrolled in health maintenance organization (HMO), preferred provider organization (PPO), and private fee-for-service (PFFS) plans as well as plans offered by provider-sponsored organizations (PSO). Specifically, we identified the cohort of MA beneficiaries who were enrolled in MA for all of 2007 and followed them back for the length of their continuous enrollment to 2004.
In addition, for beneficiaries who were enrolled in Medicare FFS and switched to MA in 2005, 2006, or 2007, we included data for 1 year of Medicare FFS enrollment immediately preceding their MA enrollment. Our MA study population included three types of beneficiaries, each of which we analyzed separately for each period: MA joiners—beneficiaries enrolled in Medicare FFS for the entire first year of each period and then enrolled in MA for all of the following year; MA plan stayers—beneficiaries enrolled in the same MA plan for the first and second year of the period; and MA plan switchers—beneficiaries enrolled in one MA plan for the first year of the period and a second MA plan in the following year. Our control population consisted of a retrospective cohort of FFS beneficiaries who were enrolled in FFS for all of 2007 and 2006. We followed these beneficiaries back to 2004 and included data for all years of continuous FFS enrollment. For both the study and control populations, we excluded data for years during which a beneficiary (1) was diagnosed with end-stage renal disease (ESRD) during the study year; (2) resided in a long-term care facility for more than 90 consecutive days; (3) died prior to July 1, 2008; (4) resided outside the 50 United States; Washington, D.C.; and Puerto Rico; or (5) moved to a new state or changed urban/rural status. We calculated the actual change in disease score—the portion of the risk score that is based on a beneficiary’s coded diagnoses—for the MA study population for the following three time periods (in payment years): 2005 to 2006, 2006 to 2007, and 2007 to 2008. To estimate the change in disease scores that would have occurred if those MA beneficiaries were enrolled continuously in FFS, we used our control population to estimate a regression model that described how beneficiary characteristics influenced change in disease score.
In the regression model we used change in disease score (year 2 - year 1) as our dependent variable and included age, sex, hierarchical condition categories (HCC), HCC interaction variables, Medicaid status, and whether the original reason for Medicare entitlement was disability as independent variables, as they are specified in the CMS-HCC model. We also included one urban and one rural variable for each of the 50 United States; Washington, D.C.; and Puerto Rico as independent variables to identify beneficiary residential location. Then we used these regression models and data on beneficiary characteristics for our MA study population to estimate the change in disease scores that would have occurred if those MA beneficiaries had been continuously enrolled in FFS. We identified the difference between the actual and estimated change in disease scores as attributable to coding differences between MA and FFS because the regression model accounted for other relevant factors affecting disease score growth (see table 1). To convert these estimates of disease score growth due to coding differences into estimates of the impact of coding differences on 2010 MA risk scores, we divided the disease score growth estimates by the average MA risk score in 2010. Because 2010 risk scores were not available at the time we conducted our analysis, we calculated the average MA community risk score for the most recent data available (risk score years 2005 through 2008) and projected the trend to 2010 to estimate the average 2010 MA risk score. We projected these estimates of the annual impact of coding differences on risk scores through 2010 using two different assumptions. One projection assumed that the annual impact of coding differences on risk scores was the same from 2008 to 2010 as it was from 2007 to 2008. The other projection assumed that the trend of increasing coding difference impact over 2005 to 2008 continued through 2010 (see fig. 2). 
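The core comparison, actual MA disease-score growth versus growth predicted from a model fit on the FFS control population, can be illustrated with a deliberately tiny stand-in for the full CMS-HCC regression. With purely categorical predictors, least squares on a saturated set of dummies reduces to per-cell means, which is what the sketch computes. All data, cell definitions, and helper names below are invented.

```python
# Illustrative only: invented data and a deliberately tiny "model."
# GAO fit a regression on a FFS control population; with purely
# categorical predictors, least squares on saturated dummies reduces
# to per-cell means, which is what fit_ffs_growth computes.

from collections import defaultdict

def fit_ffs_growth(control):
    """Mean year-over-year disease-score change per characteristic cell."""
    sums, counts = defaultdict(float), defaultdict(int)
    for cell, growth in control:
        sums[cell] += growth
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

def coding_difference(ma, model):
    """Average actual MA growth minus FFS-predicted growth."""
    actual = sum(g for _, g in ma) / len(ma)
    predicted = sum(model[cell] for cell, _ in ma) / len(ma)
    return actual - predicted

# Invented FFS control data: (characteristic cell, disease-score change).
control = [(("65-74", "F"), 0.02), (("65-74", "F"), 0.04),
           (("75+", "M"), 0.05), (("75+", "M"), 0.07)]
model = fit_ffs_growth(control)

# Invented MA study population: same cells, faster observed growth.
ma = [(("65-74", "F"), 0.06), (("75+", "M"), 0.10)]
excess = coding_difference(ma, model)
print(round(excess, 3))   # growth attributed to coding differences
```

In the actual analysis, the predicted growth comes from applying the fitted CMS-HCC-style regression to each MA beneficiary's characteristics rather than from two invented cells.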
To calculate the cumulative impact of coding differences on MA risk scores for 2007 through 2010, we summed the annual impact estimates for that period and adjusted each impact estimate to account for beneficiaries who disenrolled from the MA program before 2010. The result is the cumulative impact of coding differences from 2007 to 2010 on MA risk scores in 2010. We separately estimated the cumulative impact of coding differences from 2007 to 2010 on MA risk scores in 2010 for beneficiaries in MA plans with provider networks (HMOs, PPOs, and PSOs) because such plans may have a greater ability to affect provider coding patterns. We also performed an additional analysis to determine how sensitive our results were to our assumption that coding patterns for MA and FFS were similar in 2007. CMS believes that MA coding patterns may have been less comprehensive than FFS when the CMS-HCC model was implemented, and that coding pattern differences caused MA risk scores to grow faster than FFS; therefore, there may have been a period of “catch-up” before MA coding patterns became more comprehensive than FFS coding patterns. While the length of the “catch-up” period is not known, we evaluated the impact of assuming the actual “catch-up” period was shorter, and that MA and FFS coding patterns were similar in 2005. Specifically, we evaluated the impact of analyzing two additional years of coding differences by estimating the impact of coding differences from 2005 to 2010. To quantify the impact of both our and CMS’s estimates of coding differences on payments to MA plans in 2010, we used data on MA plan bids—plans’ proposed reimbursement rates for the average beneficiary— which are used to determine payments to MA plans. 
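The cumulation step can be written out directly. In the toy example below, both the annual impact estimates (as fractions of the 2010 average MA risk score) and the retention weights, which stand in for the disenrollment adjustment, are invented numbers.

```python
# Invented figures illustrating the cumulation of annual coding-difference
# impacts into a single 2010 risk-score effect. Retention weights stand in
# for the adjustment for beneficiaries who disenrolled before 2010.

annual_impact = {2007: 0.010, 2008: 0.013, 2009: 0.016, 2010: 0.019}
retention     = {2007: 0.70,  2008: 0.80,  2009: 0.90,  2010: 1.00}

cumulative_2010 = sum(annual_impact[y] * retention[y] for y in annual_impact)
print(round(cumulative_2010, 4))   # about 5.1 percent of 2010 MA risk scores
```

The two projection assumptions described above differ only in how the 2009 and 2010 entries of `annual_impact` would be chosen: repeating the 2007-to-2008 value, or extrapolating the 2005-to-2008 trend.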
We used these data to calculate total risk-adjusted payments for each MA plan before and after applying a coding adjustment, and then used the differences between these payment levels to estimate the percentage reduction in total projected payments to MA plans in 2010 resulting from adjustments for coding differences. Then we applied the percentage reduction in payments associated with each adjustment to the estimated total payments to MA plans in 2010 of $112.8 billion and accounted for reduced Medicare Part B premium payments received by CMS, which offset the reduction in MA payments (see table 2). The CMS data we analyzed on Medicare beneficiaries are collected from Medicare providers and MA plans. We assessed the reliability of the CMS data we used by interviewing officials responsible for using these data to determine MA payments, reviewing relevant documentation, and examining the data for obvious errors. We determined that the data were sufficiently reliable for the purposes of our study. In addition to the contact named above, Christine Brudevold, Assistant Director; Alison Binkowski; William Black; Andrew Johnson; Richard Lipinski; Elizabeth Morrison; and Merrile Sing made key contributions to this report.
The Centers for Medicare & Medicaid Services (CMS) pays plans in Medicare Advantage (MA)—the private plan alternative to Medicare fee-for-service (FFS)—a predetermined amount per beneficiary adjusted for health status. To make this adjustment, CMS calculates a risk score, a relative measure of expected health care costs, for each beneficiary. Risk scores should be the same among all beneficiaries with the same health conditions and demographic characteristics. Policymakers raised concerns that differences in diagnostic coding between MA plans and Medicare FFS could lead to inappropriately high MA risk scores and payments to MA plans. CMS began adjusting for coding differences in 2010. GAO (1) estimated the impact of any coding differences on MA risk scores and payments to plans in 2010 and (2) evaluated CMS’s methodology for estimating the impact of these differences in 2010, 2011, and 2012. To do this, GAO compared risk score growth for MA beneficiaries with an estimate of what risk score growth would have been for those beneficiaries if they were in Medicare FFS, and evaluated CMS’s methodology by assessing the data, study populations, study design, and beneficiary characteristics analyzed. GAO found that diagnostic coding differences exist between MA plans and Medicare FFS. Using data on beneficiary characteristics and regression analysis, GAO estimated that before CMS’s adjustment, 2010 MA beneficiary risk scores were at least 4.8 percent, and perhaps as much as 7.1 percent, higher than they likely would have been if the same beneficiaries had been continuously enrolled in FFS. The higher risk scores were equivalent to $3.9 billion to $5.8 billion in payments to MA plans. Both GAO and CMS found that the impact of coding differences increased over time. This trend suggests that the cumulative impact of coding differences in 2011 and 2012 could be larger than in 2010. 
In contrast to GAO, CMS estimated that 3.4 percent of 2010 MA beneficiary risk scores were attributable to coding differences between MA plans and Medicare FFS. CMS’s adjustment for this difference avoided $2.7 billion in excess payments to MA plans. CMS’s 2010 estimate differs from GAO’s in that CMS’s methodology did not include more current data, did not incorporate the trend of the impact of coding differences over time, and did not account for beneficiary characteristics other than age and mortality, such as sex, health status, Medicaid enrollment status, beneficiary residential location, and whether the original reason for Medicare entitlement was disability. CMS did not update its coding adjustment estimate in 2011 and 2012 to include more current data, to account for additional years of coding differences, or to incorporate the trend of the impact of coding differences. By continuing to implement the same 3.4 percent adjustment for coding differences in 2011 and 2012, CMS likely underestimated the impact of coding differences in 2011 and 2012, resulting in excess payments to MA plans. GAO’s findings underscore the importance of both CMS continuing to adjust risk scores to account for coding differences and ensuring that those adjustments are as complete and accurate as possible. In its comments, CMS stated that it found GAO’s findings informative. CMS did not comment on GAO’s recommendation. GAO recommends that CMS improve the accuracy of its MA risk score adjustments by taking steps such as incorporating adjustments for additional beneficiary characteristics, using the most current data available, accounting for all relevant years of coding differences, and incorporating the effect of coding difference trends.
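As the appendix methodology describes, the dollar figures above come from computing each plan's risk-adjusted payments before and after a coding adjustment, applying the resulting percentage reduction to total projected 2010 MA payments, and netting out the Part B premium offset. The sketch below shows only the arithmetic shape: the per-plan payment figures and the Part B offset are invented placeholders, while the $112.8 billion total is from the report.

```python
# Invented plan-level payments (before, after a coding adjustment), in dollars.
# GAO's actual calculation used MA plan bid data; only the arithmetic
# shape is illustrated here.
plans = [
    (1.00e9, 0.952e9),
    (2.50e9, 2.380e9),
    (0.80e9, 0.762e9),
]

before = sum(b for b, _ in plans)
after = sum(a for _, a in plans)
pct_reduction = (before - after) / before        # roughly 4.8 percent here

total_2010_payments = 112.8e9                    # projected total, from the report
gross_reduction = total_2010_payments * pct_reduction
part_b_premium_offset = 0.3e9                    # invented placeholder
net_reduction = gross_reduction - part_b_premium_offset
print(round(pct_reduction, 3), round(net_reduction / 1e9, 1))
```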
The criminal justice process—from arrest through correctional supervision—in any jurisdiction is generally complex and typically involves a number of participants, including police, prosecutors, defense attorneys, courts, and corrections agencies. Because of the large number of agencies involved, coordination among agencies is necessary for the process to function as efficiently as possible within the requirements of due process. That is, all involved agencies need to work together to ensure proper and efficient system operations, identify any problems that emerge, and decide how best to balance competing interests in resolving these problems. The unique structure and funding of D.C.’s criminal justice system, in which federal and D.C. jurisdictional boundaries and dollars are blended, creates additional coordination challenges. As shown in table 1, the D.C. criminal justice system consists of four D.C. agencies principally funded through local D.C. funds, six federal agencies, and three D.C. agencies principally funded through federal appropriations. According to most officials we interviewed and our own analyses, an overarching problem within the D.C. criminal justice system has been the lack of coordination among all participating agencies. Typically, federal and nonfederal criminal justice systems include the following stages: (1) arrest and booking, (2) charging, (3) initial court appearance, (4) release decision, (5) preliminary hearing, (6) indictment, (7) arraignment, (8) trial, (9) sentencing, and (10) correctional supervision. Most stages require the participation of several agencies that need to coordinate their activities for the system to operate efficiently while also meeting the requirements of due process. That is, all involved agencies need to work together to ensure that their roles and operations mesh well with those of other agencies and to identify any problems that emerge and decide how best to resolve them. 
Table 2 shows the stages in D.C.’s criminal justice system and the agencies that participate in each stage. As shown in the table, 7 of the 10 stages typically involve multiple agencies with different sources of funding, which results in different reporting structures and different oversight entities. For example, as many as six agencies—one D.C. (MPDC), three federal (the U.S. Attorney’s Office for the District of Columbia (USAO), U.S. Marshals Service, and D.C. Pretrial Services Agency), and two federally funded D.C. agencies (Superior Court and Public Defender Service (Defender Service))—need to coordinate their activities before the arrestee’s initial court appearance for a felony offense can occur. At the latter stages of the system, an offender’s sentencing and correctional supervision may require the participation of as many as eight agencies—one D.C.-funded agency (the Department of Corrections (DOC)), five federal agencies (USAO, Federal Bureau of Prisons (BOP), U.S. Marshals Service, U.S. Parole Commission, and the Court Services and Offender Supervision Agency (Court Services)), and two federally funded D.C. agencies (Superior Court and Defender Service). At any stage, the participation of other agencies might also be required. In addition, the reporting and funding structure for these participating agencies often differs. For example, USAO, the U.S. Marshals Service, BOP, and the U.S. Parole Commission ultimately report to the U.S. Attorney General and are funded by the appropriations subcommittee that funds the Department of Justice; MPDC and the Office of the Corporation Counsel (Corporation Counsel) ultimately report to the D.C. Mayor; and Superior Court, Defender Service, Pretrial Services, and Court Services are independent of both D.C. and the U.S. Department of Justice, submit their budgets to Congress, and are funded by the appropriations subcommittee for D.C. 
According to most officials we interviewed and our analyses, an overarching problem within the D.C. criminal justice system has been the lack of coordination among all participating agencies. Agency officials pointed to several major problem areas, each the subject of recent studies that have identified coordination issues. The areas included scheduling of court cases, which has resulted in the inefficient use of officer, attorney, and court personnel time; information technology, which uses more than 70 different systems that are not linked to facilitate the sharing of information; correctional supervision, in which poor communication among agencies has led to monitoring lapses with tragic consequences; and forensics, in which the sharing of responsibilities among agencies increases the possibility of evidentiary mishaps resulting from lapses in coordination. The scheduling of court cases has had adverse effects on several criminal justice agencies involved in case processing. As shown in table 2, MPDC, prosecutors, Defender Service, U.S. Marshals Service, Pretrial Services, Court Services, and Superior Court could be involved in the court-related processing of a case from the preliminary hearing to the trial and subsequent sentencing. Representatives from several of these agencies are typically required to be present at court trials and hearings. Because specific court times are not established, individuals who are expected to appear in court are required to be present when the court first convenes in the morning. These individuals might be required to wait at the courthouse for some period of time for the case to be called, if (1) more trials or hearings are scheduled than can be conducted, (2) any one of the involved individuals is not present or prepared, or (3) the case is continued for any number of reasons. 
MPDC recorded that during calendar year 1999 its officers spent 118 full-time staff years in court-related activities such as preliminary hearings and trials. While MPDC officials stated that officers often spent many hours at court waiting for cases to be called, data were not available on the proportion of the 118 full-time staff years that were attributable to actual court time compared to the time spent waiting for cases to be called, including cases that were rescheduled. CJCC selected the Council for Court Excellence and the Justice Management Institute to conduct a detailed study of criminal justice resource management issues, with particular emphasis on court case processing and the utilization of police resources. In its March 2001 report, the Council for Court Excellence and the Justice Management Institute concluded that major changes were needed in the D.C. criminal justice caseflow system to improve the system’s efficiency. Among other things, the report found inefficiencies and counterproductive policies at every stage in case processing. The report also concluded that little use was being made of modern technology in the arrest, booking, papering, and court process that could improve system operations. The Council for Court Excellence and the Justice Management Institute identified priority areas for system improvements, such as redesigning court procedures in misdemeanor cases, improving the methods used to process cases from arrest through initial court appearance by automating the involved processes, and improving the systems used to notify police officers about court dates. 
Congress provided $1 million for fiscal year 2001 to implement some of the recommended case management initiatives, such as a differentiated case management system for misdemeanors and traffic offenses, the papering pilot project between MPDC and Corporation Counsel, and a mental health pilot treatment project for appropriate, nonviolent pretrial release defendants in coordination with the D.C. Commission on Mental Health Services. D.C.’s criminal justice system is complex, with more than 70 different information systems in use among the various participating agencies. These systems are not linked in a manner that permits timely and useful information sharing among disparate agencies. For example, it is very difficult to obtain data to determine the annual amount of time MPDC officers spend meeting with prosecutors about cases in which prosecutors eventually decide not to file charges against the arrestee. We determined that such an analysis would require data about: (1) MPDC arrests, (2) MPDC officer time and attendance, (3) charges filed by USAO or Corporation Counsel, and (4) Superior Court case dispositions. Such data are currently maintained in separate systems with no reliable tracking number that could be used to link the information in each system for a specific case and no systematic exchange of information. This lack of shared information diminishes the effectiveness of the entire criminal justice system. For example, according to a CJCC official, there is no immediate way for an arresting officer to determine whether an arrestee is on parole or for an arrestee’s community supervision officer to know that the parolee had been arrested. Such information could affect both the charging decision and the decision whether or not to release an arrestee from an MPDC holding cell. In 1999, CJCC attempted to address problems with D.C. 
criminal justice information systems by preparing, among other things, an Information Technology Interagency Agreement that was adopted by CJCC members. The agreement recognized the need for immediate improvement of information technology in the D.C. criminal justice system and established the Information Technology Advisory Committee (ITAC) to serve as the governing body for justice information system development. ITAC recognized that it was difficult for a single agency involved in the criminal justice system to access information systems maintained by other agencies, and pursued developing a system that would allow an agency to share information with all other criminal justice agencies, while maintaining control over its own system. ITAC devised a District of Columbia Justice Information System (JUSTIS). In July 2000, CJCC partnered with the D.C. Office of the Chief Technology Officer in contracting with a consulting firm to design JUSTIS based on modern dedicated intranet and Web browser technology. When completed, JUSTIS is to allow each agency to maintain its current information system, while allowing the agency to access selected data from other criminal justice agencies. Effective correctional supervision, which includes probation, incarceration, and post-prison parole or supervised release for convicted defendants, requires effective coordination among participating agencies. In D.C., the stage of the criminal justice system referred to as correctional supervision involves several agencies, including: (1) Superior Court, which sentences convicted defendants and determines whether to revoke a person’s release on community supervision; (2) Court Services, which monitors offenders on community supervision; (3) DOC, which primarily supervises misdemeanants sentenced to D.C. Jail or one of several halfway houses in D.C.; (4) BOP, which supervises felons incarcerated in federal prisons; (5) the U.S. 
Parole Commission, which determines the prison release date and conditions of release for D.C. inmates eligible for parole; and (6) the U.S. Marshals Service, which transports prisoners. Gaps in coordination among agencies may lead to tragic consequences, such as those that occurred in the case of Leo Gonzales Wright, who committed two violent offenses while under the supervision of D.C.’s criminal justice system. Wright, who was paroled in 1993 after serving nearly 17 years of a 15-to-60 year sentence for armed robbery and second degree murder, was arrested in May 1995 on automobile theft charges, which were later dismissed. In June 1995, Wright was arrested for possession with intent to distribute cocaine. However, he was released pending trial for the drug arrest, due in part to miscommunication among agencies. Wright subsequently committed two carjackings, murdering one of his victims. He was convicted in U.S. District Court for the District of Columbia and is currently serving a life without parole sentence in federal prison at Leavenworth, KS. The outcry over the Wright case resulted in two studies, including a comprehensive review of the processing of Wright’s case prepared for the U.S. Attorney General by the Corrections Trustee in October 1999. The report included 24 recommendations to help ensure that instances similar to the Wright case do not occur. In July 2000, the Corrections Trustee issued a progress report on the implementation of recommendations from the October 1999 report. According to the Corrections Trustee, while not all recommendations in the October 1999 report have been fully implemented, progress has been made in addressing several of them. For example, with funds provided by the Corrections Trustee, DOC has purchased a new jail-management information system for tracking inmates and implemented a new policy on escorted inmate trips. 
In addition, in January 2000, the Corrections Trustee began convening monthly meetings of an Interagency Detention Work Group, whose membership largely parallels that of CJCC. The group and its six subcommittees have focused on such issues as the convicted felon designation and transfer process, and parole and halfway house processing. In addition to the studies and the actions of the Corrections Trustee, CJCC and Court Services are addressing the monitoring and supervision of offenders. CJCC has begun to address the issues of halfway house management and programs that monitor offenders. Court Services is developing a system in which sanctions are imposed whenever individuals violate conditions of probation or parole. Forensics is another area where lack of coordination can have adverse effects. D.C. does not have a comprehensive forensic laboratory to complete forensic analysis for use by police and prosecutors. Instead, MPDC currently uses other organizations such as the FBI, the Drug Enforcement Administration, the Bureau of Alcohol, Tobacco and Firearms, and a private laboratory to conduct much of its forensic work. MPDC performs some forensic functions such as crime scene response, firearms testing, and latent print analysis. The Office of the Chief Medical Examiner, a D.C. agency, performs autopsies and certain toxicological tests, such as the testing for the presence of drugs in the body. Coordination among agencies is particularly important because several organizations may be involved in handling and analyzing a piece of evidence. For example, if MPDC finds a gun with a bloody latent fingerprint at a crime scene, the gun would typically need to be examined by both MPDC and the FBI. In order to complete the analysis, multiple forensic disciplines (e.g., DNA or firearm examiners) would need to examine the gun. 
If the various forensic tests were coordinated in a multidisciplinary approach, forensic examiners would be able to obtain the maximum information from the evidence without the possibility of contaminating it. Such contamination could adversely affect the adjudication and successful resolution of a criminal investigation. In April 2000, the National Institute of Justice (NIJ) issued a report on the D.C. criminal justice system’s forensic capabilities. The report concluded that D.C. had limited forensic capacity and that limitations in MPDC prevented the effective collection, storage, and processing of crime scene evidence, which ultimately compromised the potential for successful resolution of cases. NIJ-identified deficiencies included, among other things: lengthy delays in processing evidence; ineffective communications in the collection, processing, and tracking of evidence from the crime scene; and ineffective communications between forensic case examiners and prosecutors. The NIJ report supported the development of a centralized forensic laboratory that would be shared by MPDC and the D.C. Office of the Chief Medical Examiner. The report did not examine the costs to build a comprehensive forensic laboratory. In his fiscal year 2002 proposed budget, the Mayor has allocated $7.5 million for the development of a forensics laboratory that is designed to be a state-of-the-art, full-service crime laboratory, medical examiner/morgue facility, and public health laboratory that meets all applicable National Lab Standards. We did not independently evaluate the costs and benefits of a comprehensive forensic laboratory. However, such a facility could potentially improve coordination by housing all forensic functions in one location, eliminating the need to transport evidence among multiple, dispersed locations. 
A principal area where D.C.’s unique structure has led to coordination problems is case processing that occurs from the time of arrest through initial court appearance. As shown in table 2, as many as six agencies need to coordinate before an arrested person’s initial court appearance for a felony offense can occur. However, we identified several aspects of the current process where a lack of coordination posed problems. For example, unlike many other major metropolitan jurisdictions, prosecutors in D.C. require an officer who is knowledgeable about the facts of the arrest to meet personally with them before they determine whether to formally charge an arrestee with a felony or misdemeanor crime. This process is called papering. During calendar year 1999, papering required the equivalent of 23 full-time officers devoted solely to these appearances, ultimately reducing the number of officers available for patrol duty by an equal amount. Efforts in 1998 and 1999 to revise the papering process failed in part because the costs and benefits of the changes under consideration were perceived by one or more participating agencies to be unevenly distributed. We focused our review on offenses prosecuted by the USAO because during 1999 they accounted for over 85 percent of MPDC officer hours expended on papering. USAO’s requirement that MPDC officers personally meet with prosecutors in order to make a charging decision appears to be unusual, particularly for misdemeanors. A 1997 Booz-Allen and Hamilton survey found that in 30 of 38 responding jurisdictions (51 were surveyed), police officers were not required to meet with prosecutors until court (i.e., trial), and in 3 cities officers were not required to appear in person until the preliminary hearing. In addition, we reviewed the charging processes in Philadelphia and Boston. Neither of these cities required face-to-face meetings with prosecutors for processing most cases. 
According to USAO officials, the current papering process is critical for USAO to make an initial charging decision correctly. Both USAO and MPDC officials said that the paperwork submitted to USAO for charging decisions has been of uneven quality. In the past decade, several attempts have been made to change the initial stages of case processing in D.C. These efforts—which were made by MPDC, Corporation Counsel, and USAO, in conjunction with consulting firms—involved projects in the areas of night papering, night court, and officerless papering. However, the involved agencies never reached agreement on all components of the projects, and each of the projects was ultimately suspended. The Chief of MPDC has publicly advocated the establishment of some type of arrangement for making charging decisions during the evening and/or night police shifts.

Night Papering and Night Court

Currently, both USAO and Corporation Counsel are only open to paper cases during typical workday hours, that is, generally from about 8:00 a.m. to 5:00 p.m., Monday through Saturday. Night papering could permit officers on evening and night shifts to generally present their paperwork to prosecutors during their shifts. Night court refers to conducting certain court proceedings, such as initial court appearance, during a late evening or night shift. Night papering would require USAO and Corporation Counsel charging attorneys to work evening hours, and night court would involve a much broader commitment of D.C. Superior Court resources as well as the participation of other agencies. Officerless papering would not require an officer to appear in person before the prosecutor, and provisions could be made for the prosecutor to contact the officer to clarify issues, as needed. In March 2001, MPDC and Corporation Counsel began an officerless papering pilot program for 17 minor offenses prosecuted by Corporation Counsel. 
In the absence of an automated system for completing and transmitting the forms required for documenting arrests and making charging decisions, simple entry errors resulting from entering the same information multiple times can hamper the initial stages of case processing. USAO has cited such problems as one reason that officers should be required to meet face to face with prosecutors for papering decisions. To the extent that the police do not have a reliable process for reviewing and ensuring the completeness and accuracy of the paperwork submitted to prosecutors, USAO is likely to continue to resist efforts to institute officerless papering. Even if these issues were to be successfully addressed, the distribution of costs among the participants in any revised system would still likely pose an obstacle to change. The costs of the current system of processing cases from arrest through initial court appearance are borne principally by MPDC—primarily a locally funded D.C. agency—not USAO or D.C. Superior Court, both of which are federally funded. On the other hand, instituting night papering would likely reduce MPDC’s costs, while increasing the costs borne by USAO, Corporation Counsel, and/or D.C. Superior Court, depending upon the approach taken. CJCC is the primary venue in which D.C. criminal justice agencies can identify and address interagency coordination issues. Its funding and staffing have been modest—about $300,000 annually with four staff. CJCC has functioned as an independent entity whose members represent the major organizations within the D.C. criminal justice system. According to many criminal justice officials we spoke with, during its nearly 3-year existence, CJCC has had some success in improving agency coordination, mostly in areas where all participants stood to gain from a coordinated approach to a problem. 
In problem areas where a solution would help one agency possibly at the expense of another, CJCC has been less successful mainly because it lacked the authority to compel agencies to address the issues. However, on balance, CJCC has provided a valuable independent forum for discussions of issues affecting multiple agencies. The D.C. Control Board did not fund CJCC for fiscal year 2001, and CJCC’s sole remaining staff member is funded by a grant. It is not known whether CJCC will continue to formally exist, and if it exists, how it will be funded, whether it will have staff, and whether it will remain independent or under the umbrella of another organization, such as the D.C. Mayor’s office. Recently, the Mayor included $169,000 in his fiscal year 2002 proposed budget to fund CJCC. While we welcome the Mayor’s support for CJCC, we believe that for CJCC to be most successful it must be viewed as independent by participating agencies. CJCC has not been required to formally report on its activities, including areas of focus, successes, and areas of continuing discussion and disagreement. The transparency provided by an annual report would help to spotlight areas of accomplishment and continuing disagreement and could assist with oversight by those responsible for funding individual CJCC members. As of November 2000, CJCC and other agencies involved in the D.C. criminal justice system reported 93 initiatives for improving the operation of the system. Most of these initiatives were ongoing; consequently, their impact had not yet been evaluated. However, we found numerous instances where participating agencies did not agree on an initiative’s goals, status, starting date, participating agencies, or results to date. This lack of agreement underscores a lack of coordination among the participating agencies that could reduce the effectiveness of these initiatives. Every criminal justice system faces coordination challenges. 
However, the unique structure and funding of the D.C. criminal justice system, in which federal and D.C. jurisdictional boundaries and dollars are blended, creates additional challenges. CJCC has played a useful role in addressing such coordination challenges, especially in areas where agencies perceived a common interest. However, CJCC’s uncertain future could leave D.C. without benefit of an independent entity for coordinating the activities of its unique criminal justice system. Funding CJCC through any participating agency diminishes its stature as an independent entity in the eyes of a number of CJCC’s member agencies, reducing their willingness to participate. Without a requirement to report successes and areas of continuing discussion and disagreement to each agency’s funding source, CJCC’s activities, achievements, and areas of disagreement have generally been known only to its participating agencies. This has created little incentive to coordinate for the common good, and all too often agencies have simply “agreed to disagree” without taking action. Furthermore, without a meaningful role in cataloging multiagency initiatives, CJCC has been unable to ensure that criminal justice initiatives are coordinated among all affected agencies to help eliminate duplicative efforts and maximize their effectiveness. In our March 30, 2001, report, we recommended that Congress consider: Funding an independent CJCC—with its own director and staff—to help coordinate the operations of the D.C. criminal justice system. Congressional funding ensures that CJCC will retain its identity as an independent body with no formal organizational or funding link to any of its participating members. Requiring CJCC to report annually to Congress, the Attorney General, and the D.C. Mayor on its activities, achievements, and issues not yet resolved and why.
This testimony summarizes a March 2001 report (GAO-01-187).
Geospatial information describes entities or phenomena that can be referenced to specific locations relative to the Earth’s surface. For example, entities such as houses, rivers, road intersections, power plants, and national parks can all be identified by their locations. In addition, phenomena such as wildfires, the spread of the West Nile virus, and the thinning of trees due to acid rain can also be identified by their geographic locations. A geographic information system (GIS) is a system of computer software, hardware, and data used to capture, store, manipulate, analyze, and graphically present a potentially wide array of geospatial information. The primary function of a GIS is to link multiple sets of geospatial data and display the combined information as maps with many different layers of information. Each layer of a GIS map represents a particular “theme” or feature, and one layer could be derived from a data source completely different from the others. Typical geospatial data layers (themes) include cadastral—describing location, ownership, and other information about real property; digital orthoimagery—containing images of the Earth’s surface that have the geometric characteristics of a map and image qualities of a photograph; and hydrography—describing water features such as lakes, ponds, streams and rivers, canals, oceans, and coastlines. As long as standard processes and formats have been used to facilitate integration, each of these themes could be based on data originally collected and maintained by a separate organization. Analyzing this layered information as an integrated whole can significantly aid decision makers in considering complex choices, such as where to locate a new department of motor vehicles building to best serve the greatest number of citizens. Figure 1 portrays the concept of data themes in a GIS. 
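The layering concept described above can be illustrated with a minimal sketch. All data, field names, and locations below are hypothetical, chosen only to show how separately maintained themes can be joined at a common location; a real GIS uses coordinate geometry rather than grid cells.

```python
# Minimal illustration of GIS "theme" layering: each layer maps a
# location (here simplified to a grid cell) to attributes that could
# be maintained by a different organization.  Hypothetical data.

cadastral = {                      # ownership theme
    (3, 4): {"parcel": "P-101", "owner": "City of Springfield"},
    (3, 5): {"parcel": "P-102", "owner": "J. Smith"},
}
hydrography = {                    # water-features theme
    (3, 5): {"feature": "stream"},
}
zoning = {                         # local-planning theme
    (3, 4): {"zone": "public"},
    (3, 5): {"zone": "residential"},
}

def query(location, layers):
    """Overlay every layer at one location, as a GIS map join would."""
    merged = {}
    for name, layer in layers.items():
        if location in layer:
            merged[name] = layer[location]
    return merged

layers = {"cadastral": cadastral, "hydrography": hydrography, "zoning": zoning}
site = query((3, 5), layers)   # combines parcel, stream, and zoning data
```

Because each theme is keyed by location rather than by owner agency, the join works regardless of which organization originally collected each layer, which is the integration property the standard processes and formats mentioned above are meant to preserve.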
Federal, state, and local governments and the private sector rely on geographic information systems to provide vital services to their customers. These various entities independently provide information and services, including maintaining land records for federal and nonfederal lands, property taxation, local planning, subdivision control and zoning, and direct delivery of many other public services. These entities also use geographic information and geographic information systems to facilitate and support delivery of these services. Many federal departments and agencies use GIS technology to help carry out their primary missions. For example, the Department of Health and Human Services uses GIS technology for a variety of public health functions, such as reporting the results of national health surveys; the Census Bureau maintains the Topologically Integrated Geographic Encoding and Referencing (TIGER) database to support its mission to conduct the decennial census and other censuses and surveys; and the Environmental Protection Agency maintains a variety of databases with information about the quality of air, water, and land in the United States. State governments also rely on geospatial information to provide information and services to their citizens. For example, the state of New York hosts a Web site to provide citizens with a gateway to state government services at http://www.nysegov.com/map-NY.cfm. Using this Web site, citizens can access information about state agencies and their services, locate county boundaries and services, and locate major state highways. Many other states, such as Oregon (http://www.gis.state.or.us/), Virginia (http://www.vgin.virginia.gov/index.html), and Alaska (http://www.asgdc.state.ak.us/), provide similar Web sites and services. Local governments use GISs for a variety of activities. 
For example, local fire departments can use geographic information systems to determine the quickest and most efficient route from a firehouse to a specific location, taking into account changing traffic patterns that occur at various times of day. Additionally, according to a March 2002 Gartner report, New York City’s GIS was pivotal in the rescue, response, and recovery efforts after the September 11, 2001, terrorist attacks. The city’s GIS provided real-time data on the area around the World Trade Center so that the mayor, governor, federal officials, and emergency response agencies could implement critical rescue, response, and recovery activities. Local governments often possess more recent and higher resolution geospatial data than the federal government, and in many cases private-sector companies collect these data under contract to local government agencies. The private sector plays an important role in support of government GIS activities because it captures and maintains a wealth of geospatial data and develops GIS software. Private companies provide services such as aerial photography, digital topographic mapping, digital orthophotography, and digital elevation modeling to produce geospatial data sets that are designed to meet the needs of governmental organizations. Figure 2 provides a conceptual summary of the many entities—including federal, state, and local governments and the private sector—that may be involved in geospatial data collection and processing relative to a single geographic location or event. Figure 3 shows the multiple data sets that have been collected by different agencies at federal, state, and local levels to capture the location of a segment of roadway in Texas. As we testified last year, the federal government has for many years taken steps to coordinate geospatial activities, both within and outside of the federal government. 
These include the issuance of OMB Circular A-16 and Executive Order 12906 and the enactment of the E-Government Act of 2002. In addition to its responsibilities for geospatial information under the E-Government Act, OMB has specific oversight responsibilities regarding federal information technology (IT) systems and acquisition activities—including GIS—to help ensure their efficient and effective use. These responsibilities are outlined in the Clinger-Cohen Act of 1996, the Paperwork Reduction Act of 1995, and OMB Circular A-11. Table 1 provides a brief summary of federal guidance related to information technology and geospatial information. In addition to activities associated with federal legislation and guidance, OMB’s Administrator, Office of Electronic Government and Information Technology, testified before the Subcommittee last June that the strategic management of geospatial assets would be accomplished, in part, through development of a robust and mature federal enterprise architecture. In 2001, the lack of a federal enterprise architecture was cited by OMB’s E-Government Task Force as a barrier to the success of the administration’s e-government initiatives. In response, OMB began developing the Federal Enterprise Architecture (FEA), and over the last 2 years it has released various versions of all but one of the five FEA reference models. According to OMB, the purpose of the FEA, among other things, is to provide a common frame of reference or taxonomy for agencies’ individual enterprise architecture efforts and their planned and ongoing investment activities. Costs associated with collecting and maintaining geographically referenced data and systems for the federal government are significant. 
Specific examples of the costs of collecting and maintaining federal geospatial data and information systems include FEMA’s Multi-Hazard Flood Map Modernization Program—estimated to cost $1 billion over the next 5 years; Census’s TIGER database—modernization is estimated to have cost over $170 million between 2001 and 2004; Agriculture’s Geospatial Database—acquisition and development reportedly cost over $130 million; Interior’s National Map—development is estimated to cost about $88 million through 2008; the Department of the Navy’s Primary Oceanographic Prediction and Oceanographic Information systems—development, modernization, and operation were estimated to cost about $32 million in fiscal year 2003; and NOAA’s Coastal Survey—expenditures for geospatial data are estimated at about $30 million annually. In addition to the costs for individual agency GISs and data, the aggregated annual cost of collecting and maintaining geospatial data for all NSDI-related data themes and systems is estimated to be substantial. According to a recent estimate by the National States Geographic Information Council (NSGIC), the cost to collect detailed data for five key data layers of the NSDI—parcel, critical infrastructure, orthoimagery, elevation, and roads—is about $6.6 billion. The estimate assumes that the data development will be coordinated among federal, state, and local government agencies, and the council cautions that without effective coordination, the costs could be far higher. Both Executive Order 12906 and OMB Circular A-16 charge FGDC with responsibilities that support coordination of federal GIS investments. 
Specifically, the committee is designated the lead federal executive body with responsibilities including (1) promoting and guiding coordination among federal, state, tribal, and local government agencies, academia, and the private sector in the collection, production, sharing, and use of spatial information and the implementation of the NSDI; and (2) preparing and maintaining a strategic plan for developing and implementing the NSDI. Regarding coordination with federal and other entities and development of the NSDI, FGDC has taken a variety of actions. It established a committee structure with participation from federal agencies and key nonfederal organizations such as NSGIC and the National Association of Counties, and established several programs to help ensure greater participation from federal agencies as well as other government entities. In addition, key actions taken by FGDC to develop the NSDI include implementing the National Geospatial Data Clearinghouse and establishing a framework of data themes. In addition to FGDC’s programs, two other efforts are under way that aim to coordinate and consolidate geospatial information and resources across the federal government—the Geospatial One-Stop initiative and The National Map project. Geospatial One-Stop is intended to accelerate the development and implementation of the NSDI to provide federal and state agencies with a single point of access to map-related data, which in turn will enable consolidation of redundant geospatial data. OMB selected Geospatial One-Stop as one of its e-government initiatives, in part to support development of an inventory of national geospatial assets, and also to support reducing redundancies in federal geospatial assets. In addition, the portal includes a “marketplace” that provides information on planned and ongoing geospatial acquisitions for use by agencies that are considering acquiring new data to facilitate coordination of existing and planned acquisitions. 
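A clearinghouse entry of the kind FGDC maintains is, at its core, a searchable metadata record: agencies publish descriptions of their data sets, and others search those descriptions before acquiring new data. The sketch below is illustrative only; the field names are a simplified subset, not the actual FGDC metadata standard, and the records are hypothetical.

```python
# Sketch of a metadata clearinghouse: agencies publish records
# describing their geospatial data sets, and others search by data
# theme before collecting new data.  Field names are a simplified,
# hypothetical subset of a real FGDC-style metadata record.

clearinghouse = []

def publish(record):
    """Accept a record only if its minimal metadata is complete."""
    required = {"title", "originator", "theme", "abstract"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"incomplete metadata, missing: {missing}")
    clearinghouse.append(record)

def search(theme):
    """Find existing data sets for a theme before acquiring new ones."""
    return [r for r in clearinghouse if r["theme"] == theme]

publish({"title": "Statewide orthoimagery (hypothetical)",
         "originator": "Example State GIS Council",
         "theme": "orthoimagery",
         "abstract": "1-foot resolution aerial imagery."})
hits = search("orthoimagery")   # one existing data set found
```

The design point is that the clearinghouse only reduces redundancy if publishing is complete and consistent; a record missing required metadata is rejected here precisely because undiscoverable data sets cannot prevent duplicate acquisitions.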
The National Map is being developed and implemented by the U.S. Geological Survey (USGS) as a database to provide core geospatial data about the United States and its territories, similar to the data traditionally provided on USGS paper topographic maps. USGS relies heavily on partnerships with other federal agencies as well as states, localities, and the private sector to maintain the accuracy and currency of the national core geospatial data set as represented in The National Map. According to Interior’s Assistant Secretary—Policy, Management, and Budget, FGDC, Geospatial One-Stop, and The National Map are coordinating their activities in several areas, including developing standards and framework data layers for the NSDI, increasing the effectiveness of the clearinghouse, and making information about existing and planned data acquisitions available through the Geospatial One-Stop Web site. Regarding preparing and maintaining a strategic plan for developing and implementing the NSDI, in 1994, FGDC issued a strategic plan that described actions federal agencies and others could take to develop the NSDI, such as establishing data themes and standards, training programs, and partnerships to promote coordination and data sharing. In April 1997, FGDC published an updated plan—with input from many organizations and individuals having a stake in developing the NSDI—that defined strategic goals and objectives to support the vision of the NSDI as defined in the 1994 plan. No further updates have been made. As the current national geospatial strategy document, FGDC’s 1997 plan is out of date. First, it does not reflect the recent broadened use of geospatial data and systems by many government agencies. Second, it does not take into account the increased importance that has been placed on homeland security in the wake of the September 11, 2001, attacks. 
Geospatial data and systems have an essential role to play in supporting decision makers and emergency responders in protecting critical infrastructure and responding to threats. Finally, significant governmentwide geospatial efforts—including the Geospatial One-Stop and National Map projects—did not exist in 1997, and are therefore not reflected in the strategic plan. In addition to being out of date, the 1997 document lacks important elements that should be included in an effective strategic plan. According to the Government Performance and Results Act of 1993, such plans should include a set of outcome-related strategic goals, a description of how those goals are to be achieved, and an identification of risk factors that could significantly affect their achievement. The plans should also include performance goals and measures, with resources needed to achieve them, as well as a description of the processes to be used to measure progress. While the 1997 NSDI plan contains a vision statement and goals and objectives, it does not include other essential elements. These missing elements include (1) a set of outcome-related goals, with actions to achieve those goals, that would bring together the various actions being taken to coordinate geospatial assets and achieve the vision of the NSDI; (2) key risk factors that could significantly affect the achievement of the goals and objectives; and (3) performance goals and measures to help ensure that the steps being taken result in the development of the National Spatial Data Infrastructure. FGDC officials, in consultation with the executive director of Geospatial One-Stop, USGS, and participating FGDC member agencies, have initiated a “future directions” effort to begin the process of updating their existing plan. However, this activity is just beginning, and there is no time frame as to when a new strategy will be in place. 
Until a comprehensive national strategy is in place, the current state of ineffective coordination is likely to remain, and the vision of the NSDI will likely not be fully realized. OMB Circular A-16 directs federal agencies to coordinate their investments to facilitate building the NSDI. The circular lists 11 specific responsibilities for federal agencies, including (1) preparing, maintaining, publishing, and implementing a strategy for advancing geographic information and related spatial data activities appropriate to their mission, in support of the NSDI; (2) using FGDC standards, including metadata and other appropriate standards, documenting spatial data with relevant metadata; and (3) making metadata available online through a registered NSDI-compatible clearinghouse site. In certain cases, federal agencies have taken steps to coordinate their specific geospatial activities. For example, the Forest Service and Bureau of Land Management collaborated to develop the National Integrated Land System (NILS), which is intended to provide land managers with software tools for the collection, management, and sharing of survey data, cadastral data, and land records information. At an estimated cost of about $34 million, a single GIS—NILS—was developed that can accommodate the shared geospatial needs of both agencies, eliminating the need for each agency to develop a separate system. However, despite specific examples of coordination such as this, agencies have not consistently complied with OMB’s broader geospatial coordination requirements. For example, only 10 of 17 agencies that provided reports to FGDC reported having published geospatial strategies as required by Circular A-16. In addition, agencies’ spatial data holdings are generally not compliant with FGDC standards. Specifically, the annual report shows that, of the 17 agencies that provided reports to FGDC, only 4 reported that their spatial data holdings were compliant with FGDC standards. 
Ten agencies reported being partially compliant, and 3 agencies provided answers that were unclear as to whether they were compliant. Finally, regarding the requirement for agencies to post their data to the National Geospatial Data Clearinghouse, only 6 of the 17 agencies indicated that their data or metadata were published through the clearinghouse, 10 indicated that their data were not published, and 1 indicated that some data were available through the clearinghouse. According to comments provided by agencies to FGDC in the annual report submissions, there are several reasons why agencies have not complied with their responsibilities under Circular A-16, including the lack of performance measures that link funding to coordination efforts. According to the Natural Resources Conservation Service, few incentives exist for cross-agency cooperation because budget allocations are linked to individual agency performance rather than to cooperative efforts. In addition, according to USGS, agencies’ activities and funding are driven primarily by individual agency missions and do not address interagency geospatial coordination. In addition to the information provided in the annual report, Department of Agriculture officials said that no clear performance measures exist linking funding to interagency coordination. OMB has recognized that potentially redundant geospatial assets need to be identified and that federal geospatial systems and information activities need to be coordinated. To help identify potential redundancies, OMB’s Administrator of E-Government and Information Technology testified in June 2003 that the agency uses three key sources of information: (1) business cases for planned or ongoing IT investments, submitted by agencies as part of the annual budget process; (2) comparisons of agency lines of business with the Federal Enterprise Architecture (FEA); and (3) annual reports compiled by FGDC and submitted to OMB. 
However, none of these major oversight processes have been effective tools to help OMB identify major redundancies in federal GIS investments. In their IT business cases, agencies must report the types of data that will be used, including geospatial data. According to OMB’s branch chief for information policy and technology, OMB reviews these business cases to determine whether any redundant geospatial investments are being funded. Specifically, the process for reviewing a business case includes comparing proposed investments, IT management and strategic plans, and other business cases, in an attempt to determine whether a proposed investment duplicates another agency’s existing or already-approved investment. However, business cases submitted to OMB under Circular A-11 do not always include enough information to effectively identify potential geospatial data and systems redundancies because OMB does not require such information in agency business cases. For example, OMB does not require that agencies clearly link information about their proposed or existing geospatial investments to the spatial data categories (themes) established by Circular A-16. Geospatial systems and data are ubiquitous throughout federal agencies and are frequently integrated into agencies’ mission-related systems and business processes. Business cases that focus on mission-related aspects of agency systems and data may not provide the information necessary to compare specific geospatial investments with other, potentially similar investments unless the data identified in the business cases are categorized to allow OMB to more readily compare data sets and identify potential redundancies. For example, FEMA’s fiscal year 2004 business case for its Multi-Hazard Flood Map Modernization project indicates that topographic and base data are used to perform engineering analyses for estimating flood discharge, developing floodplain mapping, and locating areas of interest related to hazards. 
However, FEMA does not categorize these data according to standardized spatial data themes specified in Circular A-16, such as elevation (bathymetric or terrestrial), transportation, and hydrography. As a result, it is difficult to determine whether the data overlap with other federal data sets. Without categorizing the data using the standard data themes as an important step toward coordinating that data, information about agencies’ planned or ongoing use of geospatial data in their business cases cannot be effectively assessed to determine whether it could be integrated with other existing or planned federal geospatial assets. An FEA is being constructed that, once it is further developed, may help identify potentially redundant geospatial investments. According to OMB, the FEA will comprise a collection of five interrelated reference models designed to facilitate cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration within and across federal agencies. According to recent GAO testimony on the status of the FEA, although OMB has made progress on the FEA, it remains a work in process and is still maturing. OMB has identified multiple purposes for the FEA. One purpose cited is to inform agencies’ individual enterprise architectures and to facilitate their development by providing a common classification structure and vocabulary. Another stated purpose is to provide a governmentwide framework that can increase agencies’ awareness of IT capabilities that other agencies have or plan to acquire, so that agencies can explore opportunities for reuse. Still another stated purpose is to help OMB decision makers identify opportunities for collaboration among agencies through the implementation of common, reusable, and interoperable solutions. We support the FEA as a framework for achieving these ends. 
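The earlier point about categorizing investments by the standard Circular A-16 data themes can be made concrete with a small sketch: once each investment is tagged with standard themes, flagging agencies that acquire the same kind of data becomes a simple grouping exercise. The agency names below are drawn from the examples in this testimony, but the investment records and theme tags themselves are hypothetical.

```python
# Sketch of how tagging investments with standard A-16 data themes
# would let a reviewer flag potentially overlapping acquisitions.
# Investment records and theme assignments are hypothetical.

from collections import defaultdict

investments = [
    {"agency": "FEMA", "project": "Flood Map Modernization",
     "themes": {"elevation", "hydrography"}},
    {"agency": "USGS", "project": "National Elevation Dataset",
     "themes": {"elevation"}},
    {"agency": "DOD", "project": "Installation mapping",
     "themes": {"elevation", "transportation"}},
]

def potential_overlaps(investments):
    """Group investments by theme; themes with >1 agency need review."""
    by_theme = defaultdict(list)
    for inv in investments:
        for theme in sorted(inv["themes"]):
            by_theme[theme].append(inv["agency"])
    return {t: agencies for t, agencies in by_theme.items()
            if len(agencies) > 1}

overlaps = potential_overlaps(investments)
# 'elevation' is flagged because all three agencies acquire it
```

A flagged theme is only a candidate for consolidation, not proof of waste; as the elevation example later in this testimony shows, agencies may have legitimate accuracy or coverage requirements that a shared data set would have to meet. The value of the tagging is that it surfaces the question at review time rather than after duplicate data have been purchased.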
According to OMB’s branch chief for information policy and technology, OMB reviews all new investment proposals against the federal government’s lines of business in its Business Reference Model to identify those investments that appear to have some commonality. Many of the model’s lines of business include areas in which geospatial information is of critical importance, including disaster management (the cleanup and restoration activities that take place after a disaster); environmental management (functions required to monitor the environment and weather, determine proper environmental standards, and address environmental hazards and contamination); and transportation (federally supported activities related to the safe passage, conveyance, or transportation of goods and people). The Service Component Reference Model includes specific references to geospatial data and systems. It is intended to identify and classify IT service components (i.e., applications) that support federal agencies and promote the reuse of components across agencies. The model includes 29 types of services—including customer relationship management and the visualization service, which defines capabilities that support the conversion of data into graphical or picture form. One component of the visualization service is associated with mapping, geospatial, elevation, and global positioning system services. Identification of redundant investments under the visualization service could provide OMB with information that would be useful in identifying redundant geospatial systems investments. Finally, the Data and Information Reference Model would likely be the most critical FEA element in identifying potentially redundant geospatial investments. According to OMB, this model will categorize the government’s information along general content areas and describe data components that are common to many business processes or activities. 
Although the FEA includes elements that could be used to help identify redundant investments, it is not yet sufficiently developed to be useful in identifying redundant geospatial investments. While the Business and Service Component reference models have aspects related to geospatial investments, the Data and Information Reference Model may be the critical element for identifying agency use of geospatial data because it is planned to provide standard categories of data that could support comparing data sets among federal agencies. However, this model has not yet been completed and thus is not in use. Until the FEA is completed and OMB develops effective analytical processes to use it, it will not be able to contribute to identifying potentially redundant geospatial investments. OMB Circular A-16 requires agencies to report annually to OMB on their achievements in advancing geographic information and related spatial data activities appropriate to their missions and in support of the NSDI. To support this requirement, FGDC has developed a structure for agencies to use to report such information in a consistent format and for aggregating individual agencies’ information. Using the agency reports, the committee prepares an annual report to OMB purportedly identifying the scope and depth of spatial data activities across agencies. For the fiscal year 2003 report, agencies were asked to respond to several specific questions about their geospatial activities, including (1) whether a detailed strategy had been developed for integrating geographic information and spatial data into their business processes, (2) how they ensure that data are not already available prior to collecting new geospatial data, and (3) whether geospatial data are a component of the agency’s enterprise architecture. However, additional information that is critical to identifying redundancies was not required. 
For example, agencies were not requested to provide information on their specific GIS investments or the geospatial data sets they collected and maintained. According to the FGDC staff director, the annual reports are not meant to provide an inventory of federal geospatial assets. As a result, they cannot provide OMB with sufficient information to identify redundancies in federal geospatial investments. Further, because not all agencies provide reports to FGDC, the information that OMB has available to identify redundancies is incomplete. According to OMB’s program examiner for the Department of the Interior, OMB does not know how well agencies are complying with the reporting requirements in Circular A-16. Until the information reported by agencies is consistent and complete, OMB will not be able to effectively use it to identify potential geospatial redundancies. According to OMB officials responsible for oversight of geospatial activities, the agency’s methods have not yet led to the identification of redundant investments that could be targeted for consolidation or elimination. The OMB officials said they believe that, with further refinement, these tools will be effective in the future in helping them identify redundancies. In addition, OMB representatives told us that they are planning to institute a new process to collect more complete information on agencies’ geospatial investments by requiring agencies to report all such investments through the Geospatial One-Stop Web portal. OMB representatives told us that reporting requirements for agencies would be detailed in a new directive that OMB expects to issue by the end of summer 2004. Without a complete and up-to-date strategy for coordination or effective investment oversight by OMB, federal agencies continue to acquire and maintain duplicative data and systems. 
According to the initial business case for the Geospatial One-Stop initiative, about 50 percent of the federal government’s geospatial data investment is duplicative. Such duplication is widely recognized. Officials from federal and state agencies and OMB have all stated that unnecessarily redundant geospatial data and systems exist throughout the federal government. The Staff Director of FGDC agreed that redundancies continue to exist throughout the federal government and that more work needs to be done to specifically identify them. DHS’s Geospatial Information Officer also acknowledged redundancies in geospatial data acquisitions at his agency, and said that DHS is working to create an enterprisewide approach to managing geospatial data in order to reduce redundancies. Similarly, state representatives to the National States Geographic Information Council have identified cases in which they have observed multiple federal agencies funding the acquisition of similar data to meet individual agency needs. For example, USGS, FEMA, and the Department of Defense (DOD) each maintain separate elevation data sets: USGS’s National Elevation Dataset, FEMA’s flood hazard mapping elevation data program, and DOD’s elevation data regarding Defense installations. FEMA officials indicated that they obtained much of their data from state and local partners or purchased them from the private sector because data from those sources better fit their accuracy and resolution requirements than elevation data available from USGS. Similarly, according to one Army official, available USGS elevation data sets generally do not include military installations, and even when such data are available for specific installations, they are typically not accurate enough for DOD’s purposes. As a result, DOD collects its own elevation data for its installations. 
In this example, if USGS elevation data-collection projects were coordinated with FEMA and DOD to help ensure that the needs of as many federal agencies as possible were met through the project, potentially costly and redundant data-collection activities could be avoided. According to the USGS Associate Director for Geography, USGS is currently working to develop relationships with FEMA and DOD, along with other federal agencies, to determine where these agencies' data-collection activities overlap. In another example, officials at the Department of Agriculture and the National Geospatial-Intelligence Agency (NGA) both said they have purchased data sets containing street-centerline data from commercial sources, even though the Census Bureau maintains such data in its TIGER database. According to these officials, they purchased the data commercially because they had concerns about the accuracy of the TIGER data. The Census Bureau is currently working to enhance its TIGER data in preparation for the 2010 census, and a major objective of the project is to improve the accuracy of its street location data. However, despite Agriculture's and NGA's use of street location data, Census did not include either agency in the TIGER enhancement project plan's list of agencies that will be affected by the initiative. Without better coordination, agencies such as Agriculture and NGA are likely to continue to need to purchase redundant commercial data sets in the future. In summary, although various cross-government committees and initiatives, individual federal agencies, and OMB have each taken actions to coordinate the government's geospatial investments across agencies and with state and local governments, agencies continue to purchase and maintain uncoordinated and duplicative geospatial investments. Without better coordination, such duplication is likely to continue. 
In order to improve the coordination of federal geospatial investments, our report recommends that the Director of OMB and the Secretary of the Interior direct the development of a national geospatial data strategy with outcome-related goals and objectives; a plan for how the goals and objectives are to be achieved; identification of key risk factors; and performance measures. Our report also recommends that the Director of OMB develop criteria for assessing the extent of interagency coordination on proposals for potential geospatial investments. Based on these criteria, funding for potential geospatial investments should be delayed or denied when coordination is not adequately addressed in agencies’ proposals. Finally, our report provides specific recommendations to the Director of OMB in order to strengthen the agency’s oversight actions to more effectively coordinate federal geospatial data and systems acquisitions and thereby reduce potentially redundant investments. Mr. Chairman, this concludes my testimony. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information regarding this statement, please contact me at (202) 512-6240 or by e-mail at koontzl@gao.gov. Other key contributors to this testimony included Neil Doherty, John de Ferrari, Michael P. Fruitman, Michael Holland, Steven Law, and Elizabeth Roach. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The collection, maintenance, and use of location-based (geospatial) information are essential to federal agencies carrying out their missions. Geographic information systems (GIS) are critical elements used in the areas of homeland security, healthcare, natural resources conservation, and countless other applications. GAO was asked to review the extent to which the federal government is coordinating the efficient sharing of geospatial assets, including through Office of Management and Budget (OMB) oversight. GAO's report on this matter, Geospatial Information: Better Coordination Needed to Identify and Reduce Duplicative Investments (GAO-04-703), is being released today. GAO's testimony focuses on the extent to which the federal government is coordinating the sharing of geospatial assets, including through oversight measures in place at OMB, in order to identify and reduce redundancies in geospatial data and systems. OMB, cross-government committees, and individual federal agencies have taken actions to coordinate geospatial investments across agencies and with state and local governments. However, these efforts have not been fully successful because (1) the existing strategic plan for coordinating national geospatial resources and activities is out of date and lacks specific measures for identifying and reducing redundancies; (2) federal agencies are not consistently complying with OMB direction to coordinate their investments; and (3) OMB's oversight methods have not been effective in identifying or eliminating instances of duplication, in part because OMB has not collected consistent, key investment information from all agencies. Consequently, agencies continue to independently acquire and maintain potentially duplicative systems. This costly practice is likely to continue unless coordination is significantly improved.
The ability to produce the information needed to efficiently and effectively manage the day-to-day operations of the federal government and provide accountability to taxpayers and the Congress has been a long-standing challenge for federal agencies. To help address this challenge, many agencies are in the process of replacing their core financial systems as part of their financial management system improvement efforts. Although the implementation of any major system is not a risk-free proposition, organizations that follow and effectively implement disciplined processes can reduce these risks to acceptable levels. The use of the term acceptable levels acknowledges the fact that any systems acquisition has risks and will suffer the adverse consequences associated with defects. However, effective implementation of the disciplined processes reduces the potential for risks to occur and helps prevent those that do occur from having any significant adverse impact on the cost, timeliness, and performance of the project. A disciplined software development and acquisition process can maximize the likelihood of achieving the intended results (performance) within established resources (costs) on schedule. Although there is no standard set of practices that will ever guarantee success, several organizations, such as the Software Engineering Institute (SEI) and the Institute of Electrical and Electronics Engineers (IEEE), as well as individual experts have identified and developed the types of policies, procedures, and practices that have been demonstrated to reduce development time and enhance effectiveness. The key to having a disciplined system development effort is to have disciplined processes in multiple areas, including project planning and management, requirements management, configuration management, risk management, quality assurance, and testing. 
Effective processes should be implemented in each of these areas throughout the project life cycle because change is constant. Effectively implementing the disciplined processes necessary to reduce project risks to acceptable levels is hard to achieve because a project must effectively implement several best practices, and inadequate implementation of any one practice may significantly reduce or even eliminate the positive benefits of the others. Successfully acquiring and implementing a new financial management system requires a process that starts with a clear definition of the organization's mission and strategic objectives and ends with a system that meets specific information needs. We have seen many system efforts fail because agencies started with a general need, such as improving financial management, but did not define in precise terms (1) the specific problems they were trying to solve, (2) what their operational needs were, and (3) what specific information requirements flowed from these operational needs. Instead, they plunged into the acquisition and implementation process in the belief that these specifics would somehow be defined along the way. The typical result was that systems were delivered well past anticipated milestones; failed to perform as expected; and, accordingly, were over budget because of required costly modifications. Undisciplined projects typically show a great deal of productive work at the beginning of the project, but the rework associated with defects begins to consume more and more resources. In response, processes are adopted in the hope of managing what later turns out to have been unproductive work. Generally, these processes are "too little, too late" because sufficient foundations for building the systems were either not established or not established adequately. 
Experience has shown that projects for which disciplined processes are not implemented at the beginning are forced to implement them later, when it takes more time and the processes are less effective. A major consumer of project resources in undisciplined efforts is rework (also known as thrashing). Rework occurs when the original work has defects or is no longer needed because of changes in project direction. Disciplined organizations focus their efforts on reducing the amount of rework because it is expensive. Experts have reported that fixing a defect during the testing phase costs anywhere from 10 to 100 times the cost of fixing it during the design or requirements phase. Projects that are unable to successfully address their rework will eventually spend their time only on rework and the associated processes rather than on productive work; in other words, such projects continually find themselves reworking items. We found that HHS had adopted some best practices in its development of UFMS. The project had support from senior officials and oversight by independent experts, commonly called independent verification and validation (IV&V) contractors. We also view HHS' decision to follow a phased implementation to be a sound approach. However, at the time of our review, HHS had not effectively implemented several disciplined processes essential to reducing risks to acceptable levels and therefore key to a project's success, and had adopted other practices that put the project at unnecessary risk. HHS officials told us that they had carefully considered the risks associated with implementing UFMS and that they had put in place strategies to manage these risks and to allow the project to meet its schedule within budget. 
However, we found that HHS had focused on meeting its schedule to implement the first phase of the new system at the Centers for Disease Control and Prevention (CDC) in October 2004, to the detriment of disciplined processes, and thus had introduced unnecessary risks that may compromise the system's cost, schedule, and performance. We would now like to briefly highlight a few of the key disciplined processes that HHS had not fully implemented at the time of our review. These matters are discussed in detail in our report. Requirements management. Requirements are the specifications that system developers and program managers use to design, develop, and acquire a system. Requirements management deficiencies have historically been a root cause of systems that do not meet their cost, schedule, and performance objectives. Effective requirements management practices are essential for ensuring the intended functionality will be included in the system and are the foundation for testing. We found significant problems in HHS' requirements management process, including that HHS had not developed requirements that were clear and unambiguous. Testing. Testing is the process of executing a program with the intent of finding errors. Without adequate testing, an organization (1) is taking a significant risk that substantial defects will not be detected until after the system is implemented and (2) does not have reasonable assurance that new or modified systems will function as planned. We found that HHS faced challenges in implementing a disciplined testing program because, first, it did not have an effective requirements management process that produced clear, unambiguous requirements upon which to build its testing efforts. In addition, HHS had scheduled its testing activities, including those for converting data from existing systems to UFMS, late in the implementation cycle, leaving little time to correct defects identified before the scheduled deployment in October 2004. 
Project management and oversight using quantitative measures. We found that HHS did not have quantitative metrics that allowed it to fully understand (1) its capability to manage the entire UFMS effort; (2) how problems in its management processes would affect the UFMS cost, schedule, and performance objectives; and (3) the corrective actions needed to reduce the risks associated with the problems identified with its processes. Such quantitative measures are essential for adequate project management oversight. Without such information, HHS management can only focus on the project schedule and whether activities have occurred as planned, not on whether the substance of the activities achieved their system development objectives. As we note in our report, this is not an effective practice. Risk management. We noted that HHS routinely closed its identified risks on the premise that they were being addressed. Risk management is a continuous process to identify, monitor, and mitigate risks to ensure that the risks are being properly controlled and that new risks are identified and resolved as early as possible. An effective risk management process is designed to mitigate the effects of undesirable events at the earliest possible stage to avoid costly consequences. In addition, HHS' effectiveness in managing the processes associated with its data conversion and UFMS interfaces will be critical to the success of this project. For example, CDC's ability to convert data from its existing systems to the new system will be crucial to helping ensure that UFMS will provide the kind of data needed to manage CDC's programs and operations. The adage "garbage in, garbage out" best describes the adverse impact of poor data conversion. Furthermore, HHS expects that once UFMS is fully deployed, the system will need to interface with about 110 other systems, of which about 30 system interfaces are needed for the CDC deployment. 
Proper implementation of the interfaces between UFMS and the other systems it receives data from and sends data to is essential for providing data integrity and ensuring that UFMS will operate as it should and provide the information needed to help manage its programs and operations. Compounding these UFMS-specific problems are departmentwide weaknesses we have previously reported in information technology (IT) investment management, enterprise architecture, and information security. Specifically, HHS had not established the IT management processes needed to provide UFMS with a solid foundation for development and operation. Such IT weaknesses increase the risk that UFMS will not achieve planned results within the estimated budget and schedule. We will now highlight the IT management weaknesses that HHS must overcome: Investment management. HHS had weaknesses in the processes it uses to select and control its IT investments. Among the weaknesses we previously identified, HHS had not (1) established procedures for the development, documentation, and review of IT investments by its review boards or (2) documented policies and procedures for aligning and coordinating investment decision making among its investment management boards. Until HHS addresses weaknesses in its selection or control processes, IT projects like UFMS will face an increased likelihood that the projects will not be completed on schedule and within estimated costs. Enterprise architecture. While HHS is making progress in developing an enterprise architecture that incorporates UFMS as a central component, most of the planning and development of the UFMS IT investment had occurred without the guidance of an established enterprise architecture. An enterprise architecture is an organizational blueprint that defines how an entity operates today and how it intends to operate in the future and invest in technology to transition to this future state. 
Our experience with other federal agencies has shown that projects developed without the constraints of an established enterprise architecture are at risk of being duplicative, not well integrated, unnecessarily costly to maintain and interface, and ineffective in supporting missions. Information security. HHS had not yet fully implemented the key elements of a comprehensive security management program and had significant and pervasive weaknesses in its information security controls. The primary objectives of information security controls are to safeguard data, protect computer application programs, prevent unauthorized access to system software, and ensure continued operations. Without adequate security controls, UFMS cannot provide reasonable assurance that the system is protected from loss due to errors, fraud and other illegal acts, disasters, and incidents that cause systems to be unavailable. Finally, we believe it is essential that an agency take the necessary steps to ensure that it has the human capital capacity to design, implement, and operate a financial management system. We found that staff shortages and limited strategic workforce planning have resulted in the project not having the resources needed to effectively design, implement, and operate UFMS. We identified the following weaknesses: Staffing. HHS had not filled positions in the UFMS Program Management Office that were identified as needed. Proper human capital planning includes identifying the workforce size, skills mix, and deployment needed for mission accomplishment and creating strategies to fill the gaps. Scarce resources could jeopardize the project's success and have already led to several key UFMS deliverables falling significantly behind schedule. Strategic workforce planning. HHS had not yet fully developed key workforce planning tools, such as the CDC skills gap analysis, to help transform its workforce so that it can effectively use UFMS. 
Strategic workforce planning focuses on developing long-term strategies for acquiring, developing, and retaining an organization's total workforce (including full- and part-time federal staff and contractors) to meet the needs of the future. Strategic workforce planning is essential for achieving the mission and goals of the UFMS project. By not identifying staff with the requisite skills to operate such a system and by not identifying gaps in needed skills and filling them, HHS has not optimized its chances for the successful implementation and operation of UFMS. To address the range of problems we have just highlighted, our report includes 34 recommendations that focus on mitigating the risks associated with this project. We made 8 recommendations related to the initial deployment of UFMS at CDC that are specifically tied to implementing critical disciplined processes. In addition, we recommended that until these 8 recommendations are substantially addressed, HHS delay the initial deployment. The remaining 25 recommendations were centered on developing an appropriate foundation for moving forward and focused on (1) disciplined processes, (2) IT security controls, and (3) human capital issues. In its September 7, 2004, response to a draft of our report, HHS disagreed with our findings regarding management of the project and whether disciplined processes were being followed. In its comments, HHS characterized the risk in its approach as the result, not of a lack of disciplined processes, but of an aggressive project schedule. From our perspective, this project demonstrated the classic symptoms of a schedule-driven effort for which key processes had been omitted or cut short, thereby unnecessarily increasing risk. As we mentioned at the outset of our testimony, this is a multiyear project with an estimated completion date in fiscal year 2007 and a total estimated cost of over $700 million. 
With a project of this magnitude and importance, we stand by our position that it is crucial for the project to adhere to disciplined processes that represent best practices. Therefore, in order to mitigate its risk to an acceptable level, we continue to believe it is essential for HHS to adopt and effectively implement our 34 recommendations. In commenting on our draft report, HHS also indicated that actions had either been taken, were under way, or were planned that address a number of our recommendations. In addition, HHS subsequently contacted us on September 23, 2004, to let us know that it had decided to delay the implementation of a significant amount of functionality associated with the CDC deployment from October 2004 until April 2005 in order to address the issues that had been identified with the project. HHS also provided us with copies of IV&V reports and other documentation that had been developed since our review. Delaying implementation of significant functionality at CDC is a positive step forward given the risks associated with the project. This delay, by itself, will not reduce the risk to an acceptable level, but will give HHS a chance to implement the disciplined processes needed to do so. HHS will face a number of challenges in the upcoming 6 months to address the weaknesses in its management of the project that were discussed in our report. At a high level, the key challenge will be to implement an event-driven project based on effectively implemented disciplined processes, rather than a schedule-driven project. It will be critical as well to address the problems noted in the IV&V reports that were issued during and subsequent to our review. If the past is prologue, taking the time to adhere to disciplined processes will pay dividends in the long term. Mr. Chairman, this concludes our statement. We would be pleased to answer any questions you or other members of the Subcommittee may have at this time. 
For further information about this statement, please contact Jeffrey C. Steinhoff, Managing Director, Financial Management and Assurance, who may be reached at (202) 512-2600 or by e-mail at steinhoffj@gao.gov, or Keith A. Rhodes, Chief Technologist, Applied Research and Methodology Center for Engineering and Technology, who may be reached at (202) 512-6412 or by e-mail at rhodesk@gao.gov. Other key contributors to this testimony include Kay Daly, Michael LaForge, Chris Martin, and Mel Mench.
GAO has previously reported on systemic problems the federal government faces in achieving the goals of financial management reform and the importance of using disciplined processes for implementing financial management systems. As a result, the Subcommittee on Government Efficiency and Financial Management, House Committee on Government Reform, asked GAO to review and evaluate agencies' plans and ongoing efforts for implementing financial management systems. The results of GAO's review of the Department of Health and Human Services' (HHS) ongoing effort to develop and implement the Unified Financial Management System (UFMS) are discussed in detail in the report Financial Management Systems: Lack of Disciplined Processes Puts Implementation of HHS' Financial System at Risk (GAO-04-1008). In this report, GAO makes 34 recommendations focused on mitigating risks associated with the project. In light of this report, the Subcommittee asked GAO to testify on the challenges HHS faces in implementing UFMS. HHS had not effectively implemented several disciplined processes, which are accepted best practices in systems development and implementation, and had adopted other practices that put the project at unnecessary risk. Although the implementation of any major system is not a risk-free proposition, organizations that follow and effectively implement disciplined processes can reduce these risks to acceptable levels. While GAO recognized that HHS had adopted some best practices related to senior-level support, oversight, and phased implementation, GAO noted that HHS had focused on meeting its schedule to the detriment of disciplined processes. GAO found that HHS had not effectively implemented several disciplined processes to reduce risks to acceptable levels, including requirements management, testing, project management and oversight using quantitative measures, and risk management. 
Compounding these problems are departmentwide weaknesses in information technology management processes needed to provide UFMS with a solid foundation for development and operation, including investment management, enterprise architecture, and information security. GAO also identified human capital issues that significantly increase the risk that UFMS will not fully meet one or more of its cost, schedule, and performance objectives, including staffing and strategic workforce planning. HHS stated that it had an aggressive implementation schedule, but disagreed that a lack of disciplined processes was placing the UFMS program at risk. GAO firmly believes that if HHS continues to follow an approach that is schedule-driven and shortcuts key disciplined processes, it is unnecessarily increasing its risk. GAO stands by its position that adherence to disciplined processes is crucial, particularly with a project of this magnitude and importance. HHS indicated that it plans to delay deployment of significant functionality associated with its UFMS project for at least 6 months. This decision gives HHS a good opportunity to effectively implement disciplined processes to enhance the project's chances for success.
The Under Secretary of Defense for Personnel and Readiness has overall responsibility for the Training Transformation Program and, through the use of the Training Transformation Executive Steering Group and the Training Transformation Senior Advisory Group, oversees the execution of three capabilities or initiatives: JNTC, the Joint Knowledge Development and Distribution Capability, and the Joint Assessment and Enabling Capability. According to the 2006 Training Transformation Implementation Plan, these three initiatives are designed to prepare individuals, units, and staff for the new strategic environment and to provide enabling tools and processes to carry out joint missions. Specifically: The JNTC, focusing on collective training, is expected to prepare forces by providing units and command staff with integrated live, virtual, and constructive training environments. This initiative would add enhanced service and combatant command training that emphasizes jointness and enables global training and mission rehearsal in support of combatant command operations. The Joint Knowledge Development and Distribution Capability, which focuses on individual training, is intended to prepare future decision makers and leaders to better understand joint operations and the common operational picture, as well as to respond innovatively to adversaries. It develops and distributes joint knowledge via a dynamic, global-knowledge network that provides immediate access to joint education and training resources. The Joint Assessment and Enabling Capability is expected to assist leaders in assessing the value of transformational training initiatives to individuals, organizations, and processes, and to link the impact of the Training Transformation Program to combatant commanders' readiness requirements. This initiative is also supposed to provide the processes and tools to continuously improve joint training. 
The JNTC initiative, the leading initiative for training transformation, is managed by a Joint Management Office within the Joint Forces Command's Joint Trainer Directorate. This Joint Management Office, which was established in 2003, manages the operational, technical, and program resources necessary to implement the initiative. The Joint Management Office coordinates its management of the initiative with the Office of the Secretary of Defense through senior and executive advisory groups. The overall purpose of the JNTC initiative is to provide a persistent capability to combatant command and service training programs to create an appropriate, realistic joint operating environment within their existing training activities. To accomplish this, DOD plans to spend about $1.5 billion on the JNTC initiative through fiscal year 2011, or 84 percent of total training transformation funding. The JNTC initiative was deemed by the Deputy Under Secretary of Defense for Readiness to be operationally capable in October 2004—indicating that the initial infrastructure of networked sites and systems needed to enhance the joint environment of training exercises was in place. During fiscal year 2005, 16 combatant command and service training events were selected by the Joint Forces Command to enhance their joint training environment through the JNTC initiative. The JNTC initiative includes several key efforts to enhance the joint training environment of combatant commands and services. These efforts include the following: Development of joint task articles. This is an effort to enhance the level of detail associated with joint mission essential tasks that are identified by the combatant commands as critical to joint operations, and to provide joint trainers specific guidance for developing exercises and other joint training activities. 
These task articles are a set of processes, procedures, or actions that address critical horizontal (actions between services) and vertical (actions between a service and a higher joint force command) elements of interoperability for specific joint mission essential tasks. Currently, the JNTC initiative has developed 156 approved articles and has 88 in various stages of development. Joint training and experimentation network. The joint network is intended to be a large-capacity communications network that will provide persistent support to joint training exercises, service stand-alone events, exercise preparation and rehearsal, experimentation, evaluation of advanced training technologies, and evaluation of new warfighting concepts. The network, when complete, will enable the Joint Forces Command to use live, virtual, and constructive simulations in concert to enhance the joint training environment for combatant commands and services. By the end of fiscal year 2005, the joint network had been expanded to 33 sites, including one in Hawaii and one in Germany. Accreditation and certification programs. The JNTC accreditation program works toward ensuring that combatant command and service joint training programs include the appropriate joint environment for the joint tasks being trained. Accreditation is program-centric, whereby entire service and combatant command training programs are evaluated and accredited for training selected joint tasks in a joint environment that meets specific joint standards or conditions. The intent of this effort is to establish a process that ensures delivery of a recurring, consistent, realistic training environment for all units participating in joint training, regardless of the locations from which they are participating. The Joint Forces Command accredited 4 training programs during fiscal year 2005 and is expecting to accredit 23 more programs during 2006. 
JNTC’s certification effort concurrently ensures that the technical aspects of the training programs—the sites and the systems comprising the training infrastructure, networks, and ranges—support the accredited training program. The JNTC initiative, now 1 year after being deemed initially operationally capable, is expected to reach full operational capability in 2010, when it will provide a global joint training network that allows live, virtual, and constructive participation by services, combatant commands, and coalition and interagency partners in accredited training programs. The full extent to which the JNTC initiative has improved the ability of the services and combatant commands to train jointly is not clear because DOD has not yet assessed the full impact of the JNTC initiative efforts on joint training or developed a strategy for conducting such an assessment. Based on our examination of 5 of the 16 fiscal year 2005 exercises that the Joint Forces Command helped to fund and enhance through the JNTC initiative, we found indications that the JNTC initiative has begun to improve joint training. Examples we found include increases in the use of joint objectives and increases in participation by other services. Without a comprehensive assessment of the JNTC initiative’s enhancement of joint training programs, DOD has no assurance that its investment in the initiative will produce the desired results. DOD has not yet assessed the full impact of JNTC efforts on joint training; therefore, both the overall impact the JNTC initiative is having on joint training programs and the extent to which it is achieving the program’s goals are unclear. The Training Transformation Implementation Plan does not include a requirement for the JNTC Joint Management Office to assess whether the JNTC initiative has improved the ability of the services and combatant commands to train jointly.
The JNTC Joint Management Office receives feedback through working with the services and combatant commands, but no formal evaluation of JNTC efforts has been conducted. The individual services and combatant commanders are aware of JNTC enhancements to their specific training programs and resulting improvements and are documenting some of these enhancements in after action reports and lessons learned reports. For example, the Navy’s preliminary assessment of its Fleet Synthetic Training — Joint 05-2 exercise stated that the value added through rapid delivery and improved interoperability and repeatability of the Fleet Synthetic training capability and the joint network will save operating costs while providing quality joint and coalition training to the warfighter. Additionally, as a result of a lesson learned during Terminal Fury 05, JNTC funds were used to develop an analytical tool that enabled analysts to search through recorded exercise model data and replay selected training exercises, which helped explain to the exercise control group and the training audience how and why a particular event occurred. However, these types of reports do not provide an overall assessment of the collective impact JNTC efforts are having on joint training. DOD’s most recent assessment of its training transformation efforts, conducted by the Joint Assessment and Enabling Capability in support of the Office of the Under Secretary of Defense for Personnel and Readiness, is known as the training transformation block assessment. This block assessment, conducted for the first time in 2005, is the primary mechanism for providing feedback to senior DOD leadership on how well DOD is meeting its training transformation goals. The block assessment is to provide an assessment every 2 years that measures, guides, and evaluates the progress of the training transformation initiatives, including the JNTC initiative. 
These evaluations are intended by DOD to be an innovative use of performance assessment tools, techniques, and policies, using well-defined metrics to provide a feedback capability to the leadership. Additionally, DOD recently announced its plans to conduct a Joint Training Program Review during mid-2006 to examine training transformation efforts and to realign these efforts with the recent Quadrennial Defense Review Report and program strategic guidance. Our prior work, the 2006 Quadrennial Defense Review Report, and DOD’s Training Transformation Implementation Plan emphasize the importance of establishing performance metrics that set time frames and measurable outcomes to gauge the success of program implementation. GAO’s Human Capital: A Guide on Assessing Strategic Training Programs and Development Efforts in the Federal Government emphasizes the importance of using program performance information to assess the progress that training and development programs make toward achieving results. The guide states that agencies need to collect data corresponding to established training objectives throughout the implementation process to refine and continually improve, deliver, and enhance learning. Furthermore, the guide asserts that it is important for agencies to develop and use outcome-oriented performance measures to ensure accountability and to assess progress toward achieving results aligned with the agencies’ missions and goals. The Quadrennial Defense Review Report emphasizes that each initiative is accountable for measuring performance and delivering results that support the departmentwide strategy. DOD’s Training Transformation Implementation Plan requires periodic reviews to assess the success of its Training Transformation Program. According to the plan, every 2 years, a formal program assessment should be conducted to measure the impact of training transformation initiatives on joint force readiness.
The results of those assessments are intended to help leaders decide strategy modifications and subsequent investments. DOD’s initial 2005 training transformation block assessment did not evaluate the JNTC initiative’s collective impact on joint training. According to DOD officials, this assessment was not expected to provide a comprehensive evaluation of the JNTC initiative’s impact on joint training because the initiative is still early in its implementation. Instead, it served as a baseline or framework for identifying joint training measurements for future assessments, and provided a status of the JNTC initiative’s efforts implemented to date. However, the 2005 assessment did not address training efficiency measured by specific cost, schedule, and outcome-oriented performance metrics. Specifically, the 2005 assessment highlighted some progress: (1) the JNTC initiative is providing more joint training through accreditation and certification; (2) combatant command joint mission essential tasks are addressed in events and integrated into training objectives for each event; and (3) rapidly configurable, persistent training networks, such as the Joint Training and Experimentation Network, are a current reality and are being improved. However, the 2005 Training Transformation Assessment Report noted that because of the wide variation of joint training activities, the task of developing metrics that supported effective assessment and corresponding reporting of program progress was unduly complicated. The 2005 block assessment did identify 10 metrics DOD is considering for its future assessments, such as the 2007 block assessment. These metrics include the percentage of combatant command joint mission essential tasks trained in the joint exercise; the number of programs accredited and certified; and the number of participants using JNTC resources.
However, many of these metrics are output oriented rather than outcome-oriented performance measures, which are necessary to gauge the success of program implementation. Additionally, one of the block assessment’s recommendations is to institutionalize a process to develop metrics for training transformation exercises for use in future assessments. However, DOD has not finalized its plans for which metrics are to be assessed or identified the time frames and processes it will employ for obtaining data. Because DOD has not finalized its metrics or identified a process to collect the data, training transformation officials stated that it may be difficult to show the impact of JNTC efforts on joint training even in the 2007 block assessment. Without a comprehensive assessment of JNTC’s enhancement of joint training programs, DOD has no assurance that the money invested in the JNTC initiative will produce the desired results of providing combatant commanders with better prepared forces aligned with their joint operational needs or maximize the benefit of DOD’s investment. Even though DOD has yet to assess the overall impact of the JNTC initiative on joint training, our analysis found indications of potential improvements, such as events that include more joint objectives and allow for more joint participation. According to the Chairman, Joint Chiefs of Staff Instruction, joint training is defined as “Military training based on joint doctrine or joint tactics, techniques, and procedures to prepare joint forces and/or joint staff to respond to strategic and operational requirements deemed necessary by combatant commanders to execute their assigned missions.
Joint training involves forces of two or more Military Departments interacting with a combatant commander or subordinate joint force commander; involves joint forces and/or joint staffs; and is conducted using joint doctrine or joint tactics, techniques, and procedures.” Based on this definition, we selected several attributes to evaluate the effect the JNTC initiative had on joint training. Specifically, we determined whether selected JNTC events conducted in 2005 reflected the following: increased use of joint training objectives, increased use of joint task articles, increased involvement of other services, increased use of virtual and constructive training capabilities, and persistent capabilities added to exercises funded by the JNTC initiative. DOD officials reviewed the attributes listed above and agreed their use was appropriate in evaluating the effect of the JNTC initiative on joint training. We analyzed 5 of 16 exercises conducted in fiscal year 2005 that the Joint Forces Command helped to fund and enhance through the JNTC initiative. Table 1 describes the exercises selected for our analysis. Enhancements to the exercises brought about by the JNTC initiative were many and varied. Some of the improvements purchased with JNTC funds included radios, aircraft instrument pods, threat emitters that imitated ground base enemy radar, and cruise missile simulators. The Air Force used JNTC funds to help establish an Air Support Operations Center that improved the realism of the Air Warrior I exercise by including real-world joint operational organizations. The Joint Forces Command also used JNTC funds to make improvements in computer models used in the Terminal Fury exercise. Aircraft, including former Russian aircraft, were obtained with JNTC funds to act as opposing forces in the Weapons and Tactics Instructor course.
JNTC funds were also used to hire personnel and place them at service and combatant command headquarters to assist in improving the joint environment of existing exercises. Finally, the joint network is supported and funded by JNTC funds, allowing a large number of simulators and constructive models from around the country to connect and interact in support of training programs. To analyze the exercises, we developed a comparative analysis based on the attributes discussed above. This practice allowed us to determine the measure of change in attributes for each selected training exercise prior to fiscal year 2005 and afterwards. We obtained and reviewed exercise documentation, such as exercise planning documents and after action reports for selected exercises, to determine the measure of change in the exercise based on our attributes. Our analysis revealed indications that some joint training improvements were made in each of the exercises we assessed. Table 2 summarizes the results of our analysis. Our analysis of the five exercises and discussions held with exercise planners at two of these exercises revealed several key areas in which indications exist that the JNTC initiative has begun to improve joint training.

Increased use of joint training objectives. Our analysis found that for four of the five exercise events we reviewed, the services increased the number of joint training objectives to which they trained. For example, prior to being enhanced by the JNTC initiative, Air Warrior I’s exercise objectives were determined by the squadron commanders and were focused on achieving service-specific objectives. After JNTC, service-specific training objectives were modified to include some joint training objectives, such as conducting a joint, live-fire event within a realistic combat scenario and employing real-time joint and combined fires.
In another exercise, according to the exercise planner, JNTC efforts enabled the Navy Fleet Synthetic Training — Joint exercise to include Army and Air Force units in its exercise. Through the participation of the Army and Air Force units, the Navy began including joint interoperability training objectives in exercise planning documents for both Navy Fleet Synthetic Training — Joint 05-2 and 06-1. The National Training Center’s primary focus, both before and after the JNTC initiative, has been on accomplishing service-specific training objectives. However, after the JNTC initiative’s involvement, the National Training Center has added some joint and interoperability tasks in its exercises, although these tasks are subordinate to the service-specific training objectives. Prior to the JNTC initiative, the Marine Corps Weapons and Tactics Instructor course trained to the six functions of Marine Corps aviation, which had some joint aspects. After JNTC designation, the Marine Corps continued to train to the six functions of Marine Corps aviation, but it began using several joint tactical tasks and joint training objectives in the exercise.

Increased use of joint task articles. The Air Force and the Navy used joint task articles in enhancing their Air Warrior I and Fleet Synthetic Training — Joint exercises. Joint task articles detail the integrated tasks and steps necessary to provide a specific warfighting capability to a joint force commander and are based on the joint mission essential tasks. Air Force officials compared the task article for close air support with current practices at Air Warrior I and identified deficiencies in the procedures used during these exercises prior to 2005. Steps were then taken to correct the deficiencies, which included adding an Air Support Operations Center. Consequently, Air Warrior I exercises are now conducted more in line with close air support joint doctrine.
The Navy also made extensive use of the task articles in preparing for its accreditation review.

Increased involvement of other services. Four of the five exercises we examined showed that participation had expanded to include more services when compared to years before the JNTC enhancements were included. The fifth exercise was a combatant command exercise that was already joint and did not show an increase in the participation of other services as a result of JNTC efforts. Joint training requires the involvement of two or more services; therefore, the JNTC initiative used a variety of means, such as additional funding and the Joint Training and Experimentation Network, to increase the participation of other services in an exercise. As a result, Navy and Marine air units and the staff of the Commander, Third Fleet, participated in the National Training Center/Air Warrior I exercises in 2005. Army and Air Force units participated in the Navy’s Fleet Synthetic Training — Joint in 2005 and 2006. A NATO Airborne Warning and Control aircraft joined the Marine Corps Weapons and Tactics Instructor course exercise in 2005, and a similar unit from the United Kingdom plans to participate in 2006.

Increased use of virtual and constructive training capabilities. Our analysis showed that key virtual and constructive training capabilities made possible by the use of the Joint Training and Experimentation Network have had a positive impact on three of the five exercises we examined. The joint network is a persistent, rapidly reconfigurable communications network that connects multiple training sites. According to Navy training exercise planners, the joint network is what allowed the Fleet Synthetic Training — Joint exercises to include Army and Air Force simulators to participate in the exercise.
For example, we observed, during a recent Fleet Synthetic Training — Joint exercise, Army and Navy operators using virtual and constructive capabilities to track an incoming missile attack and coordinate a joint response. Without the joint network, the Fleet Synthetic Training — Joint exercises would likely have remained solely a Navy exercise. According to Terminal Fury exercise planners, the joint network improved Terminal Fury by increasing the capacity to include a larger number of constructive models in the exercise. For example, Tactical Simulation is a very large intelligence model used to simulate the entire spectrum of intelligence operations. Prior to the joint network, the Tactical Simulation model was not included in the exercise because the model was too large to transport to Hawaii. The joint network provided the means to connect the Tactical Simulation model to the exercise from its home station in the continental United States. In addition, Terminal Fury participants are spread out over a wide area, including several sites in Hawaii and the continental United States. According to Terminal Fury exercise planners, two tools made possible by the joint network, Video Teleconferencing and Voice Over Internet Protocol, provided the means by which these geographically separated sites could coordinate the execution of the exercise. Finally, according to a Marine Corps official, the joint network has aided the Marine Corps Weapons and Tactics Instructor Course in developing exercise scenarios, executing the exercise, and connecting a virtual Unmanned Aerial Vehicle to the exercise.

Persistent capabilities added to exercises funded by the JNTC initiative. All five exercises reviewed received enhancements that will continue to benefit these exercises into the future.
Each exercise received a persistent link to the joint network and embedded Support Element staff hired to assist service and combatant command headquarters in adding joint capabilities to their exercises. In addition, the Air Force received radios and aircraft instrument pods for Air Warrior I, computer model improvements were made for Terminal Fury, and the National Training Center received surrogate weapons for its opposing force. All these persistent capabilities were procured with JNTC funds. In addition to the improvements noted above, we also found that the JNTC initiative has reduced some of the travel and transportation costs associated with one of the five exercises we examined. Specifically, a number of the constructive models used in the Terminal Fury exercise are based in the continental United States. In prior years, the hardware and supporting personnel would have to travel to Hawaii to participate in the exercise. Since the joint network connected these models to the exercise from their home stations, there was no need to move the hardware and support staff to Hawaii for the exercise. Finally, there are a number of JNTC efforts under way to further improve joint training. For example, in future iterations of the National Training Center/Air Warrior I exercise, the Air Force would like to use the joint network to include a Joint Surveillance Target Attack Radar System aircraft simulator to create a realistic joint environment. Due to their limited number and the high demand for these aircraft, the planes are not always available to participate in the exercises. The joint network will allow the use of these aircraft simulators in the National Training Center/Air Warrior I exercise by having them participate virtually from their home stations. 
In addition to increasing the availability of these aircraft virtually in future exercises, the joint network will also reduce the travel, transportation, and fuel costs of deploying and using the actual aircraft in the National Training Center/Air Warrior I exercises. Reserve component members have benefited from JNTC-enhanced training events, but the unique training needs of the reserve components have not been fully considered because the Joint Forces Command has not established an ongoing working relationship with them. Members of the reserve components have potentially benefited from JNTC-enhanced training when they participate in active service- and combatant command-sponsored combat training programs enhanced by the JNTC initiative, such as predeployment and mission rehearsal exercise programs. For example, based on our analysis of five training events enhanced by the JNTC initiative, reserve and guard units and individuals have participated to a limited extent in all but one of the five events. Specifically, Air National Guard personnel participated in a fiscal year 2005 Air Warrior I exercise, Army reservists participated in a fiscal year 2005 National Training Center exercise, Navy reservists participated in a fiscal year 2005 Fleet Synthetic Training—Joint exercise, and Marine Corps reservists participated in a fiscal year 2006 Weapons and Tactics Instructor Course exercise. Office of the Assistant Secretary of Defense, Reserve Affairs officials stated that reserve participation in many of these events occurred, in part, because active duty units were unavailable to fully participate and reserve units were asked to fill in. According to JNTC and service officials, reserves participating in these events may benefit from many of the same JNTC enhancements to the joint training environment as do active forces. 
To date, Joint Forces Command officials said they have relied on active service components and combatant commands to involve the reserve components in JNTC-enhanced training. In an effort to develop and manage active service and combatant command training programs, the Joint Forces Command has developed formal coordination mechanisms, including liaison officers, planning conferences, and process action teams that involve numerous participants from various organizations within the active service and combatant commands, but these coordination mechanisms do not include reserve personnel. For example, the Joint Forces Command has established on-site liaison officer positions to serve as the active service representative on a daily basis to communicate with the JNTC officials and aid in the development of the business and operational processes related to the JNTC initiative. Currently, all liaison officer positions include representatives from the active services with no representatives from the reserve components. According to Joint Forces Command officials and service liaison officers, these active service liaison officers primarily represent their respective active service components’ needs and issues and do not specifically communicate the needs of the reserve component to Joint Forces Command officials. Active services and combatant command personnel also regularly attend planning conferences to organize upcoming training exercises. These meetings occur periodically throughout the initial, middle, and final planning stages of an exercise, and to date, the Joint Forces Command has not reached out specifically to the reserve components to include them in these planning conferences. The Joint Forces Command has also established nine process action teams organized by functional areas in operations, technical, and program management to discuss JNTC implementation and development.
These process action teams perform a vast array of responsibilities, such as developing JNTC event requirements and timelines; defining required operational capabilities in order to fully coordinate live, virtual, and constructive opposition forces into joint training; defining technical goals for data systems that will enable joint training; selecting advanced training technologies to ensure integration of live, virtual, and constructive components into a seamless joint training environment; and developing all JNTC budget and program activities. According to Joint Forces Command officials, the reserve components are not formally invited to participate in these process action teams. DOD guidance regarding reserve components and joint training requires full integration of the reserve components into every aspect and each stage of the overall process in developing a joint training initiative. For example, the 2006 Quadrennial Defense Review Report specifically highlights the need for joint training to include the reserve components in ensuring the readiness of the total force. In addition, the Training Transformation Strategic Plan identifies that the reserve components face several unique training requirements and circumstances that must be considered at each step of this process, from strategic planning through implementation. Further, the 2005 training transformation block assessment calls for including the reserve components’ training in transformation training events. Specifically, the assessment states that the reserve components (1) should participate in training transformation events in order to integrate the reserve component with the active component and (2) may have special needs for training, and training events should be tailored to meet these needs.
During discussions with us, officials from the Office of the Assistant Secretary of Defense, Reserve Affairs, noted the following unique reserve training circumstances that should be considered when developing the JNTC enhancements:

Geography. Since members of the National Guard and reserves are often not physically located at their respective home duty stations, the scheduling of training is more complex.

Limited training time. Reservists are constrained to 39 training days per year. A reservist can exceed this limitation only by being activated or by volunteering.

Competing requirements. Reservists must complete training requirements similar to active component core training requirements, such as general military training and physical training, as well as satisfy any other reserve requirements. Reservists must also consider and manage their civilian careers along with their military obligations.

Limited training assets. Resources, such as classrooms and computer simulation systems and networks for joint training (such as those that enable live, virtual, and constructive participation), are not readily available to National Guard members and reservists.

Lack of training predictability. Since reserve components are currently not included in the scheduling of joint training events, planning for joint training opportunities is much more difficult and erratic.

Along with these unique training requirements, National Guard Bureau officials stated that some of the National Guard’s missions, such as homeland defense and responding to natural disasters, should be included as part of the JNTC initiative but currently are not. As a result of the absence of formal reserve component representation in the development of the JNTC initiative, the unique characteristics of the reserve components have not been incorporated into the initiative’s development of joint training requirements.
According to Joint Forces Command officials, the inclusion of unique reserve component training needs into the JNTC initiative is a long-term goal. To date, there has been no specific effort made by the Joint Forces Command to develop joint tasks or technical enhancements associated with the needs and missions of the reserve components. The JNTC initiative’s priority remains on active services and combatant commands, as the development of joint tasks and technical enhancements has been primarily for existing active service and combatant command training programs. According to Joint Forces Command officials, the process for the development of joint articles has involved the active services and combatant commands and focused on developing tasks for combat missions, such as close air support, joint force targeting, and joint fires. Although reserve members deploying to overseas operations are expected to perform these combat tasks as appropriate, Joint Forces Command officials have stated that the development of joint articles has not significantly focused on tasks unique to the reserve components, such as disaster relief and homeland defense. Further, the reserve components were not included in the team responsible for the development of joint articles. Additionally, the development of the Joint Training and Experimentation Network has established permanent capability throughout the continental United States at active service and combatant command facilities. The joint network has been coordinated with existing active training networks, such as the Navy’s Continuous Training Environment, according to Navy officials, and the Air Force’s Distributed Mission Operations Center. According to Office of the Assistant Secretary of Defense, Reserve Affairs officials, interfaces with reserve and guard networks have not yet occurred. The continued lack of focus on the joint training needs of the reserve components will limit their ability to enhance their joint training skills. 
The Joint Forces Command has begun to develop a process of accrediting training programs and joint tasks to facilitate the JNTC goals. However, the command has not (1) placed priority on accrediting training programs related to new and emerging missions, as highlighted in the most recent Quadrennial Defense Review Report; (2) taken steps to ensure that accredited joint training will continue to occur after initial accreditation; and (3) accredited any National Guard-specific training programs. In fiscal year 2005, the Joint Forces Command began a process of accrediting active services’ and combatant commands’ training programs on specific joint tasks, in an effort to facilitate the goals of the JNTC initiative. The intent of the accreditation process is to validate that the training programs can provide the training audience, regardless of location, with a recurring, consistent, realistic environment for the joint tasks being trained. An accreditation review is not an inspection or a report card, but can be compared to accrediting a university, where individual courses of instruction are officially approved. Initially, the JNTC initiative used an event-centric approach that focused on enhancing single designated training events. Starting in fiscal year 2005, the Joint Forces Command began employing a program-centric approach that focused on establishing permanent joint capabilities, which can be used for all rotations of active service and combatant command training programs. Previously, the event-centric approach only provided a limited number of soldiers, sailors, airmen, and marines with an opportunity to experience a JNTC-enhanced joint training event. Specifically, one rotation of the Navy’s Fleet Synthetic Training — Joint exercise would have been enhanced by the JNTC initiative, and the one event would have incorporated enhanced joint capabilities. 
However, in the program-centric approach, the number of training opportunities using JNTC enhancements significantly increases. Now, every rotation of the Fleet Synthetic Training — Joint exercise has the opportunity to include enhanced joint training. The accreditation process involves several steps, beginning with the nomination process and ending with the Joint Forces Command’s recommendation. The key steps of the accreditation process are summarized below:

The Joint Forces Command sends a message to the active services and combatant commands, requesting that they nominate training programs and joint tasks to be accredited.

Once the active services and combatant commands submit their training programs for nomination, the Joint Forces Command reviews and selects these programs, and consolidates and prioritizes a master schedule of the nominated programs, including the joint tasks to be performed by each program.

To familiarize the active services and combatant commands with the accreditation process, the Joint Forces Command’s Accreditation Review Team develops a Web site for each training program and provides training for the services and combatant commands.

The Joint Forces Command schedules site visits with cognizant active service and combatant command officials to perform its accreditation review.

The Joint Forces Command team conducts the review and makes a recommendation to the Commander, who grants the appropriate level of accreditation status to that training program on specific joint tasks in the final accreditation report.

Although the Joint Forces Command has begun its accreditation process to facilitate the JNTC goals, it has not emphasized nominating training programs that place a priority on new and emerging missions as stressed in the 2006 Quadrennial Defense Review Report.
These new and emerging mission areas include irregular warfare, complex stabilization operations, combating weapons of mass destruction, and information operations, which may require different skill sets than offensive combat operations, such as cultural awareness training and coordination with other agencies. In past nomination cycles, there was no guidance providing criteria for nominating training programs and joint tasks. In lieu of established nomination guidance, we found that the active services nominated training programs for a variety of reasons. For example, Army and Marine Corps officials told us they selected programs based on their need to enhance joint tasks for the maximum number of participants. The Navy nominated programs based on their ability to provide joint and coalition training. The Air Force nominated programs based on their perceived gains from adding jointness to the training environment. While there have been no specific nomination criteria, the Joint Forces Command has established criteria it uses for selecting programs once nominated. These criteria focus on (1) programs that address critical joint training issues that are affecting warfighting capabilities; (2) the mission of organizations that will receive joint training; (3) programs that provide predeployment training; and (4) joint throughput, or the number of multi-service and joint units that can be trained on required joint training. These criteria do not emphasize skill sets required for new and emerging mission areas. The Joint Forces Command is currently developing guidance for future use that will provide criteria for nominating programs.
These criteria ask active services and combatant commands to nominate programs that have the following traits: (1) primary training audience composed of units or staff; (2) established system for providing training feedback; (3) established training cadre and/or exercise control structure; and (4) realistic threat portrayal (i.e., opposing forces) within the training programs. Additionally, the draft guidance provides nomination criteria for accrediting the joint tasks within the program. These criteria require that the joint tasks (1) come from the Universal Joint Task List or the latest approved list of joint tasks, and (2) fall within the normal core competencies and normal training environment of the nominated training programs. Although the Joint Forces Command has proposed nomination guidance, its draft guidance still has not emphasized the need to accredit tasks within active service and combatant command training programs that will improve proficiency in new and emerging mission areas. Until DOD establishes such nomination guidance, new and emerging missions will not be given priority in the accreditation process and thus will not benefit from JNTC enhancements. By the end of fiscal year 2005, the Joint Forces Command had conditionally accredited joint tasks in 4 programs and plans to grant accreditation to joint tasks in as many as 23 additional programs by the end of 2006. Most of these training programs focus primarily on traditional combat missions. For example, the Navy’s Fleet Synthetic Training — Joint program has been conditionally accredited on seven joint tasks, including developing and sharing intelligence, conducting joint fires, conducting air and missile defense operations, and conducting defensive counter air operations. Additionally, the Joint Forces Command anticipates that the active services and combatant commands will nominate 3 or 4 additional programs for accreditation in 2007.
Table 3 shows the total nominated programs, including the 4 programs conditionally accredited in fiscal year 2005 and the 23 programs planned to be accredited for 2006. The Joint Forces Command has not taken steps to ensure that accredited joint training will consistently recur in active service and combatant command training programs. As previously noted, the intent of the accreditation process is to ensure that all units participating in joint training, regardless of location, experience a recurring, consistent, realistic joint environment. In addition, DOD has directed the services to conduct joint training to the maximum extent possible in accredited exercises. In fiscal year 2005, the Joint Forces Command began transitioning its JNTC initiative from an event-centric approach to a broader program-centric approach, focusing on establishing permanent joint capabilities that can be used for all rotations of training events, not just a single designated training event. However, the Joint Forces Command has not taken steps to ensure that previously accredited joint tasks will consistently be incorporated in future service and combatant command training events. According to DOD officials, the services and combatant commands should participate in the accreditation process in order to obtain JNTC funding for their nominated training programs. However, according to a Joint Forces Command official, the command cannot require the services and combatant commands to train to the joint tasks that have been accredited. Service officials we spoke with stated that there are currently no consequences if they do not continue to include accredited joint tasks in future training rotations. While service officials recognized the value of training to accredited joint tasks, they also recognized that competing demands for their time and resources may preclude them from training to joint tasks.
Situations that compete for their time and resources include service-specific unit training requirements, shortages of training funds, and the deployment of personnel and equipment to overseas operations. While the Joint Forces Command provides financial contributions to the services to help offset the costs associated with incorporating the JNTC enhancements, it is not clear whether the JNTC initiative’s financial contributions are significant enough to function as leverage to encourage the repeated training of accredited joint tasks. For example, an Army official stated that the Army has budgeted $640 million to support its combat training centers in fiscal year 2006, and that the Joint Forces Command’s support for the Army’s combat training centers amounts to $11.6 million. The Joint Forces Command is taking a proactive step to help support the active services and combatant commands in embedding JNTC enhancements in their training programs. It is hiring Support Elements—JNTC representatives placed permanently at service and combatant command training programs—to help ensure that program officials implement the JNTC initiative by creating a supporting relationship between organizations. Additionally, the Support Elements are to assist program officials with joint training planning and execution at their locations and ensure that standards are maintained in accreditation reviews. However, according to JNTC officials, these individuals alone may not be able to ensure that accredited joint training will continue to occur. Furthermore, it is too early to determine whether the services will continue to include joint tasks on a regular basis, since the Joint Forces Command only began the accreditation process in 2005 and only recently established the positions to be filled by Support Element representatives.
The Joint Forces Command plans to reaccredit training programs every 3 years but has not established criteria for its reaccreditation process that would ensure that the services and combatant commands continue to incorporate and expand on previously accredited joint tasks. According to the JNTC Accreditation Concept of Operations, a reaccreditation process will be used to reaffirm accredited status upon expiration (following 3 years) or to determine the status of a training program that has undergone such significant change that the existing program is considerably different from the program that last received accreditation status. However, this concept of operations does not address what standard of training must be accomplished or what level of accredited tasks must be trained to receive reaccreditation. Without adequate reaccreditation guidance, the Joint Forces Command risks not accomplishing the intent of JNTC’s accreditation efforts. Moreover, until DOD establishes standards for reaccrediting training programs that ensure the consistent incorporation of JNTC enhancements in future training rotations, DOD risks not maximizing its investment in the JNTC initiative. DOD encourages the integration of the reserve components into joint training. Specifically, the 2006 Quadrennial Defense Review Report reinforces the need for joint training to include the reserve components in ensuring the readiness of the total force, and a DOD directive on military training says that to the maximum extent possible, all components shall conduct joint training in accredited events. Our analysis found that the National Guard has developed joint training exercise programs dealing with missions involving homeland defense and security. However, no National Guard training programs are currently being considered for JNTC accreditation.
Joint Forces Command officials stated they have not placed a priority on involving the National Guard in the JNTC accreditation process, and incorporating the National Guard into the JNTC initiative is still a long-term goal for the Joint Forces Command. The Joint Forces Command has not sent request messages seeking nominations for joint training accreditation to the National Guard as it has done for the active services and combatant commands. In addition, we found that the Joint Forces Command has not established a process for nominating and accrediting National Guard-specific training programs. The National Guard Bureau has approached the Joint Forces Command about considering the Vigilant Guard training program—a series of training exercises that will further enhance the preparedness of the National Guard to perform roles and responsibilities related to homeland defense and defense support to civil authorities—for the JNTC accreditation process. The training program involves 4 to 6 states per event with a focus on the training and coordination of the newly established state joint force headquarters and state joint task forces. Vigilant Guard provides the National Guard the opportunity to execute core joint tasks, such as (1) acquire and communicate operational-level information and maintain status; (2) establish, organize, and operate a joint force headquarters; and (3) provide theater support to other DOD and government agencies. However, National Guard officials stated that Vigilant Guard has not yet been considered for accreditation by the Joint Forces Command. National Guard Bureau officials have also recently discussed with the Joint Forces Command officials the potential for linking the National Guard’s GuardNet network to JNTC’s joint network. GuardNet is a network for delivering telecommunications services to National Guard users in 54 U.S. states and territories, providing persistent connectivity. 
It consolidates video and data functions to support simulation, training, mobilization command and control, and computer emergency response, in addition to operational missions assigned to the National Guard. These telecommunications capabilities have helped to reduce stress on the National Guard force by decreasing personnel travel and increasing the home station time available for training. National Guard officials stated that, to date, GuardNet has not been integrated into JNTC’s joint network design. Although Joint Forces Command and National Guard officials have met regarding the inclusion of both Vigilant Guard and GuardNet in the JNTC initiative, National Guard Bureau officials stated that no action has yet been taken. Without specific JNTC-accredited training programs and linkages with JNTC’s joint network, National Guard training programs may not be able to take full advantage of JNTC resources, such as participation from other components, access to new technologies and modeling, and training environments that realistically portray overseas and domestic joint operations. In the new security environment, U.S. forces are conducting significantly more complex operations requiring increased joint interoperability among participants in the theater and on the battlefield. DOD’s JNTC initiative is designed to help the services and combatant commands meet these challenges. Without thoroughly assessing the progress of the Joint Forces Command’s training transformation efforts, DOD does not know the value added to the readiness of the services and combatant commands resulting from the significant investment of resources devoted to the JNTC initiative. Furthermore, recent domestic events and ongoing overseas operations have placed extremely high demands on the reserve components, which play a critical role in executing our national defense strategy.
Once mobilized, reservists and National Guard members operate in the same joint environment as active service members. Unless the reserve components receive the training necessary to allow them to operate seamlessly in this environment, reservists may be unprepared to face the full range of responsibilities they are called upon to perform both at home and abroad. Until the Joint Forces Command embraces the reserve components, incorporating their unique training needs into the development of the JNTC initiative’s joint training enhancements, the reserve and the National Guard forces will not be able to take full advantage of the enhanced joint training offered through this initiative. Additionally, without clear criteria to guide the accreditation and reaccreditation process, DOD will have no assurance that the joint training initiative reflects DOD’s training priorities on new and emerging threats or that the services and combatant commands will continually take advantage of the resources and capabilities provided by the JNTC initiative. Without consistently training its forces in a recurring, realistic, joint operating environment, DOD will lack assurance that forces deployed to its theaters will have the necessary skills to operate effectively in today’s complex, multinational, interagency operations. Also, without incorporating the National Guard into the accreditation process, DOD has no assurance that the National Guard will experience realistic overseas and domestic joint operational training environments portrayed by JNTC enhancements. Furthermore, DOD needs to address the issues highlighted above in order to ensure that the joint training benefits from its $1.5 billion investment in the JNTC initiative are being optimized. 
To further enhance the quality of joint training and to increase the benefits of the JNTC initiative for the reserve components, we recommend that the Secretary of Defense take the following five actions: direct the Under Secretary of Defense for Personnel and Readiness to fully develop a strategy for the next training transformation assessment to evaluate the overall impact of the JNTC initiative’s implementation on joint training, including time frames, outcome-oriented performance metrics, roles and responsibilities, and outcomes; direct the Joint Forces Command to establish liaison officers for the reserve components and include representatives from the reserve components as active participants in JNTC working groups and planning sessions; direct the Under Secretary of Defense for Personnel and Readiness to establish guidelines for the services and combatant commands to follow when nominating programs for future accreditation that reflect the importance of new and emerging missions, as emphasized by DOD’s 2006 Quadrennial Defense Review Report; direct the Under Secretary of Defense for Personnel and Readiness to establish reaccreditation standards and criteria that will ensure that a recurring, consistent, realistic joint training environment exists for all units participating in accredited joint training programs; and direct the Under Secretary of Defense for Personnel and Readiness to expand the accreditation process to include National Guard training programs. In written comments on a draft of this report, DOD agreed with four recommendations and partially concurred with one recommendation to establish reserve liaisons. DOD’s comments are reprinted in appendix II. Specifically, DOD agreed with our recommendation that the department develop a strategy for evaluating the overall impact of the JNTC initiative as part of its 2007 training transformation assessment. 
DOD stated that it is in the process of developing a plan for its 2007 assessment that will include detailed metrics and roles and responsibilities and will address the impact of transformation initiatives on DOD-wide training. DOD also agreed with our recommendations to (1) establish guidelines that emphasize the need for the services and combatant commands to consider new and emerging issues when nominating programs for accreditation, (2) establish reaccreditation standards and criteria, and (3) expand the accreditation process to include National Guard training programs. DOD stated that its accreditation guidance will be refined to include consideration of new and emerging missions during the next phase of accreditation reviews. Additionally, DOD stated that the Joint Forces Command will ensure that the accreditation concept of operations is strengthened to include specific reaccreditation standards. Further, DOD stated it will add National Guard training programs with the appropriate joint environment to the accreditation nomination list. Moreover, it noted that the JNTC Joint Management Office is actively discussing this action with National Guard leadership to develop a plan for inclusion of National Guard joint training programs. Finally, DOD partially agreed with our recommendation that the Joint Forces Command establish liaison officers for the reserve components and include reserve component representatives as participants in JNTC working groups and planning sessions. The department agreed it should establish liaison officers for the National Guard and include Guard representatives as participants in JNTC working groups and planning sessions. However, DOD stated that the joint training requirements of the other reserve components are adequately addressed through the current service liaison officer structure within JNTC and the assigned reserve Joint Warfighting Center. 
DOD’s approach would require that the Army, Air Force, Navy, and Marine Corps reserves continue to voice their training needs indirectly through their respective service headquarters rather than through direct participation. However, as discussed in this report, Training Transformation documents and officials from the Assistant Secretary of Defense’s Office of Reserve Affairs have recognized that the reserve components have some unique training requirements and that these requirements have yet to receive priority in the Joint Forces Command’s JNTC initiative. We continue to believe that all reserve components would benefit if the Joint Forces Command established liaison officers for both the National Guard and the service reserve components and included them as active participants in JNTC working groups and planning sessions, allowing them to voice their unique training needs and enhance their awareness of new developments and opportunities in joint training. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, and the Commander of the U.S. Joint Forces Command. We will make copies available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions regarding this report, please contact me at (202) 512-4402 or stlaurentj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine the extent to which the Joint National Training Capability (JNTC) initiative has improved the ability of the services and combatant commands to train jointly, we analyzed 5 of the 16 training exercises selected by the Joint Forces Command to incorporate JNTC enhancements in fiscal year 2005.
We reviewed relevant exercise planning documents pertaining to the exercises and JNTC enhancements. We also discussed the impact of the JNTC initiative with a variety of officials in the Office of the Secretary of Defense, service headquarters, combatant commands, and the Joint Chiefs of Staff who were involved in this effort. Additionally, we discussed our methods, the attributes to be assessed, and the information collected with agency officials and determined that they were sufficiently reliable for our purposes. Results from nonprobability samples cannot be used to make inferences about a population, because some elements of the population have no chance of being selected. Specifically, we did the following: To select the five exercises, we reviewed the descriptions and training profiles provided by JNTC initiative officials and, in consultation with these officials, identified one event selected by the Joint Forces Command to be enhanced by the JNTC initiative from each of the military services and one sponsored by a combatant command. To analyze the exercises, we identified attributes that would allow us to quantitatively discern the differences in the selected exercises before and after their JNTC designation. We then developed a detailed data collection instrument to gather attribute information precisely and consistently for comparative analysis. Our analysis of these attributes allowed us to determine the measure of change in each selected fiscal year 2005 training exercise before and after its JNTC designation. We obtained and reviewed exercise documentation, such as exercise planning documents and after-action reports, for the selected exercises to determine the measure of change in the exercises based on our attributes.
To augment our documentation review of the JNTC initiative’s impact on existing service and combatant command exercises, we met with service, combatant command, and JNTC officials to discuss their perspectives on the overall value added to joint training by the JNTC initiative. We then visited and observed 2 of the 5 exercises to obtain a real-time assessment of the past and planned evolution of the exercises and feedback from exercise participants, including the planners. To determine whether the Department of Defense (DOD) had assessed the full impact of the JNTC effort on joint training through its first training transformation assessment, we reviewed and analyzed key DOD and JNTC documents, including the Office of the Secretary of Defense’s 2006 revised Training Transformation Implementation Plan, the 2005 Training Transformation Assessment Report, and the JNTC initiative’s strategic plan. Additionally, we met with Office of the Secretary of Defense officials directly involved in conducting the training transformation assessment to discuss the methodology for the current assessment and plans for future assessments. To determine the extent to which the reserve components are benefiting from the JNTC initiative, we obtained and analyzed key DOD and JNTC documentation, including the Office of the Secretary of Defense’s 2006 revised Training Transformation Implementation Plan, the 2006 Quadrennial Defense Review Report, and the JNTC strategic and implementation plans, to identify program guidance on the inclusion of the reserve components in training transformation initiatives and assess the level of coordination established between the JNTC initiative and the reserve components. We also examined the extent to which the reserve components participated in JNTC current events and formal collaboration mechanisms to further evaluate the effectiveness of the program to benefit the reserve components.
Additionally, we conducted interviews with key reserve, National Guard, Office of the Secretary of Defense, service, and JNTC representatives to discuss the overall impact of the JNTC initiative on the reserve components. To determine the extent to which the Joint Forces Command has developed an accreditation process that facilitates program goals, we obtained and reviewed key accreditation documentation, such as the Accreditation Concept of Operations, JNTC accreditation program briefing slides, the draft accreditation handbook, and DOD’s 2006 Quadrennial Defense Review Report. We also reviewed and analyzed key DOD and JNTC documents, including the Office of the Secretary of Defense’s 2006 revised Training Transformation Implementation Plan and the JNTC strategic plan, to identify program guidance and critical milestones. Additionally, we reviewed selected training programs’ JNTC accreditation reports. To augment our documentation review, we met with service, combatant command, and JNTC officials to discuss the status and intent of the accreditation process. Specifically, we inquired about the status of the accreditation effort, the nomination process, and the reaccreditation process. We also examined the extent to which the reserve components participated in the JNTC initiative’s accreditation process. Table 4 lists the organizations and locations we visited during the course of this review. We performed this review from August 2005 through May 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Laura Durland, Assistant Director; Fred Harrison; Joe Faley; Bonita Anderson; Angela Watson; Yong Song; Kevin Keith; Susan Ditto; and Rebecca Shea also made major contributions to this report. Defense Acquisitions: DOD Management Approach and Processes Not Well-Suited to Support Development of Global Information Grid. GAO-06-211. Washington, D.C.: January 30, 2006.
Military Training: Funding Requests for Joint Urban Operations Training and Facilities Should Be Based on Sound Strategy and Requirements. GAO-06-193. Washington, D.C.: December 8, 2005. Reserve Forces: Army National Guard's Role, Organization, and Equipment Need to be Reexamined. GAO-06-170T. Washington, D.C.: October 20, 2005. Reserve Forces: An Integrated Plan Is Needed to Address Army Reserve Personnel and Equipment Shortages. GAO-05-660. Washington, D.C.: July 12, 2005. Military Training: Actions Needed to Enhance DOD’s Program to Transform Joint Training. GAO-05-548. Washington, D.C.: June 21, 2005. Military Transformation: Clear Leadership, Accountability, and Management Tools Are Needed to Enhance DOD’s Efforts to Transform Military Capabilities. GAO-05-70. Washington, D.C.: December 17, 2004. Chemical and Biological Defense: Army and Marine Corps Need to Establish Minimum Training Tasks and Improve Reporting for Combat Training Centers. GAO-05-8. Washington, D.C.: January 28, 2005. Military Education: DOD Needs to Develop Performance Goals and Metrics for Advanced Distributed Learning in Professional Military Education. GAO-04-873. Washington, D.C.: July 30, 2004. Reserve Forces: Observations on Recent National Guard Use in Overseas and Homeland Missions and Future Challenges. GAO-04-670T. Washington, D.C.: April 29, 2004. Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-04-546G. Washington, D.C.: March 2004. Military Training: Strategic Planning and Distributive Learning Could Benefit the Special Operations Forces Foreign Language Program. GAO-03-1026. Washington, D.C.: September 30, 2003. Military Readiness: Lingering Training and Equipment Issues Hamper Air Support of Ground Forces. GAO-03-505. Washington, D.C.: May 2, 2003. Military Transformation: Progress and Challenges for DOD's Advanced Distributed Learning Programs. GAO-03-393. Washington, D.C.: February 28, 2003. 
Military Transformation: Actions Needed to Better Manage DOD's Joint Experimentation Program. GAO-02-856. Washington, D.C.: August 29, 2002. Military Training: Limitations Exist Overseas but Are Not Reflected in Readiness Reporting. GAO-02-525. Washington, D.C.: April 30, 2002. Defense Budget: Need to Better Inform Congress on Funding for Army Division Training. GAO-01-902. Washington, D.C.: July 5, 2001. Chemical and Biological Defense: Units Better Equipped, but Training and Readiness Reporting Problems Remain. GAO-01-27. Washington, D.C.: November 14, 2000. Force Structure: Army Is Integrating Active and Reserve Combat Forces, but Challenges Remain. GAO/NSIAD-00-162. Washington, D.C.: July 18, 2000. Army National Guard: Enhanced Brigade Readiness Improved but Personnel and Workload Are Problems. GAO/NSIAD-00-114. Washington, D.C.: June 14, 2000. The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”
The Department of Defense (DOD) established its Training Transformation Program to assure combatant commanders that forces deploying to their theaters have had experience operating jointly. The centerpiece of this effort is the Joint National Training Capability (JNTC) initiative, which accounts for 84 percent of the $2 billion the department plans to invest by 2011 to provide a persistent global network that will increase the level of joint training. GAO assessed the extent to which (1) JNTC has improved the ability of the services and combatant commands to train jointly, (2) the reserve components are benefiting from the JNTC initiative, and (3) the Joint Forces Command has developed an accreditation process to facilitate program goals. To address these objectives, GAO obtained and analyzed key DOD and JNTC documents. GAO also reviewed and analyzed 5 of 16 events selected in 2005 as JNTC training events, and observed 2 of those events firsthand. The extent to which the JNTC initiative is improving joint training overall is unclear because DOD has not yet assessed the program's results; however, GAO's review of five JNTC-enhanced training events found indications of some joint training improvements. Prior GAO work and the 2006 Quadrennial Defense Review Report have stressed the importance of performance metrics to gauge program success. While DOD's initial training transformation assessment set a basic framework for measuring future program performance, DOD has not developed a strategy to evaluate the overall impact of the JNTC initiative that includes metrics, time frames, and processes for gathering data. Without such a plan, DOD will not know whether the money invested in the initiative will produce desired results or maximize the benefit for the investment.
Reserve units have participated in JNTC training events, but the unique training needs of the reserve components have not been fully considered because Joint Forces Command has not established an ongoing working relationship with them. The Training Transformation Strategic Plan recognizes that the reserve components face unique training requirements and circumstances that must be considered. However, the command has not established a liaison position for any of the reserve components and has not included the reserve components in working groups and planning sessions, as it has done with the active service components and the combatant commands. Until the command incorporates the reserves more fully into the JNTC initiative, the reserve components will continue to have limited ability to enhance their joint training skills. The Joint Forces Command has begun to develop an accreditation process to facilitate the JNTC initiative's goals, but it has not emphasized new and emerging missions, taken steps to ensure that accredited joint tasks will continue in future training rotations, or incorporated the National Guard. The 2006 Quadrennial Defense Review Report declares that training transformation should emphasize new and emerging mission areas, such as irregular warfare and combating weapons of mass destruction. The Joint Forces Command has allowed services and combatant commands to nominate existing training programs to be accredited, but these programs may not reflect the priorities established in the Quadrennial Defense Review Report because nomination guidance does not emphasize the need to accredit programs that will improve proficiency in new and emerging mission areas. Further, no training programs specific to the National Guard are currently being considered for accreditation. 
Until the department establishes nomination guidance and reaccreditation standards and includes the National Guard in the accreditation process, JNTC events may not reflect DOD's training priorities, the services may not continually incorporate JNTC enhancements into their training exercises, and the National Guard will continue to have limited ability to enhance its joint training skills.